

Article by Claire Brown: “Zillow, the country’s largest real estate listings site, has quietly removed a feature that showed the risks from extreme weather for more than one million home sale listings on its site.

The website began publishing climate risk ratings last year using data from the risk-modeling company First Street. The scores aimed to quantify each home’s risk from floods, wildfires, wind, extreme heat and poor air quality.

But real estate agents complained they hurt sales. Some homeowners protested the scores and found there was no way to challenge the ratings.

Earlier this month Zillow stopped displaying the scores after complaints from the California Regional Multiple Listing Service, which operates a private database funded by real estate brokers and agents. Zillow relies on that listing service and others around the country for its real estate data. The California listing service, one of the largest in the country, raised concerns about the accuracy of First Street’s flood risk models. “Displaying the probability of a specific home flooding this year or within the next five years can have a significant impact on the perceived desirability of that property,” said Art Carter, California Regional Multiple Listing Service’s chief executive officer.

In a statement, Zillow spokeswoman Claire Carroll said the company remains committed to providing consumers with information that helps them make informed decisions. Real estate listings on Zillow now display hyperlinks to First Street’s website, and users can click through to view climate risk scores for a specific property.

The development highlights a growing tension within the real estate industry. Fires, floods and other disasters are posing more risks to homes as the planet warms, but forecasting exactly which houses are most vulnerable — and might sell for less — has proved fraught.

First Street models have shown that millions more properties are at risk of flooding than government estimates suggest.

Other real estate sites, including Redfin, Realtor.com and Homes.com, display similar First Street data alongside ratings for factors like walkability, public transportation and school quality…(More)”.

Zillow Removes Climate Risk Scores From Home Listings

New Public: “Last June, our Co-Director Eli Pariser laid out a bold new direction for New_ Public in this newsletter: “You may not think of this as social media, but most towns in America have some sort of general-purpose, locals-only digital forum: a Facebook group, a Google group, a Nextdoor neighborhood. These groups mostly run below the radar — they seem quotidian, maybe even boring. But according to Pew, “about half of US adults say they get their local news from online groups or forums,” more even than from newspapers. If they were strengthened into resilient, flourishing spaces, they could be crucial to reinforcing American democracy. That’s why we’re going to make them a major focus of our work at New_ Public.”

Since then, we’ve deeply explored local digital spaces, including Front Porch Forum’s inspiring example of what’s possible, and grim reminders of how the status quo is not sufficient with platforms like Nextdoor….

The main thing to know, maybe the most important thing, is that this is not just another social media app. Roundabout is a community space, built from the ground up with community leaders and neighbors.

We’re so excited to share more about Roundabout in the coming months, and we’re determined to grow and learn in public, along with you. Here’s a preview:

  • Roundabout is for building real relationships with people who actually live, work, and play in your community through trusted information sharing, genuine conversations, and mutual support, not posts competing for likes and virality.
  • Users can quickly find what they’re looking for via content that’s organized into topic-based channels, with a separate calendar to find events and a Guides section to find more evergreen resources.
  • Local stewards are supported and empowered. Each community will have its own vibes and culture.

The big platforms, some several decades old now, were built for profit — every decision, every design, optimized for uncontrolled growth and extraction. We’re doing something different.

As a project incubated within New_ Public, a nonprofit, Roundabout will grow incrementally, sustained by a diverse and balanced set of revenue sources. With business incentives aligned towards utility and everyday value, instead of engagement and relentless scale, we’re designing Roundabout to be shielded from the cycle of enshittification. The ultimate goal is to build for social trust — every decision, every design, optimized to build bonds and increase belonging…(More)”.

Introducing Roundabout: built for neighbors, with neighbors

Review by James Gleick: “It’s hard to remember—impossible, if you’re under thirty—but there was an Internet before there was a World Wide Web. Experts at their computer keyboards (phones were something else entirely) chatted and emailed and used Unix protocols called “finger” and “gopher” to probe the darkness for pearls of information. Around 1992 people started talking about an “Information Superhighway,” in part because of a national program championed by then senator Al Gore to link computer networks in universities, government, and industry. A highway was different from a web, though. It took time for everyone to catch on.

The Internet was a messy joint effort, but the web had a single inventor: Tim Berners-Lee, a computer programmer at the CERN particle physics laboratory in Geneva. His big idea boiled down to a single word: links. He thought he could organize a free-form world of information by prioritizing interconnections between documents—which could be text, pictures, sound, or anything at all. Suddenly it seemed everyone was talking about webpages and web browsers; people turned away from their television sets and discovered the thrills of surfing the web.

It’s also hard to remember the idealism and ebullience of those days. The world online promised to empower individuals and unleash a wave of creativity. Excitement came in two main varieties. One was a sense of new riches—an abundance, a cornucopia of information goodies. The Library of Congress was “going online” and so was the Louvre. “Click the mouse,” urged the New York Times technology reporter John Markoff:

There’s a NASA weather movie taken from a satellite high over the Pacific Ocean. A few more clicks, and one is reading a speech by President Clinton, as digitally stored at the University of Missouri. Click-click: a sampler of digital music recordings as compiled by MTV. Click again, et voila: a small digital snapshot reveals whether a certain coffee pot in a computer science laboratory at Cambridge University in England is empty or full…(More)”

How the Web Was Lost

The Economist: “A good cover letter marries an applicant’s CV to the demands of the job. It helps employers identify promising candidates, particularly those with an employment history that is orthogonal to their career ambitions. And it serves as a form of signalling, demonstrating that the applicant cares enough about the position to go through a laborious process, rather than simply scrawling their desired salary at the top of a résumé and mass-mailing it to every business in the area.

Or, at least, it used to. The rise of large language models has changed the dynamic. Jobseekers can now produce a perfectly targeted cover letter, touching on all an advertisement’s stated requirements, at the touch of a button. Anyone and everyone can present themselves as a careful, diligent applicant, and do so hundreds of times a day. A new paper by Anaïs Galdin of Dartmouth College and Jesse Silbert of Princeton University uses data from Freelancer.com, a jobs-listing site, to work out what this means for the labour market.


Comparing pre- and post-ChatGPT activity, two results stand out. The first is that cover letters have lengthened. In the pre-LLM era, the median one was 79 words long. (Since Freelancer.com attracts workers for one-off tasks, such letters are more to-the-point than those for full-time roles.) A few years later, post-ChatGPT, the median had risen to 104 words. In 2023 the site introduced its own AI tool, allowing users to craft a proposal without even having to leave the platform. The subset of applications written using the tool—the only ones that can be definitively labelled as AI-generated—are longer still, with a median length of 159 words, more than twice the human-written baseline…(More)”.
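To make the comparison concrete, here is a minimal sketch of the median word-count calculation the paper describes. The samples below are invented placeholders, chosen only so their medians match the figures quoted above; the real analysis uses the Freelancer.com corpus.

```python
from statistics import median

# Hypothetical word counts for three groups of cover letters; a real
# analysis would use the Freelancer.com data described in the paper.
pre_llm = [62, 71, 79, 85, 110]      # pre-ChatGPT applications, median 79
post_llm = [88, 97, 104, 121, 150]   # post-ChatGPT applications, median 104
ai_tool = [140, 155, 159, 170, 201]  # written with the site's AI tool, median 159

baseline = median(pre_llm)
for label, lengths in [("pre-LLM", pre_llm),
                       ("post-LLM", post_llm),
                       ("AI tool", ai_tool)]:
    m = median(lengths)
    print(f"{label}: median {m} words ({m / baseline:.2f}x the human-written baseline)")
```

Run as written, the output bears out the "more than twice" claim: 159/79 is roughly 2.01.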

How AI is breaking cover letters

Paper by Maria Carmen Lemos et al: “Impacts of climate change, such as flooding, drought, and fires, are already affecting millions of people worldwide. To mitigate and adapt to these impacts, we need climate information that: 1) is usable and used by decision-makers, and 2) is disseminated rapidly and widely, that is, information that can be scaled up. We propose three ways to accelerate usable climate knowledge through the collaboration between scientists and potential users: 1) increasing the number and diversity of people cocreating climate information that they trust and can use, 2) disseminating climate information that can be widely available to many decision-makers (e.g., through the internet), and 3) collaborating with decision-makers that make decisions that affect the public (e.g., water managers, city planners)…(More)”.

Scaling up actionable climate knowledge

Article by Natasha Joshi: “…But what do we stand to lose when we privilege data science over human understanding?

C Thi Nguyen explains this through ‘value capture’. It is the process by which “our deepest values get captured by institutional metrics and then become diluted or twisted as a result. Academics aim at citation rates instead of real understanding; journalists aim for numbers of clicks instead of newsworthiness. In value capture, we outsource our values to large-scale institutions. Then all these impersonal, decontextualizing, de-expertizing filters get imported into our core values. And once we internalize those impersonalized values as our own, we won’t even notice what we’re overlooking.”

One such thing being overlooked is care.

Interpersonal caregiving makes no sense from a market lens. The person with power and resources voluntarily expends them to further another person’s well-being and goals. The whole idea of care is oceanic and hard to wrap one’s head around. ‘Head’ being the operative word, because we are trying to understand care with our brains, when it really exists in our bodies and is often performed by our bodies.

Data tools have only inferior ways of measuring care, and by extension designing spaces and society for it.

Outside of specific, entangled relationships of care, humans also have an amorphous ability to feel that they are part of a larger whole. We are affiliated to humanity, the planet, and indeed the universe, and feel it in our bones rather than know it to be true in any objective way.

We see micro-entrepreneurs, inventors, climate stewards, and scores of people, both rich and poor, across circumstances who engage in collective care to make the world a better place. This kind of pro-sociality doesn’t always show in ways that are tangible or immediate or measurable.

Datavism, which we seem to have learned from the bazaar, has convinced capital allocators that the impact of social programmes can and should be expressed arithmetically. And, based on those calculations, acts of care can be deemed successful or unsuccessful…(More)”.

Is data failing us?

WEF Paper: “AI is transforming strategic foresight, the field where experts explore plausible futures and develop strategies to help organizations, governments and others prepare for events to come. This paper from the World Economic Forum, in collaboration with the OECD, explores how AI is reshaping this field.

Drawing on insights from 167 foresight experts in 55 countries, it reveals that experts value AI primarily for saving time and streamlining their work by handling repetitive and labour-intensive tasks. However, many respondents express concerns about the quality and trustworthiness of AI-generated content, noting its proclivity for hallucination and its limited capacity for inductive reasoning: it draws from existing knowledge and struggles to embrace the forward-looking perspectives needed for strategic foresight. The paper suggests how to take advantage of AI where it can augment foresight, while limiting its pitfalls…(More)”.

AI in Strategic Foresight: Reshaping Anticipatory Governance

Book by Ryan Calo: “Technology exerts a profound influence on contemporary society, shaping not just the tools we use but the environments in which we live. Law, uniquely among social forces, is positioned to guide and constrain the social fact of technology in the service of human flourishing. Yet, technology has proven disorienting to law: it presents itself as inevitable, makes a shell game of human responsibility, and daunts regulation. Drawing lessons from communities that critically assess emerging technologies, this book challenges the reflexive acceptance of innovation and critiques the widespread belief that technology is inevitable or ungovernable. It calls for a methodical, coherent approach to the legal analysis of technology—one capable of resisting technology’s disorienting qualities—thus equipping law to meet the demands of an increasingly technology-mediated world while helping to unify the field of law and technology itself…(More)”.

Law and Technology: A Methodical Approach

Article by Matt Prewitt: “…Markets have always required some form of protectionist intervention — like intellectual property law — to help foster innovation. In recent years, startups have innovated because of a rich symbiosis with tech giants and their monopoly-seeking investors. Startups are indeed hungry, but their hunger is not to serve consumer needs or the national interest; it is to join the happy ranks of the monopolists. The nature of technological innovation is that competitive markets, without being “managed,” do not inspire it.

Today, this may sound bizarre, heterodox and jarring. But it was once fairly mainstream opinion. In the middle part of the 20th century, many of America’s most celebrated economic minds taught that competitive markets cause technological progress to stagnate. During the neoliberal era that followed, from the 1980s to the 2010s, this idea was largely forgotten and pushed to the margins of politics and academia. But it never lost its kernel of truth.

Where to from here? Both sides of the battered American center must first face their mistakes. This will be painful, not only because their errors have resulted in profound and long-term mis-governance, but also because their blind spots are deeply entangled with old, hard-to-kick ideological habits. The center-left’s sunny techno-utopianism traces its roots back to the rationalism of the French Revolution, via Karl Marx and early 20th-century progressives. The center-right’s fervent market fundamentalism is equally a relic of bygone eras, reflecting the thought of Friedrich Hayek and Milton Friedman — idea-warriors who pitched competitive markets as a cure-all largely to one-up the utopian promises of their techno-optimistic progressive foes. Thus, today, center-right and center-left thinking both feel like artifacts from yesterday’s ideological trenches. A new form of centrism that wants to speak to 2026 would need to thoroughly clear the decks…(More)”

The Progress Paradox

Article by Sarah Perez: “In a blog post, the Wikimedia Foundation, the organization that runs the popular online encyclopedia, called on AI developers to use its content “responsibly” by ensuring its contributions are properly attributed and that content is accessed through its paid product, the Wikimedia Enterprise platform.

The opt-in, paid product allows companies to use Wikipedia’s content at scale without “severely taxing Wikipedia’s servers,” the Wikimedia Foundation blog post explains. In addition, the product’s paid nature allows AI companies to support the organization’s nonprofit mission.

While the post doesn’t go so far as to threaten penalties or any sort of legal action for use of its material through scraping, Wikipedia recently noted that AI bots had been scraping its website while trying to appear human. After updating its bot-detection systems, the organization found that its unusually high traffic in May and June had come from AI bots that were trying to “evade detection.” Meanwhile, it said that “human page views” had declined 8% year-over-year.

Now Wikipedia is laying out its guidelines for AI developers and providers, saying that generative AI developers should provide attribution to give credit to the human contributors whose content it uses to create its outputs…(More)”.
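The access pattern Wikipedia is asking for is easy to sketch: identify your client honestly and attribute the human contributors, rather than scraping while trying to appear human. Below is a minimal example against the public Wikimedia REST API (Wikimedia Enterprise is the paid, high-volume route the Foundation recommends); the bot name and contact address are placeholders.

```python
import requests

# Identify the client honestly instead of mimicking a browser; Wikimedia's
# User-Agent policy asks for a descriptive agent string with contact info.
HEADERS = {"User-Agent": "ExampleResearchBot/0.1 (contact@example.org)"}

def fetch_summary(title: str) -> dict:
    """Fetch a page summary from the public Wikimedia REST API."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

page = fetch_summary("Web_scraping")
print(page["extract"])

# Attribute the human contributors and link back to the source, as the
# Foundation's guidelines for AI developers request.
print(f'Source: "{page["title"]}", Wikipedia (CC BY-SA 4.0), '
      f'{page["content_urls"]["desktop"]["page"]}')
```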

Wikipedia urges AI companies to use its paid API, and stop scraping
