Stefaan Verhulst

Article by Wycliffe Muia: “Kenya has signed a historic five-year health agreement with the US, the first such pact since Donald Trump’s administration overhauled its foreign aid programme.

The $2.5bn (£1.9bn) deal is aimed at combating infectious diseases in Kenya, with similar agreements expected to be rolled out in other African countries aligned with Trump’s broader foreign policy goals.

The government-to-government deal aims to boost transparency and accountability but has raised fears it could give the US real-time access to critical health databases, including sensitive patient information.

Kenya’s Health Minister Aden Duale sought to allay such fears, saying “only de-identified, aggregated data” would be shared…However, some Kenyans are demanding the disclosure of the full agreement, with fears that it would allow the US to view personal medical records such as the HIV status, TB treatment history, and vaccination data of Kenyan patients.

“What specific data categories are being shared? Are genomic data, disease patterns, mental health data, insurance claims, hospital records, or biometrics included? If not, why is that not explicitly written?” lawyer Willis Otieno posted on X.

Well-known whistle-blower Nelson Amenya voiced similar concerns, urging the Kenyan government to release the full agreement so “we can read it for ourselves”.

Minister Duale has dismissed such fears, insisting that Kenya’s health data remained secure and fully protected by Kenyan laws.

“Your health data is a national strategic asset,” Duale added.

US officials are yet to comment on the data concerns…(More)”.

Kenya signs landmark health deal with US despite data fears

Press Release: “The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the formation of the Agentic AI Foundation (AAIF) and the founding contributions of three leading projects driving innovation in open source AI: Anthropic’s Model Context Protocol (MCP), Block’s goose, and OpenAI’s AGENTS.md.

The advent of agentic AI represents a new era of autonomous decision making and coordination across AI systems that will transform and revolutionize entire industries. The AAIF provides a neutral, open foundation to ensure this critical capability evolves transparently, collaboratively, and in ways that advance the adoption of leading open source AI projects. Its inaugural projects, AGENTS.md, goose and MCP, lay the groundwork for a shared ecosystem of tools, standards, and community-driven innovation…

The launch of the AAIF comes just one year after the release of MCP by Anthropic, provider of advanced AI systems grounded in safety research, including Claude and the Claude Developer Platform. MCP has rapidly become the universal standard protocol for connecting AI models to tools, data and applications, with more than 10,000 published MCP servers now covering everything from developer tools to Fortune 500 deployments. The protocol has been adopted by Claude, Cursor, Microsoft Copilot, Gemini, VS Code, ChatGPT and other popular AI platforms, as developers and enterprises gravitate toward its simple integration method, security controls, and faster deployment…(More)”.
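To make “connecting AI models to tools” concrete, below is a minimal sketch of an MCP tool server built with the official Python SDK (the mcp package); the server name, the example tool, and its canned return value are illustrative assumptions, not taken from the press release.

```python
# A minimal MCP server sketch using the official "mcp" Python SDK (FastMCP helper).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")  # hypothetical server name, for illustration only

@mcp.tool()
def get_temperature(city: str) -> str:
    """Return a canned temperature reading (a stand-in for a real data source)."""
    return f"The temperature in {city} is 21°C."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so an MCP host can launch and call the tool
```

A host application that speaks MCP (such as a desktop AI assistant or an IDE) can launch this process and expose get_temperature to its model as a callable tool.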

Agentic AI Foundation (AAIF)

Conversation between Barrett and Greene and Nicklas Berild Lundblad: “…Q. How do you best use technology to facilitate that learning?

NBL: The first question you ask before you use technology is “What is the problem I want to solve?” To some degree, this can be a political exercise with great value. You can ask citizens, “What are the ten most pressing problems in our city that we need to solve?”

Once you know that, look for the data; look for the solutions. Look for the different things you need to learn about your city to solve the problems.

Q. In your experience, do governments know the right questions to ask?

NBL: When I worked at Google, one of the things that we said when we went to a government or to a business was “We have this amazing technology. What can it do to answer your questions?” Nine cases out of ten, people would say “We don’t know what our questions are.” 

That’s because modern institutions are not built to generate questions or to encourage curiosity in a way that encourages learning over time.

Q. What are some of the ways in which cities around the world are expanding the kinds of data that will help them learn?

NBL: Sensors provide a whole range of data we never had access to before; they have become measurably cheaper in the last couple of decades and are now biodegradable, so you don’t need to worry about spreading them out. Barcelona and other European cities have been installing sensors, but then the question is “What do you want to do with the sensor data?” It’s the first question you ask yourself before you use technology. What is the problem you want to solve?

You can have sensors that measure pollution in an area and sensors that measure noise and they are really interesting because they allow you to slowly improve on the general living environment of a city. Sensors that measure movements give us a sense of what the rhythms and flows of the city are. 

For technologists, a super interesting question is what kinds of sensors do you give? What kind would you refrain from giving? A camera is a sensor, but you may not want cameras everywhere as that leads you to a discussion about surveillance…(More)”.

Creating a Learning City

Article by Kevin Starr: “Sometimes a prosaic AI query produces something that looks more like parody. The other day, I asked Claude to give me the latest definition of “impact investing” and it served up this gem:

“There’s ongoing discussion in 2024 about whether the definition should evolve beyond intentionality toward emphasizing real-world change and the additionality of capital.”

The conclusion of that discussion was basically, “Um, no.” And the increasingly obvious reason is that “real-world change” often requires something less than market rates of return. I’m old enough to remember the heady days when impact investing was about “patient capital” and concessionary finance and maybe—gasp—trying to be accountable for impact. Now what we get is “intentionality.”

And sometimes we don’t even get that. Anyone who works in Sub-Saharan Africa has witnessed the steady dwindling of capital that would qualify for even the most generous definition of impact investing. There are continually fewer practitioners, investing continually less money, and trying to mitigate risk with increasingly onerous due diligence. Given the dramatic contraction of Big Aid, we need market-based solutions more than ever, and it’s a really bad time for impact investing to be failing us.

Luckily, I have a solution to the myriad disappointments of impact investing: Get rid of it.

The fundamental problem is that impact investing is neither fish nor fowl. Philanthropists look at the returns impact investors are seeking and think they’re greedy. Investors look at the concessionary deals necessary to drive “real world change” and think they’re dumb. Funders who are at pains to be risk-taking and generous in their grantmaking suddenly become steely-eyed, risk-averse nitpickers when looking at high-impact for-profits. Nobody’s happy.

Impact investing is supposed to occupy a space between philanthropy and commercial investing. Intentions without concessions have left that space mostly empty. This stifles much-needed innovation in poor countries, because the things that make them poor are the same things that make them hard places to start and grow businesses. It’s not going to happen without concessionary finance. “Concessionary” means you’re taking a hit on expected return because you’re serious about impact. That means cheap loans, risky equity positions on generous terms without expectation of higher returns, and even straight-up grants…

That hit you take in the service of impact has a name. It’s “philanthropy.” It takes a different form than traditional nonprofit grantmaking, but it’s still philanthropy…(More)”.

There Is No Such Thing as Impact Investing

Article by Claire L. Evans: “…Regardless of the medium, memory is survival. As a consequence, biology, with its billions of years of beta-testing in the rearview, has already produced the most powerful storage medium for information in the universe: DNA. Every nucleus of every cell in the human body holds 800 MB of information. In theory, DNA can store up to a billion gigabytes of data per cubic millimeter; with this efficiency, the 180-odd Zettabytes of information our global civilization produces each year would fit in a tennis ball. More importantly, it wouldn’t consume any energy—and it would be preserved for millennia.
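A rough back-of-the-envelope check of those figures, using the article’s numbers plus an assumed tennis-ball radius of about 3.35 cm (the radius is an assumption, not from the article):

```python
# Rough check: 180 zettabytes at "a billion gigabytes per cubic millimeter".
from math import pi

density_bytes_per_mm3 = 1e9 * 1e9                 # one billion gigabytes per mm^3
global_data_bytes = 180 * 1e21                    # ~180 zettabytes produced per year

required_cm3 = global_data_bytes / density_bytes_per_mm3 / 1000   # 1000 mm^3 = 1 cm^3
tennis_ball_cm3 = 4 / 3 * pi * 3.35 ** 3          # assumed ~3.35 cm radius

print(f"volume needed: {required_cm3:.0f} cm^3; tennis ball: {tennis_ball_cm3:.0f} cm^3")
# -> volume needed: 180 cm^3; tennis ball: 157 cm^3 (the same order of magnitude)
```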

This may all sound science-fictional, but over the last decade, technology companies and research institutions have successfully encoded all manner of precious cultural information into the double-helix: the works of Shakespeare, all 16GB of Wikipedia, an anthology of biotechnology essays and science fiction stories, the UN Declaration on the Rights of the Child, the Svalbard Global Seed Vault database, the private key of a single bitcoin, and the 1998 album Mezzanine by Massive Attack. Of course, these are PR gimmicks—snazzy proofs of concept for a nascent industry.

Could life, with its onboard resilience against entropic forces, provide a workable solution to the problem of the data center?

But beyond the hype, DNA data storage technology is evolving quickly, and biotech companies have pushed their offerings to the brink of commercial viability. Their approaches are diverse. Catalog, a Boston-based startup, has created a “printer” that can write synthetic DNA directly onto sheets of clear plastic; the French startup Biomemory stores data in credit-card sized “DNA Cards”; Atlas Data Storage, a spinoff of the biotechnology giant Twist Bioscience, encodes data onto synthetic DNA and then dehydrates it into a shelf-stable powder to be reconstituted at will. These propositions should be enticing to anyone tasked with maintaining the integrity of the cloud: plastic sheets, cards, and DNA powder, stashed in metal capsules the size of a AAA battery, don’t require air-conditioning. 

This makes DNA storage the perfect storage medium for what experts call “cold” data: things like municipal and medical records, backups, research data, and archives that don’t need to be accessed on demand (“hot” data, in contrast, is the kind found on Instagram, YouTube, or your banking app). Some 60–80% of all data stored is accessed infrequently enough to be classified as cold, and is currently stored in magnetic tape libraries. Tape, by virtue of its physical nature, is secure and requires minimal power to maintain. But even under perfect environmental conditions, cooled to a precise 20–25°C temperature range, it only lasts for a few decades, and the technology for playing back magnetic tape is likely to go obsolete before the tape itself degrades.

The oldest DNA sample to be successfully read, on the other hand, was over two million years old. And given its importance in the life sciences, it’s not likely we’ll ever forget how to sequence DNA. So long as the relevant metadata—instructions for translating the four-letter code of DNA back into binary—is encoded alongside the data itself, information preserved in DNA will almost certainly outlast the technology companies encoding it. This is why Microsoft, Western Digital, and a small concern of biotech companies cofounded, in 2020, the DNA Data Storage Alliance, an organization to define industry-wide standards for the technology. As with all software, the interoperability of genetic technology will be key to its longevity…(More)”.
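To make the binary-to-DNA translation concrete, here is a toy round-trip codec mapping two bits to each nucleotide; this is an illustrative assumption, not any vendor’s actual scheme (real encoders add error correction and avoid long runs of the same base).

```python
# Toy illustration: two bits per base, so one byte becomes four nucleotides.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Translate binary data into a DNA-letter string."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Translate a DNA-letter string back into binary data."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"hello, archive")) == b"hello, archive"
print(encode(b"hi"))  # -> CGGACGGC
```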

Can We Cool Down Data?

Book by Betty Sue Flowers: “Leaders need well-developed foresight because all big decisions are influenced by their story of the future, whether they are aware of it or not. The “official story of the future” is a more or less coherent, more or less conscious, more or less shared narrative about what will happen in 3 months, 6 months, a year, or five years. But as Betty Sue Flowers points out, here’s the weird part: The future is a fiction. It doesn’t exist. Yet you can’t make rational strategic decisions without one.

To manage this, organizations analyze ever-growing volumes of information with increasingly sophisticated analytical techniques and produce forecasts that attempt to predict the future. However, data alone is not enough, and projections are always based on assumptions, a common one being that things will keep trending as they are now.

When an important decision needs to be made, especially when the people involved in making that decision have opposing ideas about what should happen, it can be challenging to hold a generative dialogue rather than staging a fight. In this context, almost any discussion can immediately devolve into an argument. Scenarios can be very useful in creating a space for dialogue in which people can listen to each other and even help tell the story of a possible future that is not the one they most wish to create.

Flowers emphasizes that scenarios are not intended to be predictions. Instead, they set the stage for generative dialogues and much better decisions. By creating a set of different, plausible stories of the future, they are best used to:

  • Create a container for frank, thoughtful, safe, imaginative conversations about how the organization might adapt if trends change.
  • Disrupt assumptions sometimes unconsciously held in current stories.
  • Stimulate more complex and informed stories of the future.
  • Increase foresight and the organization’s ability to adapt, and
  • Set the ground for generative dialogues that improve the organization in the present…(More)”.

Scenarios: Crafting and using stories of the future to change the present

Paper by Jonathan E. LoTempio Jr et al.: “The bankruptcy of 23andMe was an inflection point for the direct-to-consumer genetics market. Although the privacy of consumer data has been highlighted by many as a concern, we discuss another key tension in this case: the corporate enclosure of scientific data that has considerable potential value for biomedical research and public health…

When genomic data are collected through explicit, opt-in consent for the express purpose of contributing to biomedical research, they occupy a category that is not easy to classify. Such data are not public resources in the traditional sense, but neither are they simply private commodities. Their value arises through collective participation and through the invocation of public benefit as a condition of the contribution. As such, the successive owners of such data must be legally required to preserve the public benefits and individual expectations associated with their original collection…(More)”.

Impact of the 23andMe bankruptcy on preserving the public benefit of scientific data

Paper by Santiago Cueto, Diether W. Beuermann, Julian Cristia, Ofer Malamud & Francisco Pardo: “This paper examines a large-scale randomized evaluation of the One Laptop Per Child (OLPC) program in 531 Peruvian rural primary schools. We use administrative data on academic performance and grade progression over 10 years to estimate the long-run effects of increased computer access on (i) school performance over time and (ii) students’ educational trajectories. Following schools over time, we find no significant effects on academic performance but some evidence of negative effects on grade progression. Following students over time, we find no significant effects on primary and secondary completion, academic performance in secondary school, or university enrollment. Survey data indicate that computer access significantly improved students’ computer skills but not their cognitive skills; treated teachers received some training but did not improve their digital skills and showed limited use of technology in classrooms, suggesting the need for additional pedagogical support…(More)”.

Laptops in the Long Run: Evidence from the One Laptop per Child Program in Rural Peru

Article by Sam Peters: “How do you teach somebody to read a language if there’s nothing for them to read? This is the problem facing developers across the African continent who are trying to train AI to understand and respond to prompts in local languages.

To train a language model, you need data. For a language like English, the easily accessible articles, books and manuals on the internet give developers a ready supply. But for most of Africa’s languages — of which there are estimated to be between 1,500 and 3,000 — there are few written resources available. Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa, uses the number of available Wikipedia articles to illustrate the amount of available data. For English, there are over 7 million articles. Tigrinya, spoken by around 9 million people in Ethiopia and Eritrea, has 335. For Akan, the most widely spoken native language in Ghana, there are none.
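Those article counts can be verified against the public MediaWiki statistics endpoint; a small sketch follows (the choice of editions and their ISO 639-1 codes are assumptions for illustration, not from the article):

```python
# Query the MediaWiki API for the article count of a few Wikipedia language editions.
import requests

EDITIONS = {"English": "en", "Tigrinya": "ti"}  # assumed ISO 639-1 codes

for name, code in EDITIONS.items():
    resp = requests.get(
        f"https://{code}.wikipedia.org/w/api.php",
        params={"action": "query", "meta": "siteinfo", "siprop": "statistics", "format": "json"},
        timeout=10,
    )
    articles = resp.json()["query"]["statistics"]["articles"]
    print(f"{name} Wikipedia: {articles:,} articles")
```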

Of those thousands of languages, only 42 are currently supported on a language model. Of Africa’s 23 scripts and alphabets, only three — Latin, Arabic and Ge’Ez (used in the Horn of Africa) — are available. This underdevelopment “comes from a financial standpoint,” says Chinasa T. Okolo, the founder of Technēculturǎ, a research institute working to advance global equity in AI. “Even though there are more Swahili speakers than Finnish speakers, Finland is a better market for companies like Apple and Google.”

If more language models are not developed, the impact across the continent could be dire, Okolo warns. “We’re going to continue to see people locked out of opportunity,” she told CNN. As the continent looks to develop its own AI infrastructure and capabilities, those who do not speak one of these 42 languages risk being left behind…(More)”

Africa has thousands of languages. Can AI be trained on all of them?

Report by the Metagov community: “First, [this report] identifies distinct layers of the AI stack that can be named and reimagined. Second, for each layer, it points to potential strategies, grounded in existing projects, that could steer that layer toward meaningful collective governance.

We understand collective governance as an emergent and context-sensitive practice that makes structures of power accountable to those affected by them. It can take many forms—sometimes highly participatory, and sometimes more representative. It might mean voting on members of a board, proposing a policy, submitting a code improvement, organizing a union, holding a potluck, or many other things. Governance is not only something that humans do; we (and our AIs) are part of broader ecosystems that might be part of governance processes as well. In that sense, a drought caused by AI-accelerated climate change is an input to governance. A bee dance and a village assembly could both be part of AI alignment protocols.

The idea of “points of intervention” here comes from the systems thinker Donella Meadows—especially her essay “Leverage Points: Places to Intervene in a System.” One idea that she stresses there is the power of feedback loops, which is when change in one part of a system produces change in another, and that in turn creates further change in the first, and so on. Collective governance is a way of introducing powerful feedback loops that draw on diverse knowledge and experience.

We recognize that not everyone is comfortable referring to these technologies as “intelligence.” We use the term “AI” most of all because it is now familiar to most people, as a shorthand for a set of technologies that are rapidly growing in adoption and hype. But a fundamental premise of ours is that this technology should enable, inspire, and augment human intelligence, not replace it. The best way to ensure that is to cultivate spaces of creative, collective governance.

These points of intervention do not focus on asserting ethical best practices for AI, or on defining what AI should look like or how it should work. We hope that, in the struggle to cultivate self-governance, healthy norms will evolve and sharpen in ways that we cannot now anticipate. But democracy is an opportunity, never a guarantee…(More)”

Collective Governance for AI: Points of Intervention
