Once It Has Been Trained, Who Will Own My Digital Twin?


Article by Todd Carpenter: “Presently, if one ignores the hype around Generative AI systems, one can recognize that software tools are not sentient. Nor can they (yet) come up with creative solutions to novel problems. They are limited in what they can do by the training data they are supplied. They do hold the prospect of making us more efficient and productive, particularly for rote tasks. But given enough training data, one could consider how much farther this could be taken. In preparation for that future, the landscape of ownership of the intellectual property (IP) behind digital twins is already taking shape.

Several chatbots have been set up to replicate long-dead historical figures so that you can engage with them in their “voice”. Hellohistory is an AI-driven chatbot that offers people the opportunity to “have in-depth conversations with history’s greatest.” A different tool, Historical Figures Chat, was widely panned shortly after its 2023 release, particularly by historians, who strongly objected. There are several variations on this theme, of varying quality. Of course, as with all things GenAI, they will improve over time, and many of the obvious and problematic issues will be resolved either by this generation of companies or the next. Whether there is real value and insight to be gained, beyond the novelty, from engaging with “real historical figures” is the multi-billion-dollar question. Much like the World Wide Web in the 1990s, there very likely is value, but it will be years before it is clear what that value is and how to capitalize upon it. In anticipation of that day, many organizations are positioning themselves to capture that value.

While many universities have taken a very liberal view of ownership of the intellectual property of their students and faculty — far more liberal than many corporations would — others are considerably more restrictive…(More)”.

Philanthropy by the Numbers


Essay by Aaron Horvath: “Foundations make grants conditional on demonstrable results. Charities tout the evidentiary basis of their work. And impact consultants play both sides: assisting funders in their pursuit of rational beneficence and helping grantees translate the jumble of reality into orderly, spreadsheet-ready metrics.

Measurable impact has crept into everyday understandings of charity as well. There’s the extensive (often fawning) news coverage of data-crazed billionaire philanthropists, so-called thought leaders exhorting followers to rethink their contributions to charity, and popular books counseling that intuition and sentiment are poor guides for making the world a better place. Putting ideas into action, charity evaluators promote research-backed listings of the most impactful nonprofits. Why give to your local food bank when there’s one in Somerville, Massachusetts, with a better rating?

Over the past thirty years, amid a larger crisis of civic engagement, social isolation, and political alienation, measurable impact has seeped into our civic imagination and become one of the guiding ideals for public-spirited beneficence. And while its proponents do not always agree on how best to achieve or measure the extent of that impact, they have collectively recast civic engagement as objective, pragmatic, and above the fray of politics—a triumph of the head over the heart. But how did we get here? And what happens to our capacity for meaningful collective action when we think of civic life in such depersonalized and quantified terms?…(More)”.

To Whom Does the World Belong?


Essay by Alexander Hartley: “For an idea of the scale of the prize, it’s worth remembering that 90 percent of recent U.S. economic growth, and 65 percent of the value of its largest 500 companies, are already accounted for by intellectual property. By any estimate, AI will vastly increase the speed and scale at which new intellectual products can be minted. The provision of AI services is itself estimated to become a trillion-dollar market by 2032, but the value of the intellectual property created by those services—all the drug and technology patents; all the images, films, stories, virtual personalities—will eclipse that sum. It is possible that the products of AI may, within my lifetime, come to represent a substantial portion of all the world’s financial value.

In this light, the question of ownership takes on its true scale, revealing itself as a version of Bertolt Brecht’s famous query: To whom does the world belong?


Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems.

So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue after the fact. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water on an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”)…(More)”.

Collaborative Intelligence


Book edited by Mira Lane and Arathi Sethumadhavan: “…The book delves deeply into the dynamic interplay between theory and practice, shedding light on the transformative potential and complexities of AI. For practitioners deeply immersed in the world of AI, Lane and Sethumadhavan offer firsthand accounts and insights from technologists, academics, and thought leaders, as well as a series of compelling case studies, ranging from AI’s impact on artistry to its role in addressing societal challenges like modern slavery and wildlife conservation.

As the global AI market burgeons, this book enables collaboration, knowledge sharing, and interdisciplinary dialogue. It caters not only to the practitioners shaping the AI landscape but also to policymakers striving to navigate the intricate relationship between humans and machines, and to academics. The book is divided into two parts: the first offers readers a comprehensive understanding of AI’s historical context, its influence on power dynamics, human-AI interaction, and the critical role of audits in governing AI systems. The second unfolds a series of eight case studies, unraveling AI’s impact on fields as varied as healthcare, vehicular safety, conservation, human rights, and the metaverse. Each chapter in this book paints a vivid picture of AI’s triumphs and challenges, providing a panoramic view of how it is reshaping our world…(More)”

Trust but Verify: A Guide to Conducting Due Diligence When Leveraging Non-Traditional Data in the Public Interest


New Report by Sara Marcucci, Andrew J. Zahuranec, and Stefaan Verhulst: “In an increasingly data-driven world, organizations across sectors are recognizing the potential of non-traditional data—data generated from sources outside conventional databases, such as social media, satellite imagery, and mobile usage—to provide insights into societal trends and challenges. When harnessed thoughtfully, this data can improve decision-making and bolster public interest projects in areas as varied as disaster response, healthcare, and environmental protection. However, with these new data streams come heightened ethical, legal, and operational risks that organizations need to manage responsibly. That’s where due diligence comes in, helping to ensure that data initiatives are beneficial and ethical.

The report, Trust but Verify: A Guide to Conducting Due Diligence When Leveraging Non-Traditional Data in the Public Interest, co-authored by Sara Marcucci, Andrew J. Zahuranec, and Stefaan Verhulst, offers a comprehensive framework to guide organizations in responsible data partnerships. Whether you’re a public agency or a private enterprise, this report provides a six-step process to ensure due diligence and maintain accountability, integrity, and trust in data initiatives…(More) (Blog)”.

Global Trends in Government Innovation 2024


OECD Report: “Governments worldwide are transforming public services through innovative approaches that place people at the center of design and delivery. This report analyses nearly 800 case studies from 83 countries and identifies five critical trends in government innovation that are reshaping public services. First, governments are working with users and stakeholders to co-design solutions and anticipate future needs to create flexible, responsive, resilient and sustainable public services. Second, governments are investing in scalable digital infrastructure, experimenting with emergent technologies (such as automation, AI and modular code), and expanding innovative and digital skills to make public services more efficient. Third, governments are making public services more personalised and proactive to better meet people’s needs and expectations and reduce psychological costs and administrative frictions, ensuring they are more accessible, inclusive and empowering, especially for persons and groups in vulnerable and disadvantaged circumstances. Fourth, governments are drawing on traditional and non-traditional data sources to guide public service design and execution. They are also increasingly using experimentation to navigate highly complex and unpredictable environments. Finally, governments are reframing public services as opportunities and channels for citizens to exercise their civic engagement and hold governments accountable for upholding democratic values such as openness and inclusion…(More)”.

Direct democracy in the digital age: opportunities, challenges, and new approaches


Article by Pattharapong Rattanasevee, Yared Akarapattananukul & Yodsapon Chirawut: “This article delves into the evolving landscape of direct democracy, particularly in the context of the digital era, where ICT and digital platforms play a pivotal role in shaping democratic engagement. Through a comprehensive analysis of empirical data and theoretical frameworks, it evaluates the advantages and inherent challenges of direct democracy, such as majority tyranny, short-term focus, polarization, and the spread of misinformation. It proposes the concept of Liquid democracy as a promising hybrid model that combines direct and representative elements, allowing for voting rights delegation to trusted entities, thereby potentially mitigating some of the traditional drawbacks of direct democracy. Furthermore, the article underscores the necessity for legal regulations and constitutional safeguards to protect fundamental rights and ensure long-term sustainability within a direct democracy framework. This research contributes to the ongoing discourse on democratic innovation and highlights the need for a balanced approach to integrating digital tools with democratic processes…(More)”.
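
The delegation mechanism at the core of liquid democracy is easy to make concrete in code. The sketch below is a minimal illustration, not drawn from the article: each voter either casts a ballot directly or delegates to a trusted peer, delegations resolve transitively, and a cycle or dangling delegation simply leaves that vote uncast. All names and structures here are invented for the example.

```python
# Minimal sketch of liquid-democracy vote resolution (illustrative only).
# Each voter either votes directly or delegates to another voter;
# delegations are followed transitively, and cycles leave the vote uncast.

def resolve_vote(voter, delegations, direct_votes):
    """Follow the delegation chain from `voter` to a direct vote, if any."""
    seen = set()
    current = voter
    while current not in direct_votes:
        if current in seen or current not in delegations:
            return None  # cycle or dangling delegation: vote is uncast
        seen.add(current)
        current = delegations[current]
    return direct_votes[current]

def tally(voters, delegations, direct_votes):
    """Count each voter's resolved choice."""
    counts = {}
    for voter in voters:
        choice = resolve_vote(voter, delegations, direct_votes)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Example: Alice votes yes; Bob delegates to Alice; Carol delegates to Bob.
voters = ["alice", "bob", "carol"]
delegations = {"bob": "alice", "carol": "bob"}
direct_votes = {"alice": "yes"}
print(tally(voters, delegations, direct_votes))  # {'yes': 3}
```

In the example, Alice's single direct vote carries the delegated weight of Bob and Carol as well. That concentration of voting weight is precisely the kind of dynamic the article's call for legal regulations and constitutional safeguards is meant to keep in check.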

Announcing SPARROW: A Breakthrough AI Tool to Measure and Protect Earth’s Biodiversity in the Most Remote Places


Blog by Juan Lavista Ferres: “The biodiversity of our planet is rapidly declining. We’ve likely reached a tipping point where it is crucial to use every tool at our disposal to help preserve what remains. That’s why I am pleased to announce SPARROW—Solar-Powered Acoustic and Remote Recording Observation Watch, developed by Microsoft’s AI for Good Lab. SPARROW is an AI-powered edge computing solution designed to operate autonomously in the most remote corners of the planet. Solar-powered and equipped with advanced sensors, it collects biodiversity data—from camera traps, acoustic monitors, and other environmental detectors—that are processed using our most advanced PyTorch-based wildlife AI models on low-energy edge GPUs. The resulting critical information is then transmitted via low-Earth orbit satellites directly to the cloud, allowing researchers to access fresh, actionable insights in real time, no matter where they are. 

Think of SPARROW as a network of Earth-bound satellites, quietly observing and reporting on the health of our ecosystems without disrupting them. By leveraging solar energy, these devices can run for a long time, minimizing their footprint and any potential harm to the environment…(More)”.
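
The pipeline Lavista Ferres describes (capture on low-power sensors, classify on an edge GPU, uplink only compact results) can be sketched in a few lines. The snippet below is a hypothetical illustration assuming a generic PyTorch image classifier; the function names and the detection-record format are invented for the sketch and do not come from Microsoft's implementation.

```python
# Illustrative sketch of an edge inference cycle like the one SPARROW
# describes: sensor data in, on-device PyTorch model, compact results
# uplinked via satellite. All names here are hypothetical placeholders.

import torch

def run_edge_cycle(model, read_camera_trap, uplink):
    """One capture-classify-transmit cycle on a low-power edge device."""
    model.eval()                          # inference mode only
    frame = read_camera_trap()            # hypothetical sensor read; a CHW image tensor
    with torch.no_grad():                 # no gradients: saves memory and energy
        logits = model(frame.unsqueeze(0))
        species_id = int(logits.argmax(dim=1))
        confidence = float(torch.softmax(logits, dim=1).max())
    # Transmit a compact detection record rather than the raw image,
    # to suit the narrow bandwidth of a low-Earth-orbit satellite link.
    uplink({"species_id": species_id, "confidence": confidence})
```

Sending only the detection record, not raw imagery, is what would make a narrow satellite link workable in the remote deployments the blog describes.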

A linkless internet


Essay by Collin Jennings: “…But now Google and other websites are moving away from relying on links in favour of artificial intelligence chatbots. Considered as preserved trails of connected ideas, links make sense as early victims of the AI revolution since large language models (LLMs) such as ChatGPT, Google’s Gemini and others abstract the information represented online and present it in source-less summaries. We are at a moment in the history of the web in which the link itself – the countless connections made by website creators, the endless tapestry of ideas woven together throughout the web – is in danger of going extinct. So it’s pertinent to ask: how did links come to represent information in the first place? And what’s at stake in the movement away from links toward AI chat interfaces?

To answer these questions, we need to go back to the 17th century, when writers and philosophers developed the theory of mind that ultimately inspired early hypertext plans. In this era, prominent philosophers, including Thomas Hobbes and John Locke, debated the extent to which a person controls the succession of ideas that appears in her mind. They posited that the succession of ideas reflects the interaction between the data received from the senses and one’s mental faculties – reason and imagination. Subsequently, David Hume argued that all successive ideas are linked by association. He enumerated three kinds of associative connections among ideas: resemblance, contiguity, and cause and effect. In An Enquiry Concerning Human Understanding (1748), Hume offers examples of each relationship:

A picture naturally leads our thoughts to the original: the mention of one apartment in a building naturally introduces an enquiry or discourse concerning the others: and if we think of a wound, we can scarcely forbear reflecting on the pain which follows it.

The mind follows connections found in the world. Locke and Hume believed that all human knowledge comes from experience, and so they had to explain how the mind receives, processes and stores external data. They often reached for media metaphors to describe the relationship between the mind and the world. Locke compared the mind to a blank tablet, a cabinet and a camera obscura. Hume relied on the language of printing to distinguish between the vivacity of impressions imprinted upon one’s senses and the ideas recalled in the mind…(More)”.

Harnessing AI: How to develop and integrate automated prediction systems for humanitarian anticipatory action


CEPR Report: “Despite unprecedented access to data, resources, and wealth, the world faces an escalating wave of humanitarian crises. Armed conflict, climate-induced disasters, and political instability are displacing millions and devastating communities. Nearly one in every five children is living in or fleeing conflict zones (OCHA, 2024). Often the impacts of conflict and climatic hazards – such as droughts and floods – exacerbate each other, leading to even greater suffering. As crises unfold and escalate, the need for timely and effective humanitarian action becomes paramount.

Sophisticated systems for forecasting and monitoring natural and man-made hazards have emerged as critical tools to help inform and prompt action. The full potential of such automated forecasting systems to inform anticipatory action (AA) is immense but has yet to be realised. By providing early warnings and predictive insights, these systems could help organisations allocate resources more efficiently, plan interventions more effectively, and ultimately save lives and prevent or reduce humanitarian impact.
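
A common design for such systems, sketched below, is a pre-agreed trigger: anticipatory action activates only when a forecast is both confident enough and early enough for aid to arrive before the hazard hits. The dataclass, field names, and cutoff values are illustrative assumptions, not taken from the report.

```python
# Illustrative sketch of a simple anticipatory-action trigger: act when
# the forecast probability of a hazard crosses a pre-agreed threshold
# with enough lead time. Thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Forecast:
    region: str
    hazard: str          # e.g. "flood", "drought"
    probability: float   # model-estimated probability of the hazard
    lead_time_days: int  # days before expected onset

def should_activate(forecast: Forecast, threshold: float = 0.7,
                    min_lead_days: int = 7) -> bool:
    """Trigger anticipatory action only when the forecast is both
    confident enough and early enough for aid to arrive in time."""
    return (forecast.probability >= threshold
            and forecast.lead_time_days >= min_lead_days)

if should_activate(Forecast("region-x", "flood", 0.82, 10)):
    print("Release pre-positioned funds and alert field teams.")
```

Fixing the threshold and lead time in advance is what distinguishes anticipatory action from ad hoc response: the decision rule is agreed before the crisis, so funds can move the moment the forecast crosses the line.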


This Policy Insight provides an account of the significant technical, ethical, and organisational difficulties involved in such systems, and the current solutions in place…(More)”.