Foundational Research Gaps and Future Directions for Digital Twins


Report by the National Academy of Engineering; National Academies of Sciences, Engineering, and Medicine: “Across multiple domains of science, engineering, and medicine, excitement is growing about the potential of digital twins to transform scientific research, industrial practices, and many aspects of daily life. A digital twin couples computational models with a physical counterpart to create a system that is dynamically updated through bidirectional data flows as conditions change. Going beyond traditional simulation and modeling, digital twins could enable improved medical decision-making at the individual patient level, predictions of future weather and climate conditions over longer timescales, and safer, more efficient engineering processes. However, many challenges remain before these applications can be realized.

This report identifies the foundational research and resources needed to support the development of digital twin technologies. The report presents critical future research priorities and an interdisciplinary research agenda for the field, including how federal agencies and researchers across domains can best collaborate…(More)”.
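
To make the report’s “bidirectional data flows” concrete, here is a minimal sketch of a digital-twin loop (our illustration, not the report’s; the names, dynamics, and parameters are all assumptions): sensor readings flow in to correct the virtual model, and a decision derived from the updated model flows back to the physical asset.

```python
class ThermalTwin:
    """Toy digital twin of a machine's temperature (illustration only)."""

    def __init__(self, temp=75.0, ambient=15.0, gain=0.5):
        self.temp = temp        # the twin's estimate of the asset's temperature
        self.ambient = ambient  # assumed ambient temperature
        self.gain = gain        # strength of the sensor correction

    def predict(self, dt=1.0, cooling_rate=0.1):
        # Physics step: Newtonian cooling toward ambient.
        self.temp -= cooling_rate * (self.temp - self.ambient) * dt

    def assimilate(self, sensor_temp):
        # Inbound data flow: nudge the model toward the observation
        # (a scalar stand-in for Kalman-style data assimilation).
        self.temp += self.gain * (sensor_temp - self.temp)

    def recommend(self, max_safe=75.0):
        # Outbound data flow: a decision derived from the virtual state.
        return "throttle" if self.temp > max_safe else "run"

twin = ThermalTwin()
for reading in [78.0, 82.0, 85.0]:  # simulated sensor stream
    twin.predict()                  # advance the model between readings
    twin.assimilate(reading)        # correct it with the latest data
    print(f"{twin.temp:.1f} -> {twin.recommend()}")
```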

Considerations for Governing Open Foundation Models


Brief by Rishi Bommasani et al: “Foundation models (e.g., GPT-4, Llama 2) are at the epicenter of AI, driving technological innovation and billions in investment. This paradigm shift has sparked widespread demands for regulation. Animated by factors as diverse as declining transparency, unsafe labor practices, limited protections for copyright and creative work, market concentration, and productivity gains, many have called for policymakers to take action.

Central to the debate about how to regulate foundation models is the process by which foundation models are released. Some foundation models, like Google DeepMind’s Flamingo, are fully closed, meaning they are available only to the model developer; others, such as OpenAI’s GPT-4, are limited access, available to the public but only as a black box; and still others, such as Meta’s Llama 2, are more open, with widely available model weights enabling downstream modification and scrutiny. As of August 2023, the U.K.’s Competition and Markets Authority documents, drawing on data from Stanford’s Ecosystem Graphs, that open release is the most common approach for publicly disclosed models. Developers like Meta, Stability AI, Hugging Face, Mistral, Together AI, and EleutherAI frequently release models openly.

Governments around the world are issuing policy related to foundation models. As part of these efforts, open foundation models have garnered significant attention: The recent U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence tasks the National Telecommunications and Information Administration with preparing a report on open foundation models for the president. In the EU, open foundation models trained with fewer than 10²⁵ floating point operations (a measure of the amount of compute expended) appear to be exempted under the recently negotiated AI Act. The U.K.’s AI Safety Institute will “consider open-source systems as well as those deployed with various forms of access controls” as part of its initial priorities. Beyond governments, the Partnership on AI has introduced guidelines for the safe deployment of foundation models, recommending against open release for the most capable foundation models.
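
For scale, training compute is often estimated with the back-of-the-envelope rule C ≈ 6 × parameters × training tokens (our illustration; this is a common research heuristic, not the brief’s or the AI Act’s methodology). Under that assumption, even a large open model such as Llama 2 70B, reportedly trained on about 2 trillion tokens, sits well below the 10²⁵ threshold:

```python
# Back-of-the-envelope training-compute estimate. Assumption: C ≈ 6 * N * D,
# a widely used approximation, not an official AI Act methodology.
def train_flops(params, tokens):
    return 6 * params * tokens

llama2_70b = train_flops(70e9, 2e12)  # ~2T training tokens per Meta's paper
threshold = 1e25                      # compute threshold discussed in the AI Act

print(f"{llama2_70b:.1e} FLOPs")                     # ~8.4e+23
print(f"under threshold: {llama2_70b < threshold}")  # True, by roughly 12x
```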

Policy on foundation models should support the open foundation model ecosystem, while providing resources to monitor risks and create safeguards to address harms. Open foundation models provide significant benefits to society by promoting competition, accelerating innovation, and distributing power. For example, small businesses hoping to build generative AI applications could choose among a variety of open foundation models that offer different capabilities and are often less expensive than closed alternatives. Further, open models are marked by greater transparency and, thereby, accountability. When a model is released with its training data, independent third parties can better assess the model’s capabilities and risks…(More)”.

Observer Theory


Article by Stephen Wolfram: “We call it perception. We call it measurement. We call it analysis. But in the end it’s about how we take the world as it is, and derive from it the impression of it that we have in our minds.

We might have thought that we could do science “purely objectively” without any reference to observers or their nature. But what we’ve discovered particularly dramatically in our Physics Project is that the nature of us as observers is critical even in determining the most fundamental laws we attribute to the universe.

But what ultimately does an observer—say like us—do? And how can we make a theoretical framework for it? Much as we have a general model for the process of computation—instantiated by something like a Turing machine—we’d like to have a general model for the process of observation: a general “observer theory”.

Central to what we think of as an observer is the notion that the observer will take the raw complexity of the world and extract from it some reduced representation suitable for a finite mind. There might be zillions of photons impinging on our eyes, but all we extract is the arrangement of objects in a visual scene. Or there might be zillions of gas molecules impinging on a piston, yet all we extract is the overall pressure of the gas.

In the end, we can think of it fundamentally as being about equivalencing. There are immense numbers of different individual configurations for the photons or the gas molecules—that are all treated as equivalent by an observer who’s just picking out the particular features needed for some reduced representation.

There’s in a sense a certain duality between computation and observation. In computation, one’s generating new states of a system. In observation, one’s equivalencing together different states.

That equivalencing must in the end be implemented “underneath” by computation. But in observer theory what we want to do is just characterize the equivalencing that’s achieved. For us as observers it might in practice be all about how our senses work, what our biological or cultural nature is—or what technological devices or structures we’ve built. But what makes a coherent concept of observer theory possible is that there seem to be general, abstract characterizations that capture the essence of different kinds of observers…(More)”.
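
A toy rendering of this equivalencing (our illustration, not Wolfram’s): enumerate every microstate of a small system, let the “observer” keep only a coarse summary, and count how many distinct microstates collapse into each observed value.

```python
# A toy "observer" as an equivalencing map: many distinct microstates
# collapse into far fewer macrostates (illustration, not from the article).
from collections import Counter
from itertools import product

# All 2**12 = 4096 configurations of 12 two-state "molecules".
microstates = list(product([0, 1], repeat=12))

# The observer extracts only a coarse feature: how many molecules are in
# state 1 (a stand-in for reading off "pressure" rather than every molecule).
observe = sum

macrostates = Counter(observe(m) for m in microstates)
print(len(microstates), "microstates ->", len(macrostates), "macrostates")
print(macrostates[6])  # 924 distinct microstates all equivalence to the value 6
```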

Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age


Paper by the Centre for Information Policy Leadership: “…explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also explores potential challenges to the use of PETs and possible solutions to those challenges.

CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.

CIPL’s recommendations towards wider adoption are as follows:

  • Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
  • Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
  • Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and help codify best practices that support technical reliability and foster trust in these technologies.
  • Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.
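
As one concrete example of how a specific type of PET works, here is a minimal sketch of differential privacy’s Laplace mechanism (our illustration with made-up data, not from the CIPL paper): an aggregate statistic is released with noise calibrated so that no single individual’s record can be confidently inferred from the output.

```python
# Minimal sketch of one PET: differential privacy's Laplace mechanism.
# Illustration only; the dataset and parameters are made up.
import random

def laplace_noise(scale):
    # Laplace(0, scale), sampled as the difference of two exponential draws.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with noise calibrated to sensitivity 1: adding or
    removing any single record changes the true count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)

ages = [34, 29, 41, 57, 23, 62, 38]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy value near 3
```

Smaller epsilon means more noise and stronger privacy: the analyst trades accuracy for protection.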

WikiCrow: Automating Synthesis of Human Scientific Knowledge


About: “As scientists, we stand on the shoulders of giants. Scientific progress requires curation and synthesis of prior knowledge and experimental results. However, the scientific literature is so expansive that synthesis, the comprehensive combination of ideas and results, is a bottleneck. The ability of large language models to comprehend and summarize natural language will transform science by automating the synthesis of scientific knowledge at scale. Yet current LLMs are limited by hallucinations, lack access to the most up-to-date information, and do not provide reliable references for statements.

Here, we present WikiCrow, an automated system that can synthesize cited Wikipedia-style summaries for technical topics from the scientific literature. WikiCrow is built on top of Future House’s internal LLM agent platform, PaperQA, which, in our testing, achieves state-of-the-art (SOTA) performance on a retrieval-focused version of PubMedQA and other benchmarks, including a new retrieval-first benchmark, LitQA, developed internally to evaluate systems retrieving full-text PDFs across the entire scientific literature.

As a demonstration of the potential for AI to impact scientific practice, we use WikiCrow to generate draft articles for the 15,616 human protein-coding genes that currently lack Wikipedia articles, or that have article stubs. WikiCrow creates each article in about 8 minutes, is much more consistent than human editors at citing its sources, and makes incorrect inferences or statements about 9% of the time, a number that we expect to improve as we mature our systems. WikiCrow will be a foundational tool for the AI Scientists we plan to build in the coming years, and will help us to democratize access to scientific research…(More)”.
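
The pattern described here, retrieval from the literature followed by generation with enforced citations, can be sketched in a few lines. This is our simplified illustration under stated assumptions; the function names and data are hypothetical, not PaperQA’s actual interfaces:

```python
# Highly simplified retrieve-then-cite loop in the spirit of the system
# described above. All names are hypothetical illustrations, NOT PaperQA's API.

def retrieve(query, corpus, k=3):
    # Toy relevance score: number of words shared between query and passage.
    words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: -len(words & set(p["text"].lower().split())))
    return ranked[:k]

def synthesize(query, passages):
    # Stand-in for an LLM call: stitch retrieved evidence into a cited draft.
    cited = [f"{p['text']} [{p['source']}]" for p in passages]
    return f"Q: {query}\n" + "\n".join(cited)

corpus = [
    {"source": "Doe 2021", "text": "Gene X is expressed in cardiac tissue."},
    {"source": "Lee 2022", "text": "Gene X variants associate with arrhythmia."},
    {"source": "Ng 2020",  "text": "Gene Y regulates photosynthesis."},
]
print(synthesize("What does gene X do?", retrieve("gene X function", corpus, k=2)))
```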

A Manifesto on Enforcing Law in the Age of ‘Artificial Intelligence’


Manifesto by the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’: “… calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto…(More)”.

Using AI to support people with disability in the labour market


OECD Report: “People with disability face persisting difficulties in the labour market. There are concerns that AI, if managed poorly, could further exacerbate these challenges. Yet, AI also has the potential to create more inclusive and accommodating environments and might help remove some of the barriers faced by people with disability in the labour market. Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market…(More)”.

Making democratic innovations stick


Report by NESTA: “A survey of 52 people working on participation in local government in the UK and the Nordic countries found that:

  • a lack of funding and bureaucracy are the biggest barriers to using and scaling democratic innovations
  • enabling citizens to influence decision making, building trust and being more inclusive are the most important reasons for using democratic innovations
  • tackling climate change and reducing poverty and inequality are seen as the most important challenges to involve the public in.

When we focused on attitudes towards participation in the UK more broadly, and on attitudes towards participation in climate change more specifically, we found that:

  • the public think it is important that they are involved in how we make decisions on climate change: 71% think it is important they are given a say in how to reduce the UK’s carbon emissions and transition to net zero
  • the public don’t think the government is doing a good job of involving them – only 12% thought it was doing a good job of involving them in making decisions on how we tackle climate change
  • a lack of influence over decision makers and a lack of the right skills to participate are seen by the public as the biggest barriers…(More)”.

Policy primer on non-personal data 


Primer by the International Chamber of Commerce: “Non-personal data plays a critical role in providing solutions to global challenges. Unlocking its full potential requires policymakers, businesses, and all other stakeholders to collaborate to construct policy environments that can capitalise on its benefits.  

This report gives insights into the different ways that non-personal data has a positive impact on society, with benefits including, but not limited to: 

  1. Tracking disease outbreaks; 
  2. Facilitating international scientific cooperation; 
  3. Understanding climate-related trends; 
  4. Improving agricultural practices for increased efficiency;
  5. Optimising energy consumption; 
  6. Developing evidence-based policy; 
  7. Enhancing cross-border cybersecurity cooperation. 

In addition, businesses of all sizes benefit from the transfer of data across borders, allowing companies to establish and maintain international supply chains and smaller businesses to enter new markets or reduce operating costs. 

Despite these benefits, international flows of non-personal data are frequently limited by restrictions and data localisation measures. A growing patchwork of regulations can also create barriers to realising the potential of non-personal data. This report explores the impact of data flow restrictions, including:

  • Hindering global supply chains; 
  • Limiting the use of AI reliant on large datasets; 
  • Disincentivising data sharing amongst companies; 
  • Preventing companies from analysing the data they hold…(More)”.

GovTech in Fragile and Conflict Situations: Trends, Challenges, and Opportunities


Report by the World Bank: “This report takes stock of the development of GovTech solutions in Fragile and Conflict-Affected Situations (FCS), be they characterized by low institutional capacity and/or by active conflict, and provides insights on challenges and opportunities for implementing GovTech reforms in such contexts. It is aimed at practitioners and policymakers working in FCS but will also be useful for practitioners working in Fragility, Conflict, and Violence (FCV) contexts, at-risk countries, or low-income countries, as some similar challenges and opportunities can be present…(More)”.