OECD Report: “Artificial intelligence (AI) is transforming economies and societies, but its full potential is hindered by poor access to quality data and models. Based on comprehensive country examples, the OECD report “Enhancing Access to and Sharing of Data in the Age of AI” highlights how governments can enhance access to and sharing of data and certain AI models, while ensuring privacy and other rights and interests such as intellectual property rights. It highlights the OECD Recommendation on Enhancing Access to and Sharing of Data, which provides principles to balance openness while ensuring effective legal, technical and organisational safeguards. This policy brief highlights the key findings of the report and their relevance for stakeholders seeking to promote trustworthy AI through better policies for data and AI models that drive trust, investment, innovation, and well-being….(More)”
Tech tycoons have got the economics of AI wrong
The Economist: “…The Jevons paradox—the idea that efficiency leads to more use of a resource, not less—has in recent days provided comfort to Silicon Valley titans worried about the impact of DeepSeek, the maker of a cheap and efficient Chinese chatbot, which threatens the more powerful but energy-guzzling American varieties. Satya Nadella, the boss of Microsoft, posted on X, a social-media platform, that “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of,” along with a link to the Wikipedia page for the economic principle. Under this logic, DeepSeek’s progress will mean more demand for data centres, Nvidia chips and even the nuclear reactors that the hyperscalers were, prior to the unveiling of DeepSeek, paying to restart. Nothing to worry about if the price falls, Microsoft can make it up on volume.
The logic, however self-serving, has a ring of truth to it. Jevons’s paradox is real and observable in a range of other markets. Consider the example of lighting. William Nordhaus, a Nobel-prizewinning economist, has calculated that a Babylonian oil lamp, powered by sesame oil, produced about 0.06 lumens of light per watt of energy. That compares with up to 110 lumens for a modern light-emitting diode. The world has not responded to this dramatic improvement in energy efficiency by enjoying the same amount of light as a Babylonian at lower cost. Instead, it has banished darkness completely, whether through more bedroom lamps than could have been imagined in ancient Mesopotamia or the Las Vegas sphere, which provides passersby with the chance to see a 112-metre-tall incandescent emoji. Urban light is now so cheap and so abundant that many consider it to be a pollutant.
Likewise, more efficient chatbots could mean that AI finds new uses (some no doubt similarly obnoxious). The ability of DeepSeek’s model to perform about as well as more compute-hungry American AI shows that data centres are more productive than previously thought, rather than less. Expect, the logic goes, more investment in data centres and so on than you did before.
Although this idea should provide tech tycoons with some solace, they still ought to worry. The Jevons paradox is a form of a broader phenomenon known as “rebound effects”. These are typically not large enough to fully offset savings from improved efficiency….Basing the bull case for AI on the Jevons paradox is, therefore, a bet not on the efficiency of the technology but on the level of demand. If adoption is being held back by price then efficiency gains will indeed lead to greater use. If technological progress raises expectations rather than reduces costs, as is typical in health care, then chatbots will make up an ever larger proportion of spending. At the moment, that looks unlikely. America’s Census Bureau finds that only 5% of American firms currently use AI and 7% have plans to adopt it in the future. Many others find the tech difficult to use or irrelevant to their line of business…(More)”.
Unlocking AI’s potential for the public sector
Article by Ruth Kelly: “…Government needs to work on its digital foundations. The extent of legacy IT systems across government is huge. Many were designed and built for a previous business era, and still rely on paper-based processes. Historic neglect and a lack of asset maintenance has added to the difficulty. Because many systems are not compatible, sharing data across systems requires manual extraction which is risky and costly. All this adds to problems with data quality. Government suffers from data which is incomplete, inconsistent, inaccessible, difficult to process and not easily shareable. A lack of common data models, both between and within government departments, makes it difficult and costly to combine different sources of data, and significant manual effort is required to make data usable. Some departments have told us that they spend 60% to 80% of their time on cleaning data when carrying out analysis.
Why is this an issue for AI? Large volumes of good-quality data are important for training, testing and deploying AI models. Poor data leads to poor outcomes, especially where it involves personal data. Access to good-quality data was identified as a barrier to implementing AI by 62% of the 87 government bodies responding to our survey. Simple productivity improvements that provide integration with routine administration (for example summarising documents) is already possible, but integration with big, established legacy IT is a whole other long-term endeavour. Layering new technology on top of existing systems, and reusing poor-quality and aging data, carries the risk of magnifying problems and further embedding reliance on legacy systems…(More)”
AI Commons: nourishing alternatives to Big Tech monoculture
Report by Joana Varon, Sasha Costanza-Chock, Mariana Tamari, Berhan Taye, and Vanessa Koetz: “‘Artificial Intelligence’ (AI) has become a buzzword all around the globe, with tech companies, research institutions, and governments all vying to define and shape its future. How can we escape the current context of AI development where certain power forces are pushing for models that, ultimately, automate inequalities and threaten socio-enviromental diversities? What if we could redefine AI? What if we could shift its production from a capitalist model to a more disruptive, inclusive, and decentralized one? Can we imagine and foster an AI Commons ecosystem that challenges the current dominant neoliberal logic of an AI arms race? An ecosystem encompassing researchers, developers, and activists who are thinking about AI from decolonial, transfeminist, antiracist, indigenous, decentralized, post-capitalist and/or socio-environmental justice perspectives?
This fieldscan research, commissioned by One Project and conducted by Coding Rights, aims to understand the (possibly) emerging “AI Common” ecosystem. Focused on key entities (organizations, cooperatives and collectives, networks, companies, projects, and others) from Africa, the Americas, and Europe advancing alternative possible AI futures, the authors identify 234 entities that are advancing the AI Commons ecosystem. The report finds powerful communities of practice, groups, and organizations producing nuanced criticism of the Big Tech-driven AI development ecosystem and, most importantly, imagining, developing, and, at times, deploying an alternative AI technology that’s informed and guided by the principles of decoloniality, feminism, antiracist, and post-capitalist AI systems…(More)”.
The Impact of Artificial Intelligence on Societies
Book edited by Christian Montag and Raian Ali: “This book presents a recent framework proposed to understand how attitudes towards artificial intelligence are formed. It describes how the interplay between different variables, such as the modality of AI interaction, the user personality and culture, the type of AI applications (e.g. in the realm of education, medicine, transportation, among others), and the transparency and explainability of AI systems contributes to understand how user’s acceptance or a negative attitude towards AI develops. Gathering chapters from leading researchers with different backgrounds, this book offers a timely snapshot on factors that will be influencing the impact of artificial intelligence on societies…(More)”.
Local Government: Artificial intelligence use cases
Repository by the (UK) Local Government Association: “Building on the findings of our recent AI survey, which highlighted the need for practical examples, this bank showcases the diverse ways local authorities are leveraging AI.
Within this collection, you’ll discover a spectrum of AI adoption, ranging from utilising AI assistants to streamline back-office tasks to pioneering the implementation of bespoke Large Language Models (LLMs). These real-world use cases exemplify the innovative spirit driving advancements in local government service delivery.
Whether your council is at the outset of its AI exploration or seeking to expand its existing capabilities, this bank offers a wealth of valuable insights and best practices to support your organisation’s AI journey…(More)”.
Developing a public-interest training commons of books
Article by Authors Alliance: “…is pleased to announce a new project, supported by the Mellon Foundation, to develop an actionable plan for a public-interest book training commons for artificial intelligence. Northeastern University Library will be supporting this project and helping to coordinate its progress.
Access to books will play an essential role in how artificial intelligence develops. AI’s Large Language Models (LLMs) have a voracious appetite for text, and there are good reasons to think that these data sets should include books and lots of them. Over the last 500 years, human authors have written over 129 million books. These volumes, preserved for future generations in some of our most treasured research libraries, are perhaps the best and most sophisticated reflection of all human thinking. Their high editorial quality, breadth, and diversity of content, as well as the unique way they employ long-form narratives to communicate sophisticated and nuanced arguments and ideas make them ideal training data sources for AI.
These collections and the text embedded in them should be made available under ethical and fair rules as the raw material that will enable the computationally intense analysis needed to inform new AI models, algorithms, and applications imagined by a wide range of organizations and individuals for the benefit of humanity…(More)”
Data Governance Meets the EU AI Act
Article by Axel Schwanke: “..The EU AI Act emphasizes sustainable AI through robust data governance, promoting principles like data minimization, purpose limitation, and data quality to ensure responsible data collection and processing. It mandates measures such as data protection impact assessments and retention policies. Article 10 underscores the importance of effective data management in fostering ethical and sustainable AI development…This article states that high-risk AI systems must be developed using high-quality data sets for training, validation, and testing. These data sets should be managed properly, considering factors like data collection processes, data preparation, potential biases, and data gaps. The data sets should be relevant, representative, error-free, and complete as much as possible. They should also consider the specific context in which the AI system will be used. In some cases, providers may process special categories of personal data to detect and correct biases, but they must follow strict conditions to protect individuals’ rights and freedoms…
However, achieving compliance presents several significant challenges:
- Ensuring Dataset Quality and Relevance: Organizations must establish robust data and AI platforms to prepare and manage datasets that are error-free, representative, and contextually relevant for their intended use cases. This requires rigorous data preparation and validation processes.
- Bias and Contextual Sensitivity: Continuous monitoring for biases in data is critical. Organizations must implement corrective actions to address gaps while ensuring compliance with privacy regulations, especially when processing personal data to detect and reduce bias.
- End-to-End Traceability: A comprehensive data governance framework is essential to track and document data flow from its origin to its final use in AI models. This ensures transparency, accountability, and compliance with regulatory requirements.
- Evolving Data Requirements: Dynamic applications and changing schemas, particularly in industries like real estate, necessitate ongoing updates to data preparation processes to maintain relevance and accuracy.
- Secure Data Processing: Compliance demands strict adherence to secure processing practices for personal data, ensuring privacy and security while enabling bias detection and mitigation.
Example: Real Estate Data
Immowelt’s real estate price map, awarded as the top performer in a 2022 test of real estate price maps, exemplifies the challenges of achieving high-quality datasets. The prepared data powers numerous services and applications, including data analysis, price predictions, personalization, recommendations, and market research…(More)”
Building Safer and Interoperable AI Systems
Essay by Vint Cerf: “While I am no expert on artificial intelligence (AI), I have some experience with the concept of agents. Thirty-five years ago, my colleague, Robert Kahn, and I explored the idea of knowledge robots (“knowbots” for short) in the context of digital libraries. In principle, a knowbot was a mobile piece of code that could move around the Internet, landing at servers, where they could execute tasks on behalf of users. The concept is mostly related to finding information and processing it on behalf of a user. We imagined that the knowbot code would land at a serving “knowbot hotel” where it would be given access to content and computing capability. The knowbots would be able to clone themselves to execute their objectives in parallel and would return to their origins bearing the results of their work. Modest prototypes were built in the pre-Web era.
In today’s world, artificially intelligent agents are now contemplated that can interact with each other and with information sources found on the Internet. For this to work, it’s my conjecture that a syntax and semantics will need to be developed and perhaps standardized to facilitate inter-agent interaction, agreements, and commitments for work to be performed, as well as a means for conveying results in reliable and unambiguous ways. A primary question for all such concepts starts with “What could possibly go wrong?”
In the context of AI applications and agents, work is underway to answer that question. I recently found one answer to that in the MLCommons AI Safety Working Group and its tool, AILuminate. My coarse sense of this is that AILuminate posts a large and widely varying collection of prompts—not unlike the notion of testing software by fuzzing—looking for inappropriate responses. Large language models (LLMs) can be tested and graded (that’s the hard part) on responses to a wide range of prompts. Some kind of overall safety metric might be established to connect one LLM to another. One might imagine query collections oriented toward exposing particular contextual weaknesses in LLMs. If these ideas prove useful, one could even imagine using them in testing services such as those at Underwriters Laboratory, now called UL Solutions. UL Solutions already offers software testing among its many other services.
LLMs as agents seem naturally attractive…(More)”.
Grant Guardian
About: “In the philanthropic sector, limited time and resources can make it challenging to thoroughly assess a nonprofit’s financial stability. Grant Guardian transforms weeks of financial analysis into hours of strategic insight–creating space for deep, meaningful engagement with partners while maintaining high grantmaking standards.
Introducing Grant Guardian
Grant Guardian is an AI-powered financial due diligence tool that streamlines the assessment process for both foundations and nonprofits. Foundations receive sophisticated financial health analyses and risk assessments, while nonprofits can simply submit their existing financial documents without the task of filling out multiple custom forms. This streamlined approach helps both parties focus on what matters most–their shared mission of creating impact.
How Does It Work?
Advanced AI Analyses: Grant Guardian harnesses the power of AI to analyze financial documents like 990s and audits, offering a comprehensive view of a nonprofit’s financial stability. With rapid data extraction and analysis based on modifiable criteria, Grant Guardian bolsters strategic funding with financial insights.
Customized Risk Reports: Grant Guardian’s risk reports and dashboards are customizable, allowing you to tailor metrics specifically to your organization’s funding priorities. This flexibility enables you to present clear, relevant data to stakeholders while maintaining a transparent audit trail for compliance.
Automated Data Extraction: As an enterprise-grade solution, Grant Guardian automates the extraction and analysis of data from financial reports, identifies potential risks, standardizes assessments, and minimizes user error from bias. This standardization is crucial, as nonprofits often vary in the financial documents they provide, making the due diligence process more complex and error-prone for funders…(More)”.