Article by Sean Hill: “Imagine a future where scientific discovery is unbound by the limitations of data accessibility and interoperability. In this future, researchers across all disciplines — from biology and chemistry to astronomy and social sciences — can seamlessly access, integrate, and analyze vast datasets with the assistance of advanced artificial intelligence (AI). This world is one where AI-ready data empowers scientists to unravel complex problems at unprecedented speeds, leading to breakthroughs in medicine, environmental conservation, technology, and more. The vision of a truly FAIR (Findable, Accessible, Interoperable, Reusable) and AI-ready data ecosystem, underpinned by Responsible AI (RAI) practices and the pivotal role of data stewards, promises to revolutionize the way science is conducted, fostering an era of rapid innovation and global collaboration…(More)”.
The Economy of Algorithms
Book by Marek Kowalkiewicz: “Welcome to the economy of algorithms. It’s here and it’s growing. In the past few years, we have been flooded with examples of impressive technology. Algorithms have been around for hundreds of years, but they have only recently begun to ‘escape’ our understanding. When algorithms perform certain tasks, they’re not just as good as us; they’re becoming infinitely better and, at the same time, massively more surprising. We are so impressed by what they can do that we give them a great deal of agency, but because they are so hard to comprehend, that agency leads to all kinds of unintended consequences.
In the 20th century, things were simple: we had the economy of corporations. In the first two decades of the 21st century, we saw the emergence of the economy of people, otherwise known as the digital economy, enabled by the internet. Now we’re seeing a new economy take shape: the economy of algorithms…(More)”.
UN adopts Chinese resolution with US support on closing the gap in access to artificial intelligence
Article by Edith Lederer: “The U.N. General Assembly adopted a Chinese-sponsored resolution with U.S. support urging wealthy developed nations to close the widening gap with poorer developing countries and to ensure that developing countries have equal opportunities to use and benefit from artificial intelligence.
The resolution approved Monday follows the March 21 adoption of the first U.N. resolution on artificial intelligence spearheaded by the United States and co-sponsored by 123 countries including China. It gave global support to the international effort to ensure that AI is “safe, secure and trustworthy” and that all nations can take advantage of it.
Adoption of the two nonbinding resolutions shows that the United States and China, rivals in many areas, are both determined to be key players in shaping the future of the powerful new technology — and have been cooperating on the first important international steps.
Both resolutions were adopted by consensus in the 193-member General Assembly, a sign of widespread global support for the two countries’ leadership on the issue.
Fu Cong, China’s U.N. ambassador, told reporters Monday that the two resolutions are complementary, with the U.S. measure being “more general” and the just-adopted one focusing on “capacity building.”
He called the Chinese resolution, which had more than 140 sponsors, “great and far-reaching,” and said, “We’re very appreciative of the positive role that the U.S. has played in this whole process.”
Nate Evans, spokesperson for the U.S. mission to the United Nations, said Tuesday that the Chinese-sponsored resolution “was negotiated so it would further the vision and approach the U.S. set out in March.”
“We worked diligently and in good faith with developing and developed countries to strengthen the text, ensuring it reaffirms safe, secure, and trustworthy AI that respects human rights, commits to digital inclusion, and advances sustainable development,” Evans said.
Fu said that AI technology is advancing extremely fast and the issue has been discussed at very senior levels, including by the U.S. and Chinese leaders.
“We do look forward to intensifying our cooperation with the United States and for that matter with all countries in the world on this issue, which … will have far-reaching implications in all dimensions,” he said…(More)”.
Not all ‘open source’ AI models are actually open: here’s a ranking
Article by Elizabeth Gibney: “Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models.
The definition of open source when it comes to AI models is not yet agreed, but advocates say that ’full’ openness boosts science, and is crucial for efforts to make AI accountable. What counts as open source is likely to take on increased importance when the European Union’s Artificial Intelligence Act comes into force. The legislation will apply less strict regulations to models that are classed as open.
Some big firms are reaping the benefits of claiming to have open-source models, while trying “to get away with disclosing as little as possible”, says Mark Dingemanse, a language scientist at Radboud University in Nijmegen, the Netherlands. This practice is known as open-washing.
“To our surprise, it was the small players, with relatively few resources, that go the extra mile,” says Dingemanse, who together with his colleague Andreas Liesenfeld, a computational linguist, created a league table that identifies the most and least open models. They published their findings on 5 June in the conference proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency…(More)”.
Artificial Intelligence Is Making The Housing Crisis Worse
Article by Rebecca Burns: “When Chris Robinson applied to move into a California senior living community five years ago, the property manager ran his name through an automated screening program that reportedly used artificial intelligence to detect “higher-risk renters.” Robinson, then 75, was denied after the program assigned him a low score — one that he later learned was based on a past conviction for littering.
Not only did the crime have little bearing on whether Robinson would be a good tenant, it wasn’t even one that he’d committed. The program had turned up the case of a 33-year-old man with the same name in Texas — where Robinson had never lived. He eventually corrected the error but lost the apartment and his application fee nonetheless, according to a federal class-action lawsuit that moved towards settlement this month. The credit bureau TransUnion, one of the largest actors in the multi-billion-dollar tenant screening industry, agreed to pay $11.5 million to resolve claims that its programs violated fair credit reporting laws.
Landlords are increasingly turning to private equity-backed artificial intelligence (AI) screening programs to help them select tenants, and resulting cases like Robinson’s are just the tip of the iceberg. The prevalence of incorrect, outdated, or misleading information in such reports is increasing costs and barriers to housing, according to a recent report from federal consumer regulators.
Even when screening programs turn up real data, housing and privacy advocates warn that opaque algorithms are enshrining high-tech discrimination in an already unequal housing market — the latest example of how AI can end up amplifying existing biases…(More)”.
What the Arrival of A.I. Phones and Computers Means for Our Data
Article by Brian X. Chen: “Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.
But to make that work, these companies need something from you: more data.
In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.
Is this information you are willing to share?
This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.
“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.
All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.
The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.
The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers…(More)”.
Connecting the dots: AI is eating the web that enabled it
Article by Tom Wheeler: “The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, “the web is entering a state of managed decline.” The Washington Post headline was more dire: “Web publishers brace for carnage as Google adds AI answers.”…
Created by Sir Tim Berners-Lee in 1989, the World Wide Web redefined the internet as a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary. Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision as generative AI reduces traffic to websites by recasting their information.
The web’s open access to the world’s information trained the large language models of generative AI. Now, those generative AI models are coming for their progenitor.
The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.
The AI paradigm of cutting out the middleman appears to have been further advanced by Apple’s recent announcement that it will incorporate OpenAI’s technology to enable its Siri assistant to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to go to websites, but also potentially displacing the Google search engine, for whose default placement Google has been paying Apple roughly $20 billion annually.
Studies from The Atlantic, the University of Toronto, and Gartner suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported…(More)”.
Using AI to Inform Policymaking
Paper for the AI4Democracy series at The Center for the Governance of Change at IE University: “Good policymaking requires a multifaceted approach, incorporating diverse tools and processes to address the varied needs and expectations of constituents. The paper by Turan and McKenzie focuses on an LLM-based tool, “Talk to the City” (TttC), developed to facilitate collective decision-making by soliciting, analyzing, and organizing public opinion. This tool has been tested in three distinct applications:
1. Finding Shared Principles within Constituencies: Through large-scale citizen consultations, TttC helps identify common values and priorities.
2. Compiling Shared Experiences in Community Organizing: The tool aggregates and synthesizes the experiences of community members, providing a cohesive overview.
3. Action-Oriented Decision Making in Decentralized Governance: TttC supports decision-making processes in decentralized governance structures by providing actionable insights from diverse inputs.
CAPABILITIES AND BENEFITS OF LLM TOOLS
LLMs, when applied to democratic decision-making, offer significant advantages:
- Processing Large Volumes of Qualitative Inputs: LLMs can handle extensive qualitative data, summarizing discussions and identifying overarching themes with high accuracy.
- Producing Aggregate Descriptions in Natural Language: The ability to generate clear, comprehensible summaries from complex data makes these tools invaluable for communicating nuanced topics.
- Facilitating Understanding of Constituents’ Needs: By organizing public input, LLM tools help leaders gain a better understanding of their constituents’ needs and priorities.
CASE STUDIES AND TOOL EFFICACY
The paper presents case studies using TttC, demonstrating its effectiveness in improving collective deliberation and decision-making. Key functionalities include:
- Aggregating Responses and Clustering Ideas: TttC identifies common themes and divergences within a population’s opinions (see the illustrative sketch after this excerpt).
- Interactive Interface for Exploration: The tool provides an interactive platform for exploring the diversity of opinions at both individual and group scales, revealing complexity, common ground, and polarization…(More)”.
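To make the kind of pipeline described above concrete, here is a minimal, hypothetical sketch, not the published TttC implementation: free-text responses are embedded, clustered into rough themes, and each cluster is summarized. The model name, helper functions, and placeholder summarizer are illustrative assumptions.

```python
# Hypothetical sketch of an opinion-aggregation pipeline in the spirit of
# "Talk to the City": embed free-text responses, cluster them into themes,
# and summarize each cluster. The library calls (sentence-transformers,
# scikit-learn) are real, but the overall design is an illustrative
# assumption, not the actual TttC code.
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans


def cluster_responses(responses: list[str], n_themes: int = 5) -> dict[int, list[str]]:
    """Group citizen responses into rough themes by embedding similarity."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(responses)
    labels = KMeans(n_clusters=n_themes, random_state=0, n_init="auto").fit_predict(embeddings)

    themes: dict[int, list[str]] = defaultdict(list)
    for label, response in zip(labels, responses):
        themes[label].append(response)
    return themes


def summarize_theme(texts: list[str]) -> str:
    """Placeholder for the LLM call that would turn a cluster into a short,
    neutral summary of the shared view its responses express."""
    return f"{len(texts)} responses; representative example: {texts[0][:80]}"


if __name__ == "__main__":
    sample = [
        "We need more frequent buses in the evenings.",
        "Public transport should run later at night.",
        "The city should plant more trees along main roads.",
        "More green space downtown would improve air quality.",
        "Cycling lanes feel unsafe near the ring road.",
    ]
    for theme_id, texts in cluster_responses(sample, n_themes=3).items():
        print(f"Theme {theme_id}: {summarize_theme(texts)}")
```

In the real tool, the summarization step would be an LLM call and the resulting clusters would feed the interactive exploration interface described above.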
The use of AI for improving energy security
Rand Report: “Electricity systems around the world are under pressure due to aging infrastructure, rising demand for electricity and the need to decarbonise energy supplies at pace. Artificial intelligence (AI) applications have potential to help address these pressures and increase overall energy security. For example, AI applications can reduce peak demand through demand response, improve the efficiency of wind farms and facilitate the integration of large numbers of electric vehicles into the power grid. However, the widespread deployment of AI applications could also come with heightened cybersecurity risks, the risk of unexplained or unexpected actions, or supplier dependency and vendor lock-in. The speed at which AI is developing means many of these opportunities and risks are not yet well understood.
The aim of this study was to provide insight into the state of AI applications for the power grid and the associated risks and opportunities. Researchers conducted a focused scan of the scientific literature to find examples of relevant AI applications in the United States, the European Union, China and the United Kingdom…(More)”.
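As a toy illustration of the demand-response idea mentioned above, and not drawn from the RAND report, the sketch below schedules a flexible load such as overnight EV charging into the lowest-demand hours of a forecast so that it does not add to the evening peak. All numbers and names are assumptions for illustration; real systems learn the forecasts and constraints rather than taking them as fixed inputs.

```python
# Toy illustration of demand response / peak shaving (not from the RAND report):
# place a flexible load, e.g. overnight EV charging, into the lowest-demand
# hours of a forecast instead of letting it pile onto the evening peak.

def shift_flexible_load(base_demand_mw: list[float],
                        flexible_mw: float,
                        max_extra_per_hour_mw: float) -> list[float]:
    """Allocate a flexible load to the lowest-demand hours first,
    without exceeding a per-hour cap."""
    demand = list(base_demand_mw)
    remaining = flexible_mw
    for hour in sorted(range(len(demand)), key=lambda h: demand[h]):
        if remaining <= 0:
            break
        added = min(max_extra_per_hour_mw, remaining)
        demand[hour] += added
        remaining -= added
    return demand


if __name__ == "__main__":
    # Hypothetical 24-hour demand forecast (MW) with an evening peak.
    baseline = [310, 300, 295, 290, 295, 320, 380, 450, 470, 460, 455, 450,
                445, 440, 450, 470, 520, 560, 580, 550, 500, 430, 380, 340]
    # Naive case: 240 MW of EV charging lands on the evening hours (18-23).
    naive = list(baseline)
    for h in range(18, 24):
        naive[h] += 40
    scheduled = shift_flexible_load(baseline, flexible_mw=240, max_extra_per_hour_mw=40)
    print("Peak with naive charging:", max(naive), "MW")
    print("Peak with scheduled charging:", max(scheduled), "MW")
```

A real grid controller would solve this with learned forecasts and optimization under network constraints; the sketch only shows the shape of the decision such AI tools automate.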
Can Artificial Intelligence Bring Deliberation to the Masses?
Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale…(More)”.