Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers


Article by Scharon Harding: “A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google’s AI Overview.

In May, Google launched AI Overviews in the US, an experimental feature that populates the top of Google Search results with a summarized answer based on an AI model built into Google’s web rankings. When Google first debuted AI Overviews, it quickly became apparent that the feature needed work on accuracy and on its ability to properly summarize information from online sources. AI Overviews are “built to only show information that is backed up by top web results,” Liz Reid, VP and head of Google Search, wrote in a May blog post. But as my colleague Benj Edwards pointed out at the time, that setup could contribute to inaccurate, misleading, or even dangerous results: “The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage.”

As Edwards alluded to, many have complained about Google Search results’ quality declining in recent years, as SEO spam and, more recently, AI slop float to the top of searches. As a result, people often turn to the Reddit hack to make Google results more helpful. By adding “site:reddit.com” to their search queries, users can hone their search to more easily find answers from real people. Google seems to understand the value of Reddit and signed an AI training deal with the company that’s reportedly worth $60 million per year…(More)”.
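The “Reddit hack” is just a scoped query: the `site:` operator restricts results to one domain. A minimal sketch of building such a query URL programmatically (the query-string format is Google’s standard `?q=` interface; the function name is illustrative):

```python
from urllib.parse import urlencode

def reddit_scoped_query(query: str) -> str:
    """Build a Google Search URL restricted to reddit.com results,
    mirroring the manual "site:reddit.com" trick described above."""
    params = {"q": f"{query} site:reddit.com"}
    return "https://www.google.com/search?" + urlencode(params)

# e.g. a Londoner-proof restaurant search
print(reddit_scoped_query("best pizza london"))
```

The same pattern works for any domain a searcher trusts more than the open web; only the `site:` value changes.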

Rediscovering the Pleasures of Pluralism: The Potential of Digitally Mediated Civic Participation


Essay by Lily L. Tsai and Alex Pentland: “Human society developed when most collective decision-making was limited to small, geographically concentrated groups such as tribes or extended family groups. Discussions about community issues could take place among small numbers of people with similar concerns. As coordination across larger distances evolved, the costs of travel required representatives from each clan or smaller group to participate in deliberations and decision-making involving multiple local communities. Divergence in the interests of representatives and their constituents opened up opportunities for corruption and elite capture.

Technologies now enable very large numbers of people to communicate, coordinate, and make collective decisions on the same platform. We have new opportunities for digitally enabled civic participation and direct democracy that scale for both the smallest and largest groups of people. Quantitative experiments, sometimes including tens of millions of individuals, have examined inclusiveness and efficiency in decision-making via digital networks. Their findings suggest that large networks of nonexperts can make practical, productive decisions and engage in collective action under certain conditions. These conditions include shared knowledge among individuals and communities with similar concerns, and information about their recent actions and outcomes…(More)”

South Korea leverages open government data for AI development


Article by Si Ying Thian: “In South Korea, open government data is powering artificial intelligence (AI) innovations in the private sector.

Take the case of TTCare, which may be the world’s first mobile application to analyse eye and skin disease symptoms in pets.

AI Hub allows users to search by industry, data format and year (top row), with the data sets made available based on the particular search term “pet” (bottom half of the page). Image: AI Hub, courtesy of Baek

The AI model was trained on about one million pieces of data – half of the data coming from the government-led AI Hub and the rest collected by the firm itself, according to the Korean newspaper Donga.

AI Hub is an integrated platform set up by the government to support the country’s AI infrastructure.

TTCare’s CEO Heo underlined the importance of government-led AI training data in improving the model’s ability to diagnose symptoms. The firm’s training data is currently accessible through AI Hub, and any Korean citizen can download or use it.

Pushing the boundaries of open data

Over the years, South Korea has consistently ranked at the top of the world rankings for the Open, Useful, and Re-usable data (OURdata) Index.

The government has been pushing the boundaries of what it can do with open data – beyond just making data usable by providing APIs. Application Programming Interfaces, or APIs, make it easier for users to tap on open government data to power their apps and services.
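The API pattern described above is the standard way apps and services consume open government data: compose a paged request against a portal endpoint, then unpack the JSON envelope. A minimal sketch (the endpoint path, dataset name, and field names below are hypothetical illustrations, not the actual Korean portal’s API specification):

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL and parameter names, shown only to illustrate
# the common open-data API shape: key + pagination in the query string.
BASE = "https://api.example-opendata.kr/api"

def build_request(dataset: str, api_key: str, page: int = 1, per_page: int = 100) -> str:
    """Compose a paged request URL in the typical open-data portal style."""
    params = {"serviceKey": api_key, "page": page, "perPage": per_page}
    return f"{BASE}/{dataset}?{urlencode(params)}"

def extract_records(payload: str) -> list:
    """Pull the record list out of a typical JSON response envelope."""
    return json.loads(payload).get("data", [])

# A sample payload in the shape such portals commonly return.
sample = '{"page": 1, "perPage": 100, "data": [{"species": "dog", "images": 512000}]}'
print(build_request("pet-health/v1/diagnoses", "DEMO_KEY"))
print(extract_records(sample))
```

An app like TTCare would loop over `page` until the portal returns an empty `data` list, feeding each batch into its training pipeline.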

There is now rising interest from public sector agencies to tap on such data to train AI models, said South Korea’s National Information Society Agency (NIA)’s Principal Manager, Dongyub Baek, although this is still at an early stage.

Baek sits in NIA’s open data department, which handles policies, infrastructure such as the National Open Data Portal, as well as impact assessments of the government initiatives…(More)”

Science and technology’s contribution to the UK economy


UK House of Lords Primer: “It is difficult to accurately pinpoint the economic contribution of science and technology to the UK economy. This is because of the way sectors are divided up and reported in financial statistics. 

 For example, in September 2024 the Office for National Statistics (ONS) reported the following gross value added (GVA) figures by industry/sector for 2023:

  • £71bn for IT and other information service activities 
  • £20.6bn for scientific research and development 

This would amount to £91.6bn, forming approximately 3.9% of the total UK GVA of £2,368.7bn for 2023. However, a number of other sectors could also be included in these figures, for example: 

  • the manufacture of computers, certain machinery and electrical components (valued at £38bn in 2023) 
  • telecommunications (valued at £34.5bn) 

If these two sectors were included too, GVA across all four sectors would total £164.1bn, approximately 6.9% of the UK’s 2023 GVA. However, this would likely still exclude relevant contributions that happen to fall within the definitions of different industries. For example, the manufacture of spacecraft and related machinery falls within the same sector as the manufacture of aircraft in the ONS’s data (this sector was valued at £10.8bn for 2023).  
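The primer’s arithmetic can be checked directly from the ONS figures it cites (all values in £bn):

```python
# Reproduce the GVA sums and shares quoted above.
total_uk_gva = 2368.7  # total UK GVA for 2023, per the ONS

core = {
    "IT and other information service activities": 71.0,
    "Scientific research and development": 20.6,
}
wider = {
    "Manufacture of computers, machinery, electrical components": 38.0,
    "Telecommunications": 34.5,
}

core_sum = sum(core.values())                      # £91.6bn
four_sector_sum = core_sum + sum(wider.values())   # £164.1bn

print(f"Core two sectors: £{core_sum:.1f}bn = {core_sum / total_uk_gva:.1%} of UK GVA")
print(f"All four sectors: £{four_sector_sum:.1f}bn = {four_sector_sum / total_uk_gva:.1%} of UK GVA")
```

Running this confirms the 3.9% and 6.9% shares; adding the £10.8bn aircraft-and-spacecraft sector would push the wider total higher still, which is the primer’s point about definitional boundaries.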

Alternatively, others have made estimates of the economic contribution of more specific sectors connected to science and technology. For example: 

  • Oxford Economics, an economic advisory firm, has estimated that, in 2023, the life sciences sector contributed over £13bn to the UK economy and accounted for one in every 121 employed people 
  • the government has estimated the value of the digital sector (comprising information technology and digital content and media) at £158.3bn for 2022
  • a 2023 government report estimated the value of the UK’s artificial intelligence (AI) sector at around £3.7bn (in terms of GVA) and that the sector employed around 50,040 people
  • the Energy and Climate Intelligence Unit, a non-profit organisation, reported estimates that the GVA of the UK’s net zero economy (encompassing sectors such as renewables, carbon capture, green and certain manufacturing) was £74bn in 2022/23 and that it supported approximately 765,700 full-time equivalent (FTE) jobs…(More)”.

Navigating Generative AI in Government


Report by the IBM Center for The Business of Government: “Generative AI refers to algorithms that can create realistic content such as images, text, music, and videos by learning from existing data patterns. Generative AI does more than just create content; it also serves as a user-friendly interface for other AI tools, making complex results easy to understand and use. Generative AI transforms analysis and prediction results into personalized formats, improving explainability by converting complicated data into understandable content. As Generative AI evolves, it plays an active role in collaborative processes, functioning as a vital collaborator by offering strengths that complement human abilities.

Generative AI has the potential to revolutionize government agencies by enhancing efficiency, improving decision making, and delivering better services to citizens, while maintaining agility and scalability. However, in order to implement generative AI solutions effectively, government agencies must address key questions—such as what problems AI can solve, data governance frameworks, and scaling strategies, to ensure a thoughtful and effective AI strategy. By exploring generic use cases, agencies can better understand the transformative potential of generative AI and align it with their unique needs and ethical considerations.

This report, which distills perspectives from two expert roundtables of leaders in Australia, presents 11 strategic pathways for integrating generative AI in government. The strategies include ensuring coherent and ethical AI implementation, developing adaptive AI governance models, investing in a robust data infrastructure, and providing comprehensive training for employees. Encouraging innovation and prioritizing public engagement and transparency are also essential to harnessing the full potential of AI…(More)”

When combinations of humans and AI are useful: A systematic review and meta-analysis


Paper by Michelle Vaccaro, Abdullah Almaatouq & Thomas Malone: “Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems…(More)”.
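The paper’s headline figure is a Hedges’ g of −0.23, a standardized mean difference with a small-sample correction. A sketch of the standard pooled-SD formulation (the example numbers are illustrative, not data from the paper):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d with pooled SD),
    scaled by the usual small-sample correction J = 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / pooled_sd
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Illustrative comparison: human-AI team accuracy vs. the best solo baseline.
g = hedges_g(mean1=0.70, mean2=0.75, sd1=0.20, sd2=0.20, n1=50, n2=50)
print(round(g, 3))  # negative g => the combination underperforms the baseline
```

A negative g, as in the meta-analysis, means the human–AI combination scored below the better of humans or AI alone; the 95% confidence interval (−0.39 to −0.07) excluding zero is what makes the loss statistically significant.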

What’s the Value of Privacy?


Brief by New America: “On a day-to-day basis, people make decisions about what information to share and what information to keep to themselves—guided by an inner privacy compass. Privacy is a concept that is both evocative and broad, often possessing different meanings for different people. The term eludes a common, static definition, though it is now inextricably linked to technology and a growing sense that individuals do not have control over their personal information. If privacy still, at its core, encompasses “the right to be left alone,” then that right is increasingly difficult to exercise in the modern era. 

The inability to meaningfully choose privacy is not an accident—in fact, it’s often by design. Society runs on data. Whether it is data about people’s personal attributes, preferences, or actions, all that data can be linked together, becoming greater than the sum of its parts. If data is now the world’s most valuable resource, then the companies that are making record profits off that data are highly incentivized to keep accessing it and obfuscating the externalities of data sharing. In brief, data use and privacy are “economically significant.” 

And yet, despite the pervasive nature of data collection, much of the public lacks a nuanced understanding of the true costs and benefits of sharing their data—for themselves and for society as a whole. People who have made billions by collecting and re-selling individual user data will continue to claim that it has little value. And yet, there are legitimate reasons why data should be shared—without a clear understanding of an issue, it is impossible to address it…(More)”.

Open government data and self-efficacy: The empirical evidence of micro foundation via survey experiments


Paper by Kuang-Ting Tai, Pallavi Awasthi, and Ivan P. Lee: “Research on the potential impacts of government openness and open government data is not new. However, empirical evidence regarding the micro-level impact, which can validate macro-level theories, has been particularly limited. Grounded in social cognitive theory, this study contributes to the literature by empirically examining how the dissemination of government information in an open data format can influence individuals’ perceptions of self-efficacy, a key predictor of public participation. Based on two rounds of online survey experiments conducted in the U.S., the findings reveal that exposure to open government data is associated with decreased perceived self-efficacy, resulting in lower confidence in participating in public affairs. This result, while contrary to optimistic assumptions, aligns with some other empirical studies and highlights the need to reconsider the format for disseminating government information. The policy implications suggest further calibration of open data applications to target professional and skilled individuals. This study underscores the importance of experiment replication and theory development as key components of future research agendas…(More)”.

The New Artificial Intelligentsia


Essay by Ruha Benjamin: “In the Fall of 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and prides itself on “protecting and promoting independent inquiry.”

My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”

Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring…(More)”.

G7 Toolkit for Artificial Intelligence in the Public Sector


Toolkit by OECD: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”