A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.
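
In code terms, the comparison behind those figures is simple: count how often the flagged adjectives appear per token in each year’s reviews, then take the ratio. Below is a minimal sketch of that frequency comparison; the mini-corpora and the three-word marker list are illustrative assumptions, not the study’s actual data or methodology.

```python
import re

# Hypothetical mini-corpora standing in for two years of conference peer reviews.
reviews_prev_year = [
    "The experiments are thorough and the writing is clear.",
    "A solid contribution, though the ablation study feels incomplete.",
]
reviews_this_year = [
    "This meticulous and commendable study offers an intricate analysis.",
    "A meticulous evaluation with commendable attention to intricate detail.",
]

LLM_MARKERS = ["meticulous", "commendable", "intricate"]  # assumed marker list

def rate(texts, word):
    """Occurrences of `word` per token across a list of documents."""
    tokens = [t for doc in texts for t in re.findall(r"[a-z]+", doc.lower())]
    return tokens.count(word) / max(len(tokens), 1)

for word in LLM_MARKERS:
    before, after = rate(reviews_prev_year, word), rate(reviews_this_year, word)
    ratio = after / before if before else float("inf")
    print(f"{word}: {ratio:.1f}x year-over-year frequency")
```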

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.

How Public Polling Has Changed in the 21st Century


Report by Pew Research: “The 2016 and 2020 presidential elections left many Americans wondering whether polling was broken and what, if anything, pollsters might do about it. A new Pew Research Center study finds that most national pollsters have changed their approach since 2016, and in some cases dramatically. Most (61%) of the pollsters who conducted and publicly released national surveys in both 2016 and 2022 used methods in 2022 that differed from what they used in 2016. The study also finds the use of multiple methods increasing. Last year 17% of national pollsters used at least three different methods to sample or interview people (sometimes in the same survey), up from 2% in 2016….(More)”.

The Need for Climate Data Stewardship: 10 Tensions and Reflections regarding Climate Data Governance


Paper by Stefaan Verhulst: “Datafication — the increase in data generation and advancements in data analysis — offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing new data sources in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates ten core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues…(More)”.

Meta Kills a Crucial Transparency Tool At the Worst Possible Time


Interview by Vittoria Elliott: “Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. When asked why commercial newsrooms, like WIRED, are to be excluded from the Content Library, Meta spokesperson Eric Porterfield said that it was “built for research purposes.” While journalists might not have direct access, he suggested they could use commercial social network analysis tools or “partner with an academic institution to help answer a research question related to our platforms.”

Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity…(More)”.

This Chatbot Democratizes Data to Empower India’s Farmers


Article by Abha Malpani Naismith: “…The lack of access to market price information and reliance on intermediaries to sell on their behalf leave farmers vulnerable to price exploitation and uncertain returns on their investments.

To solve this, Gramhal is building a data cooperative in India where farmers contribute their information to a data ecosystem, which all farmers can leverage for better-informed decision-making…

The social enterprise started the project to democratize data by first using the Indian government’s data sets, collected from markets and crops across the country. It then built a chatbot (called Bolbhav) and plugged in that data. Soon about 300,000 farmers were accessing this data set via the chatbot on their mobile phones.

“We spent no money on marketing — this was all just from word of mouth!” Kaleem said. 

Gramhal’s Bolbhav chatbot provides farmers with market data so they know how to fairly price their crops. 

However, Gramhal started getting feedback from farmers that the chatbot was giving them prices that were three days old, when what they wanted was real-time, reliable data. “That is when we realized that we need to work with the power of community and think about a societal network framework where every farmer who is selling can contribute to the data and have access to it,” Kaleem explained. “We needed to find a way where the farmer can send price information about what they are selling by uploading their receipts, and we can aggregate that data across markets and share it with them.”
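
As a rough illustration of that crowdsourced design, the sketch below aggregates farmer-submitted price reports into a per-market, per-crop daily median; the record fields and the median-based aggregation are assumptions made for illustration, not Gramhal’s actual schema or code.

```python
from collections import defaultdict
from datetime import date
from statistics import median

# Hypothetical records: one entry per farmer-submitted receipt.
reports = [
    {"market": "Jaipur", "crop": "wheat", "price_per_quintal": 2150, "day": date(2023, 4, 10)},
    {"market": "Jaipur", "crop": "wheat", "price_per_quintal": 2230, "day": date(2023, 4, 10)},
    {"market": "Indore", "crop": "wheat", "price_per_quintal": 2090, "day": date(2023, 4, 10)},
]

def aggregate_daily_prices(reports):
    """Median price per (market, crop, day); the median resists outliers and typos."""
    buckets = defaultdict(list)
    for r in reports:
        buckets[(r["market"], r["crop"], r["day"])].append(r["price_per_quintal"])
    return {key: median(prices) for key, prices in buckets.items()}

for (market, crop, day), price in aggregate_daily_prices(reports).items():
    print(f"{day} {market} {crop}: ₹{price}/quintal")
```

A real deployment would of course also need receipt verification and outlier filtering before aggregation; the median is used here simply because it tolerates a few bad submissions.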

The solution was an upgraded version of the chatbot called Bolbhav Plus, which Gramhal launched in April 2023…(More)”

AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine


Article by Amit Katwala: “Robots, computers, and algorithms are hunting for potential new therapies in ways humans can’t—by processing huge volumes of data and building previously unimagined molecules. At an old biscuit factory in South London, giant mixers and industrial ovens have been replaced by robotic arms, incubators, and DNA sequencing machines.

James Field and his company LabGenius aren’t making sweet treats; they’re cooking up a revolutionary, AI-powered approach to engineering new medical antibodies. In nature, antibodies are the body’s response to disease and serve as the immune system’s front-line troops. They’re strands of protein that are specially shaped to stick to foreign invaders so that they can be flushed from the system. Since the 1980s, pharmaceutical companies have been making synthetic antibodies to treat diseases like cancer, and to reduce the chance of transplanted organs being rejected. But designing these antibodies is a slow process for humans—protein designers must wade through the millions of potential combinations of amino acids to find the ones that will fold together in exactly the right way, and then test them all experimentally, tweaking some variables to improve some characteristics of the treatment while hoping that doesn’t make it worse in other ways. “If you want to create a new therapeutic antibody, somewhere in this infinite space of potential molecules sits the molecule you want to find,” says Field, the founder and CEO of LabGenius…(More)”.
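
The search problem Field describes is commonly attacked with an active-learning loop: assay a small batch, fit a model to the results, and let the model choose the next batch from the vast candidate space. The toy sketch below shows only the shape of such a loop; the random sequences, the stand-in assay, and the similarity-based surrogate are all invented for illustration and do not reflect LabGenius’s actual pipeline.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)

def random_sequence(length=10):
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def lab_assay(seq):
    """Stand-in for a wet-lab binding measurement (hidden ground truth)."""
    return sum(1 for ch in seq if ch in "WYF")  # pretend aromatic residues bind well

def surrogate_score(seq, tested):
    """Toy surrogate model: similarity to the best sequences measured so far."""
    best = sorted(tested, key=tested.get, reverse=True)[:5]
    return max(sum(a == b for a, b in zip(seq, ref)) for ref in best)

# Seed round: assay a small random batch.
tested = {random_sequence(): 0 for _ in range(20)}
tested = {s: lab_assay(s) for s in tested}

for _ in range(5):  # each round = one design/test cycle
    candidates = [random_sequence() for _ in range(500)]
    candidates.sort(key=lambda s: surrogate_score(s, tested), reverse=True)
    for seq in candidates[:10]:  # send only the most promising batch to the "lab"
        tested[seq] = lab_assay(seq)

best = max(tested, key=tested.get)
print(f"best sequence after 5 rounds: {best} (assay score {tested[best]})")
```

The point of the loop is that each experimental round narrows the search, so only a tiny fraction of the candidate space ever has to be tested physically.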

Market Power in Artificial Intelligence


Paper by Joshua S. Gans: “This paper surveys the relevant existing literature that can help researchers and policy makers understand the drivers of competition in markets that constitute the provision of artificial intelligence products. The focus is on three broad markets: training data, input data, and AI predictions. It is shown that a key factor in determining the emergence and persistence of market power will be the operation of markets for data that would allow for trading data across firm boundaries…(More)”.

Predicting IMF-Supported Programs: A Machine Learning Approach


Paper by Tsendsuren Batsuuri, Shan He, Ruofei Hu, Jonathan Leslie and Flora Lutz: “This study applies state-of-the-art machine learning (ML) techniques to forecast IMF-supported programs, analyzes the ML prediction results relative to traditional econometric approaches, explores non-linear relationships among predictors indicative of IMF-supported programs, and evaluates model robustness with regard to different feature sets and time periods. ML models consistently outperform traditional methods in out-of-sample prediction of new IMF-supported arrangements with key predictors that align well with the literature and show consensus across different algorithms. The analysis underscores the importance of incorporating a variety of external, fiscal, real, and financial features as well as institutional factors like membership in regional financing arrangements. The findings also highlight the varying influence of data processing choices such as feature selection, sampling techniques, and missing data imputation on the performance of different ML models and therefore indicate the usefulness of a flexible, algorithm-tailored approach. Additionally, the results reveal that models that are most effective in near and medium-term predictions may tend to underperform over the long term, thus illustrating the need for regular updates or more stable – albeit potentially near-term suboptimal – models when frequent updates are impractical…(More)”.
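
In pipeline terms, the design choices the abstract highlights (missing-data imputation, an ML classifier, strictly out-of-sample evaluation) might look like the sketch below; the synthetic country-year panel, the four stand-in features, and the specific imputer and classifier are illustrative assumptions, not the paper’s data or code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Synthetic country-year panel: external, fiscal, real, financial features.
n = 600
X = rng.normal(size=(n, 4))                 # e.g., reserves, deficit, growth, credit
X[rng.random((n, 4)) < 0.1] = np.nan        # realistic missingness
y = (X[:, 0] + rng.normal(size=n) < -1).astype(int)  # 1 = new IMF-supported arrangement

# Time-aware split: train on the earlier part of the panel, evaluate out of sample.
split = 450
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # one of several imputation choices
    ("gbm", GradientBoostingClassifier(random_state=0)),
])
model.fit(X[:split], y[:split])
probs = model.predict_proba(X[split:])[:, 1]
print(f"out-of-sample AUC: {roc_auc_score(y[split:], probs):.2f}")
```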

Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance


Report by the National Academies of Sciences, Engineering, and Medicine: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Why we’re fighting to make sure labor unions have a voice in how AI is implemented


Article by Liz Shuler and Mike Kubzansky: “Earlier this month, Google’s co-founder admitted that the company had “definitely messed up” after its AI tool, Gemini, produced historically inaccurate images—including depictions of racially diverse Nazis. Sergey Brin cited a lack of “thorough testing” of the AI tool, but the incident is a good reminder that, despite all the hype around generative AI replacing human output, the technology still has a long way to go. 

Of course, that hasn’t stopped companies from deploying AI in the workplace. Some even use the technology as an excuse to lay workers off. Since last May, at least 4,000 people have lost their jobs to AI, and 70% of workers across the country live with the fear that AI is coming for theirs next. And while the technology may still be in its infancy, it’s developing fast. Earlier this year, AI pioneer Mustafa Suleyman said that “left completely to the market and to their own devices, [AI tools are] fundamentally labor-replacing.” Without changes now, AI could be coming to replace a lot of people’s jobs.

It doesn’t have to be this way. AI has enormous potential to build prosperity and unleash human creativity, but only if it also works for working people. Ensuring that happens requires giving the voice of workers—the people who will engage with these technologies every day, and whose lives, health, and livelihoods are increasingly affected by AI and automation—a seat at the decision-making table. 

As president of the AFL-CIO, representing 12.5 million working people across 60 unions, and CEO of Omidyar Network, a social change philanthropy that supports responsible technology, we believe that the single best movement to give everyone a voice is the labor movement. Empowering workers—from warehouse associates to software engineers—is the most powerful tactic we have to ensure that AI develops in the interests of the many, not the few…(More)”.