AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary. 

The concern is that machine-generated content would have to be balanced with substantial human review and could overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because it can open the Wikimedia Foundation up to libel suits and copyright violations—both of which the nonprofit gets protections from but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.

Will A.I. Become the New McKinsey?


Essay by Ted Chiang: “When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America…(More)”.

Spamming democracy


Article by Natalie Alms: “The White House’s Office of Information and Regulatory Affairs is considering AI’s effect in the regulatory process, including the potential for generative chatbots to fuel mass campaigns or inject spam comments into the federal agency rulemaking process.

A recent executive order directed the office to consider using guidance or tools to address mass comments, computer-generated comments and falsely attributed comments, something an administration official told FCW that OIRA is “moving forward” on.

Mark Febrizio, a senior policy analyst at George Washington University’s Regulatory Studies Center, has experimented with OpenAI’s generative AI system ChatGPT to create what he called a “convincing” public comment submission to a Labor Department proposal.

“Generative AI also takes the possibility of mass and malattributed comments to the next level,” wrote Febrizio and co-author Bridget Dooling, research professor at the center, in a paper published in April by the Brookings Institution.

The executive order comes years after astroturfing during the rollback of net neutrality policies by the Federal Communications Commission in 2017 garnered public attention. That rulemaking docket received a record-breaking 22 million-plus comments, but over 8.5 million came from a campaign against net neutrality led by broadband companies, according to an investigation by the New York Attorney General released in 2021. 

The investigation found that lead generators paid by these companies submitted many comments with real names and addresses attached without the knowledge or consent of those individuals. In the same docket were over 7 million comments supporting net neutrality submitted by a computer science student, who used software to submit comments attached to computer-generated names and addresses.

While the numbers are staggering, experts told FCW that agencies aren’t just counting comments when reading through submissions from the public…(More)”

Unlocking the Power of Data Refineries for Social Impact


Essay by Jason Saul & Kriss Deiglmeier: “In 2021, US companies generated $2.77 trillion in profits—the largest ever recorded in history. This is a significant increase since 2000 when corporate profits totaled $786 billion. Social progress, on the other hand, shows a very different picture. From 2000 to 2021, progress on the United Nations Sustainable Development Goals has been anemic, registering less than 10 percent growth over 20 years.

What explains this massive split between the corporate and the social sectors? One explanation could be the role of data. In other words, companies are benefiting from a culture of using data to make decisions. Some refer to this as the “data divide”—the increasing gap between the use of data to maximize profit and the use of data to solve social problems…

Our theory is that there is something more systemic going on. Even if nonprofit practitioners and policymakers had the budget, capacity, and cultural appetite to use data, does the data they need even exist in the form they need it? We submit that the answer to this question is a resounding no. Usable data doesn’t yet exist for the sector because the sector lacks a fully functioning data ecosystem to create, analyze, and use data at the same level of effectiveness as the commercial sector…(More)”.

The Curious Side Effects of Medical Transparency


Essay by Danielle Ofri: “Transparency, Pozen told me, “invites conceptual confusion about whether it’s a first-order good that we’re trying to pursue for its own sake, or a second-order good that we’re trying to use instrumentally to achieve other goods.” In the first case, we might feel that transparency is an ideal always worth embracing, whatever the costs. In the second, we might ask ourselves what it’s accomplishing, and how it compares with other routes to the same end.

“There is a standard view that transparency is all good—the more transparency, the better,” the philosopher C. Thi Nguyen, an associate professor at the University of Utah, told me. But “you have a completely different experience of transparency when you are the subject.” In a previous position, Nguyen had been part of a department that had to provide evidence that it was using state funding to satisfactorily educate its students. Philosophers, he told me, would want to describe their students’ growing reflectiveness, curiosity, and “intellectual humility,” but knew that this kind of talk would likely befuddle or bore legislators; they had to focus instead on concrete numbers, such as graduation rates and income after graduation. Nguyen and his colleagues surely want their students to graduate and earn a living wage, but such stats hardly sum up what it means to be a successful philosopher.

In Nguyen’s view, this illustrates a problem with transparency. “In any scheme of transparency in which you have experts being transparent to nonexperts, you’re going to get a significant amount of information loss,” he said. What’s meaningful in a philosophy department can be largely incomprehensible to non-philosophers, so the information must be recast in simplified terms. Furthermore, simplified metrics frequently distort incentives. If graduation rates are the metric by which funding is determined, then a school might do whatever it takes to bolster them. Although some of these efforts might add value to students’ learning, it’s also possible to game the system in ways that are counterproductive to actual education.

Transparency is often portrayed as objective, but, like a camera, it is subject to manipulation even as it appears to be relaying reality. Ida Koivisto, a legal scholar at the University of Helsinki, has studied the trade-offs that flow from who holds that camera. She finds that when an authority—a government agency, a business, a public figure—elects to be transparent, people respond positively, concluding that the willingness to be open reflects integrity, and thus confers legitimacy. Since the authority has initiated this transparency, however, it naturally chooses to be transparent in areas where it looks good. Voluntary transparency sacrifices a degree of truth. On the other hand, when transparency is initiated by outside forces—mandates, audits, investigations—both the good and the bad are revealed. Such involuntary transparency is more truthful, but it often makes its subject appear flawed and dishonest, and so less legitimate. There’s a trade-off, Koivisto concludes, between “legitimacy” and “the ‘naked truth.’”…(More)”.

Financing the Common Good


Article by Mariana Mazzucato: “…The international monetary system which emerged in the aftermath of World War II undoubtedly represented an important innovation. But its structure is no longer fit for purpose. The challenges we face today—from climate change to public-health crises—are complex, interrelated and global in nature. Our financial institutions must reflect this reality.

Because the financial system echoes the logic of the entire economic system, this will require a more fundamental change: we must broaden the economic thinking that has long underpinned institutional mandates. To shape the markets of the future, maximising public value in the process, we must embrace an entirely new economics.

Most economic thinking today assigns the state and multilateral actors responsibility for removing barriers to economic activity, de-risking trade and finance and levelling the playing-field for business. As a result, governments and international lenders tinker around the edges of markets, rather than doing what is actually needed—deliberately shaping the economic and financial system to advance the common good…(More)”.

What Was the Fact?


Essay by Jon Askonas: “…Centuries ago, our society buried profound differences of conscience, ideas, and faith, and in their place erected facts, which did not seem to rise or fall on pesky political and philosophical questions. But the power of facts is now waning, not because we don’t have enough of them but because we have so many. What is replacing the old hegemony of facts is not a better and more authoritative form of knowledge but a digital deluge that leaves us once again drifting apart.

As the old divisions come back into force, our institutions are haplessly trying to neutralize them. This project is hopeless — and so we must find another way. Learning to live together in truth even when the fact has lost its power is perhaps the most serious moral challenge of the twenty-first century…

Our understanding of what it means to know something about the world has comprehensively changed multiple times in history. It is very hard to get one’s mind fully around this.

In flux are not only the categories of knowable things, but also the kinds of things worth knowing and the limits of what is knowable. What one civilization finds intensely interesting — the horoscope of one’s birth, one’s weight in kilograms — another might find bizarre and nonsensical. How natural our way of knowing the world feels to us, and how difficult it is to grasp another language of knowledge, is something that Jorge Luis Borges tried to convey in an essay where he describes the Celestial Emporium of Benevolent Knowledge, a fictional Chinese encyclopedia that divides animals into “(a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, … (f) fabulous ones,” and the real-life Bibliographic Institute of Brussels, which created an internationally standardized decimal classification system that divided the universe into 1,000 categories, including 261: The Church; 263: The Sabbath; 267: Associations. Y. M. C. A., etc.; and 298: Mormonism…(More)”.

Has 21st-century policy gone medieval?


Essay by Tim Harford: “Criminal justice has always been a source of knotty problems. How to punish the guilty while sparing the innocent? Trial by ordeal was a neat solution: delegate the decision to God. In the Middle Ages, a suspect who insisted on their innocence might be asked to carry a piece of burning iron for a few paces. If the suspect’s hand was unharmed, God had pronounced them innocent. If God is benevolent, omnipotent and highly interventionist, this idea works. Otherwise this judicial ordeal punishes innocent and guilty alike, inflicting harm without sorting good from bad.

The “dream” of Suella Braverman, the UK’s home secretary, of deporting asylum seekers to Rwanda is an eerie 21st-century echo of a medieval idea. In a way, the comparison is unfair to the medieval courts. Judicial ordeals really were designed to solve a policy problem, while the government’s Rwanda rhetoric is designed to deflect attention from strikes, NHS waiting lists and a stagnating economy.

But in other ways the comparison is apt. Deporting migrants to Rwanda, or similar deliberate cruelties such as separating parents from their children at the US-Mexican border, might well be expected to deter some attempts to enter the country, while those fleeing murderous regimes would come regardless.

Many people, myself included, draw the line at “deliberate cruelties”. But public policy is full of ordeal-like interventions: long waits, arduous paperwork and deliberate stigma are all common policy tools. The economist Richard Zeckhauser of Harvard defines ordeals as “burdens placed on individuals which yield no benefits to others” and argues that such burdens can sometimes be an effective way of ensuring scarce benefits are targeted only to worthy recipients.

But do these ordeals really select the most deserving? Carolyn Heinrich, professor of public policy at Vanderbilt University, has studied South Africa’s Child Support Grant, whose bureaucratic ordeals include bewildering paperwork and long waits. The families who struggle with these ordeals are those who face longer journeys to the benefits office, or have a limited grasp of bureaucratese.

Heinrich found that because of these arbitrary distinctions, many families received less support than they were entitled to. Most interruptions to benefit payments were errors, and the children in the affected families would become adolescents who were more likely to engage in crime, alcohol abuse or risky sexual behaviour. The ordeal harmed the innocent, undermined the goals of the support grant and seems unlikely to have saved public funds.

Some ordeals are the result of incompetence, such as badly designed forms, or underfunded public services…(More)”.

Enhancing Trust in Science and Democracy in an Age of Misinformation


Article by Marcia McNutt and Michael Crow: “Therefore, we believe the scientific community must more fully embrace its vital role in producing and disseminating knowledge in democratic societies. In Science in a Democratic Society, philosopher Philip Kitcher reminds us that “science should be shaped to promote democratic ideals.” To produce outcomes that advance the public good, scientists must also assess the moral bases of their pursuits. Although the United States has implemented the democratically driven, publicly engaged, scientific culture that Vannevar Bush outlined in Science, the Endless Frontier in 1945, Kitcher’s moral message remains relevant to both conducting science and communicating the results to the public, which pays for much of the enterprise of scientific discovery and technological innovation. It’s on scientists to articulate the moral and public values of the knowledge that they produce in ways that can be understood by citizens and decisionmakers.

However, by organizing themselves largely into groups that rarely reach beyond their own disciplines and by becoming somewhat disconnected from their fellow citizens and from the values of society, many scientists have become less effective than will be necessary in the future. Scientific culture has often left informing or educating the public to other parties such as science teachers, journalists, storytellers, and filmmakers. Instead, scientists principally share the results of their research within the narrow confines of academic and disciplinary journals…(More)”.

What Makes People Act on Climate Change, according to Behavioral Science


Article by Andrea Thompson: “As the world hurtles toward a future with temperatures above the thresholds scientists say will lead to the worst climate disruptions, humanity needs to take all the actions it can—collectively and as individuals—to bring planet-warming emissions down as quickly as possible. Governments and companies need to do the lion’s share of the work, but ordinary people will also need to make changes in their everyday lives. A crucial question has been how best to spur people toward more climate-friendly behaviors, such as taking the bus instead of driving or reducing home energy use.

New research published in Proceedings of the National Academy of Sciences USA pooled the results of 430 individual studies that examined environment-related behaviors such as recycling or choosing a mode of transportation—and that looked into changing those behaviors through several interventions, including financial incentives and educational campaigns. The authors analyzed how six different types of interventions compared with one another in their ability to influence real-world behavior and at how five behaviors compared in terms of how easy they were to change.

As can be seen in the graphic below, financial incentives and social pressure worked better at changing behaviors than did education or feedback (for example, reports of one’s own electricity use). The results reinforced what environmental psychologists have found when looking at these interventions in isolation…(More)”.

[Chart: effect sizes of various intervention approaches for promoting sustainable behaviors, with education having the smallest effect and social comparison having the largest. Credit: Amanda Montañez; Source: “Field Interventions for Climate Change Mitigation Behaviors: A Second-Order Meta-Analysis,” by Magnus Bergquist et al., in Proceedings of the National Academy of Sciences USA, Vol. 120, No. 13, Article No. e2214851120. Published online March 21, 2023.]