Paper by Vincent Conitzer et al.: “Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions…(More)”.
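To make the aggregation question concrete, here is a minimal sketch of one classic social-choice rule, the Borda count, applied to hypothetical annotator rankings of candidate model outputs. This is an illustration of the kind of rule the field studies, not the method the paper proposes; the output labels and rankings are invented.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual rankings of model outputs via Borda count.

    Each ranking lists outputs from most to least preferred; an output
    earns (n - position - 1) points per voter, and outputs are ordered
    by total score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, output in enumerate(ranking):
            scores[output] += n - position - 1
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical annotators rank the same three candidate responses.
rankings = [
    ["refuse", "hedge", "comply"],
    ["hedge", "refuse", "comply"],
    ["refuse", "comply", "hedge"],
]
print(borda_aggregate(rankings))  # "refuse" wins the aggregate ranking
```

Diverging individual preferences (no two annotators agree) still yield a single collective ordering; which rule to use, and what properties it should satisfy, is precisely the kind of question social choice theory addresses.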
We Need To Rewild The Internet
Article by Maria Farrell and Robin Berjon: “In the late 18th century, officials in Prussia and Saxony began to rearrange their complex, diverse forests into straight rows of single-species trees. Forests had been sources of food, grazing, shelter, medicine, bedding and more for the people who lived in and around them, but to the early modern state, they were simply a source of timber.
So-called “scientific forestry” was that century’s growth hacking. It made timber yields easier to count, predict and harvest, and meant owners no longer relied on skilled local foresters to manage forests. They were replaced with lower-skilled laborers following basic algorithmic instructions to keep the monocrop tidy, the understory bare.
Information and decision-making power now flowed straight to the top. Decades later when the first crop was felled, vast fortunes were made, tree by standardized tree. The clear-felled forests were replanted, with hopes of extending the boom. Readers of the American political anthropologist of anarchy and order, James C. Scott, know what happened next.
It was a disaster so bad that a new word, Waldsterben, or “forest death,” was minted to describe the result. All the same species and age, the trees were flattened in storms, ravaged by insects and disease — even the survivors were spindly and weak. Forests were now so tidy and bare, they were all but dead. The first magnificent bounty had not been the beginning of endless riches, but a one-off harvesting of millennia of soil wealth built up by biodiversity and symbiosis. Complexity was the goose that laid golden eggs, and she had been slaughtered…(More)”.
On the Manipulation of Information by Governments
Paper by Ariel Karlinsky and Moses Shayo: “Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020–21. We find that between 45% and 55% of governments misreported the number of deaths. The lion’s share of misreporting cannot be attributed to a country’s capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy…(More)”
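The paper's estimation strategy is far more involved, but the basic comparison it rests on, official death tolls set against independent mortality estimates, can be sketched as follows. All figures and thresholds here are hypothetical, chosen only to show the shape of the calculation.

```python
def misreporting_ratio(reported_deaths, excess_deaths):
    """Ratio of officially reported Covid deaths to estimated excess deaths.

    Values well below 1 suggest undercounting; values near 1 suggest
    reporting broadly consistent with excess mortality.
    """
    if excess_deaths <= 0:
        raise ValueError("excess-death estimate must be positive")
    return reported_deaths / excess_deaths

# Hypothetical country figures, not drawn from the paper's dataset.
countries = {
    "Country A": (95_000, 100_000),   # ratio 0.95: broadly consistent
    "Country B": (20_000, 110_000),   # ratio ~0.18: likely undercount
}
for name, (reported, excess) in countries.items():
    ratio = misreporting_ratio(reported, excess)
    flag = "possible misreporting" if ratio < 0.5 else "broadly consistent"
    print(f"{name}: {ratio:.2f} ({flag})")
```

The hard part, which the paper tackles, is obtaining credible independent estimates in the first place and ruling out innocent explanations such as weak diagnostic capacity.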
The economic research policymakers actually need
Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.
The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:
- New measures of the economy
- Broad literature reviews
- Analyses that directly quantify or simulate policy decisions
If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.
The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals. Here are some examples:
- An analysis of which jobs could be done remotely. This was published in April 2020, near the start of the pandemic, and inspired much of the early understanding of the prevalence and inequities of remote work.
- An estimate of how much the weather affects monthly employment changes. This is increasingly important for separating underlying economic trends from short-term swings from unseasonable or extreme weather.
- A measure of supply chain conditions. This helped quantify the challenges of getting goods into the US and to their customers during the pandemic.
- Job postings data from Indeed (where I worked as chief economist prior to my government service). This showed hiring needs more quickly and in more geographic and occupational detail than official government statistics.
- Market-rent data from Zillow. This provided a useful leading indicator of the housing component of official inflation measures…(More)”.
Technological Progress and Rent Seeking
Paper by Vincent Glode & Guillermo Ordoñez: “We model firms’ allocation of resources across surplus-creating (i.e., productive) and surplus-appropriating (i.e., rent-seeking) activities. Our model predicts that industry-wide technological advancements, such as recent progress in data collection and processing, induce a disproportionate and socially inefficient reallocation of resources toward surplus-appropriating activities. As technology improves, firms rely more on appropriation to obtain their profits, endogenously reducing the impact of technological progress on economic progress and inflating the price of the resources used for both types of activities. We apply our theoretical insights to shed light on the rise of high-frequency trading…(More)”.
Democracy and Artificial Intelligence: old problems, new solutions?
Discussion between Nardine Alnemr and Rob Weymouth: “…I see three big perspectives relevant to AI and democracy. You have the most conservative, mirroring the 80s and the 90s, still talking about the digital public sphere as if it’s distant from our lives. As if it’s something novel and inaccessible, which is not quite accurate anymore.
Then there’s the more optimistic and cautionary side of the spectrum. People who are excited about the technologies, but they’re not quite sure. They’re intrigued to see the potential and I think they’re optimistic because they overlook how these technologies connect to a broader context. How a lot of these technologies are driven by surveying and surveillance of the data and the communication that we produce. Exploitation of workers who do the filtering and cleaning work. The companies that profit out of this and make engineered election campaigns. So they’re cautious because of that, but still optimistic, because at the same time, they try to isolate it from that bigger context.
And finally, the most radical is something like Cesar Hidalgo’s proposal of augmented democracy…(More)”.
Crowdsourcing for collaborative crisis communication: a systematic review
Paper by Maria Clara Pestana, Ailton Ribeiro and Vaninha Vieira: “Efficient crisis response and support during emergency scenarios rely on collaborative communication channels. Effective communication between operational centers, civilian responders, and public institutions is vital. Crowdsourcing fosters communication and collaboration among a diverse public. The primary objective is to explore the state-of-the-art in crowdsourcing for collaborative crisis communication guided by a systematic literature review. The study selected 20 relevant papers published in the last decade. The findings highlight solutions to facilitate rapid emergency responses, promote seamless coordination between stakeholders and the general public, and ensure data credibility through a rigorous validation process…(More)”.
The Formalization of Social Precarities
Anthology edited by Murali Shanmugavelan and Aiha Nguyen: “…explores platformization from the point of view of precarious gig workers in the Majority World. In countries like Bangladesh, Brazil, and India — which reinforce social hierarchies via gender, race, and caste — precarious workers are often the most marginalized members of society. Labor platforms made familiar promises to workers in these countries: work would be democratized, and people would have the opportunity to be their own boss. Yet even as platforms have upended the legal relationship between worker and employer, they have leaned into social structures to keep workers precarious — and in fact formalized those social precarities through surveillance and data collection…(More)”.
A Brief History of Automations That Were Actually People
Article by Brian Contreras: “If you’ve ever asked a chatbot a question and received nonsensical gibberish in reply, you already know that “artificial intelligence” isn’t always very intelligent.
And sometimes it isn’t all that artificial either. That’s one of the lessons from Amazon’s recent decision to dial back its much-ballyhooed “Just Walk Out” shopping technology, a seemingly science-fiction-esque software that actually functioned, in no small part, thanks to behind-the-scenes human labor.
This phenomenon is nicknamed “fauxtomation” because it “hides the human work and also falsely inflates the value of the ‘automated’ solution,” says Irina Raicu, director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.
Take Just Walk Out: It promises a seamless retail experience in which customers at Amazon Fresh groceries or third-party stores can grab items from the shelf, get billed automatically and leave without ever needing to check out. But Amazon at one point had more than 1,000 workers in India who trained the Just Walk Out AI model—and manually reviewed some of its sales—according to an article published last year on the Information, a technology business website.
An anonymous source who’d worked on the Just Walk Out technology told the outlet that as many as 700 human reviews were needed for every 1,000 customer transactions. Amazon has disputed the Information’s characterization of its process. A company representative told Scientific American that while Amazon “can’t disclose numbers,” Just Walk Out has “far fewer” workers annotating shopping data than has been reported. In an April 17 blog post, Dilip Kumar, vice president of Amazon Web Services applications, wrote that “this is no different than any other AI system that places a high value on accuracy, where human reviewers are common.”…(More)”
Global Contract-level Public Procurement Dataset
Paper by Mihály Fazekas et al: “One-third of total government spending across the globe goes to public procurement, amounting to about 10 trillion dollars a year. Despite its vast size and crucial importance for economic and political developments, there is a lack of globally comparable data on contract awards and tenders run. To fill this gap, this article introduces the Global Public Procurement Dataset (GPPD). Using web scraping methods, we collected official public procurement data on over 72 million contracts from 42 countries between 2006 and 2021 (the time period covered varies by country due to data availability constraints). To overcome the inconsistency of data publishing formats in each country, we standardized the published information to fit a common data standard. For each country, key information is collected on the buyer(s) and supplier(s), geolocation information, product classification, price information, and details of the contracting process such as contract award date or the procedure type followed. GPPD is a contract-level dataset with specific filters that allow the dataset to be reduced to successfully awarded contracts if needed. We also add several corruption risk indicators and a composite corruption risk index for each contract, which allows for an objective assessment of risks and comparison across time, organizations, or countries. The data can be reused to answer research questions dealing with public procurement spending efficiency, among other topics. Unique organizational identification numbers or organization names allow the data to be connected to company registries to study broader topics such as ownership networks…(More)”.
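As a rough illustration of how a contract-level dataset of this kind might be filtered and scored, consider the sketch below. The schema, field names, and red-flag definitions are assumptions for illustration, not GPPD's actual ones; the dataset's real indicators and composite index are defined in the paper.

```python
from statistics import mean

# Hypothetical contract records mimicking GPPD-style fields.
contracts = [
    {"id": "c1", "awarded": True,  "single_bidder": 1, "no_open_call": 0, "short_ad_period": 1},
    {"id": "c2", "awarded": True,  "single_bidder": 0, "no_open_call": 0, "short_ad_period": 0},
    {"id": "c3", "awarded": False, "single_bidder": 1, "no_open_call": 1, "short_ad_period": 1},
]

# Illustrative binary red flags; real corruption-risk indicators differ.
RISK_FLAGS = ("single_bidder", "no_open_call", "short_ad_period")

def composite_risk(contract):
    """Composite risk score: the average of the binary red flags, in [0, 1]."""
    return mean(contract[flag] for flag in RISK_FLAGS)

# Filter down to successfully awarded contracts, as the dataset's filters allow.
awarded = [c for c in contracts if c["awarded"]]
for c in awarded:
    print(c["id"], round(composite_risk(c), 2))
```

Averaging binary flags into a single score is one simple way to make risk comparable across contracts, organizations, and countries; the paper's composite index serves that comparative purpose.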