UNESCO’s Quest to Save the World’s Intangible Heritage


Article by Julian Lucas: “For decades, the organization has maintained a system that protects everything from Ukrainian borscht to Jamaican reggae. But what does it mean to “safeguard” living culture?…On December 7th, at a safari resort in Kasane, Botswana, Ukraine briefed the United Nations Educational, Scientific, and Cultural Organization (UNESCO) on an endangered national treasure. It wasn’t a monastery menaced by air strikes. Nor was it any of the paintings, rare books, or other antiquities seized by Russian troops. It was borscht, a beet soup popular for centuries across Eastern Europe. Shortly after Russia invaded Ukraine, in February, 2022—as fields burned, restaurants shuttered, and expert cooks fled their homes—Kyiv successfully petitioned UNESCO to add its culture of borscht-making to the List of Intangible Cultural Heritage in Need of Urgent Safeguarding. Now, despite setbacks on the battlefield, the state of the soup was strong. A Ukrainian official reported on her government’s new borscht-related initiatives, such as hosting gastronomic festivals and inventorying vulnerable recipes. She looked forward to borscht’s graduation from Urgent Safeguarding to the Representative List of the Intangible Cultural Heritage (I.C.H.) of Humanity—which grew, that session, to include Italian opera singing, Bangladeshi rickshaw painting, Angolan sand art, and Peruvian ceviche…(More)”.

Synthetic Data and the Future of AI


Paper by Peter Lee: “The future of artificial intelligence (AI) is synthetic. Several of the most prominent technical and legal challenges of AI derive from the need to amass huge amounts of real-world data to train machine learning (ML) models. Collecting such real-world data can be highly difficult and can threaten privacy, introduce bias in automated decision making, and infringe copyrights on a massive scale. This Article explores the emergence of a seemingly paradoxical technical creation that can mitigate—though not completely eliminate—these concerns: synthetic data. Increasingly, data scientists are using simulated driving environments, fabricated medical records, fake images, and other forms of synthetic data to train ML models. Artificial data, in other words, is being used to train artificial intelligence. Synthetic data offers a host of technical and legal benefits; it promises to radically decrease the cost of obtaining data, sidestep privacy issues, reduce automated discrimination, and avoid copyright infringement. Alongside such promise, however, synthetic data offers perils as well. Deficiencies in the development and deployment of synthetic data can exacerbate the dangers of AI and cause significant social harm.

In light of the enormous value and importance of synthetic data, this Article sketches the contours of an innovation ecosystem to promote its robust and responsible development. It identifies three objectives that should guide legal and policy measures shaping the creation of synthetic data: provisioning, disclosure, and democratization. Ideally, such an ecosystem should incentivize the generation of high-quality synthetic data, encourage disclosure of both synthetic data and processes for generating it, and promote multiple sources of innovation. This Article then examines a suite of “innovation mechanisms” that can advance these objectives, ranging from open source production to proprietary approaches based on patents, trade secrets, and copyrights. Throughout, it suggests policy and doctrinal reforms to enhance innovation, transparency, and democratic access to synthetic data. Just as AI will have enormous legal implications, law and policy can play a central role in shaping the future of AI…(More)”.
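The Article's core idea, training models on artificial rather than collected data, can be illustrated with a deliberately simplified sketch (not drawn from the paper): synthetic tabular records are sampled from a statistical model fitted to real data, so the released rows mimic the originals' statistics without reproducing any real record:

```python
import random
import statistics

# Toy "real" medical records: (age, systolic blood pressure).
real_records = [(34, 118), (51, 131), (47, 125), (62, 140), (29, 112)]

def fit_gaussian(values):
    # Estimate the mean and standard deviation of one column.
    return statistics.mean(values), statistics.stdev(values)

def generate_synthetic(records, n, seed=0):
    # Sample each column independently from its fitted Gaussian; the output
    # mimics the real data's marginal statistics but contains no real row.
    rng = random.Random(seed)
    columns = list(zip(*records))
    params = [fit_gaussian(col) for col in columns]
    return [tuple(round(rng.gauss(mu, sigma)) for mu, sigma in params)
            for _ in range(n)]

synthetic = generate_synthetic(real_records, n=3)
print(synthetic)
```

Production generators model the joint distribution (for instance with generative networks) rather than independent column marginals, and even fitted parameters can leak information about the underlying individuals, which is one reason the Article stresses responsible development alongside the technology's promise.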

The Future of Trust


Book by Ros Taylor: “In a society battered by economic, political, cultural and ecological collapse, where do we place our trust, now that it is more vital than ever for our survival? How has that trust – in our laws, our media, our governments – been lost, and how can it be won back? Examining the police, the rule of law, artificial intelligence, the 21st century city and social media, Ros Taylor imagines what life might be like in years to come if trust continues to erode.

Have conspiracy theories permanently damaged our society? Will technological advances, which require more and more of our human selves, ultimately be rejected by future generations? And in a world fast approaching irreversible levels of ecological damage, how can we trust the custodians of these institutions to do the right thing – even as humanity faces catastrophe?…(More)”.

Scaling Up Development Impact


Book by Isabel Guerrero with Siddhant Gokhale and Jossie Fahsbender: “Today, nearly one billion people lack electricity, over three billion lack clean water, and 750 million lack basic literacy skills. Many of these challenges could be solved with existing solutions, and technology enables us to reach the last mile like never before. Yet, few solutions attain the necessary scale to match the size of these challenges. Scaling Up Development Impact offers an analytical framework, a set of practical tools, and adaptive evaluation techniques to accompany the scaling process. It presents rich organizational experiences that showcase real-world journeys toward increased impact…(More)”.

Prompting Diverse Ideas: Increasing AI Idea Variance


Paper by Lennart Meincke, Ethan Mollick, and Christian Terwiesch: “Unlike routine tasks where consistency is prized, in creativity and innovation the goal is to create a diverse set of ideas. This paper delves into the burgeoning interest in employing Artificial Intelligence (AI) to enhance the productivity and quality of the idea generation process. While previous studies have found that the average quality of AI ideas is quite high, prior research also has pointed to the inability of AI-based brainstorming to create sufficient dispersion of ideas, which limits novelty and the quality of the overall best idea. Our research investigates methods to increase the dispersion in AI-generated ideas. Using GPT-4, we explore the effect of different prompting methods on Cosine Similarity, the number of unique ideas, and the speed with which the idea space gets exhausted. We do this in the domain of developing a new product for college students, priced under $50. In this context, we find that (1) pools of ideas generated by GPT-4 with various plausible prompts are less diverse than ideas generated by groups of human subjects, (2) the diversity of AI-generated ideas can be substantially improved using prompt engineering, and (3) Chain-of-Thought (CoT) prompting leads to the highest diversity of ideas of all prompts we evaluated and was able to come close to what is achieved by groups of human subjects. It also was capable of generating the highest number of unique ideas of any prompt we studied…(More)”.
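The diversity metric the paper mentions, cosine similarity between ideas, can be sketched in a few lines. The toy vectors below are stand-ins for real embedding-model output (the paper does not publish its embeddings); a lower mean pairwise similarity indicates a more dispersed idea pool:

```python
from itertools import combinations
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mean_pairwise_similarity(embeddings):
    # Average cosine similarity over all unordered pairs of ideas;
    # a lower value means a more diverse idea pool.
    pairs = list(combinations(embeddings, 2))
    return sum(cosine_similarity(a, b) for a, b in pairs) / len(pairs)

# Toy 3-dimensional "embeddings" for four generated ideas.
ideas = [
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],  # near-duplicate of the first idea
    [0.1, 0.9, 0.2],
    [0.0, 0.2, 0.9],
]
print(round(mean_pairwise_similarity(ideas), 3))
```

Comparing this average across prompting strategies (plain, prompt-engineered, Chain-of-Thought) is essentially how one would rank them by diversity.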

All the News That’s Fit to Click: How Metrics Are Transforming the Work of Journalists


Book by Caitlin Petre: “Journalists today are inundated with data about which stories attract the most clicks, likes, comments, and shares. These metrics influence what stories are written, how news is promoted, and even which journalists get hired and fired. Do metrics make journalists more accountable to the public? Or are these data tools the contemporary equivalent of a stopwatch wielded by a factory boss, worsening newsroom working conditions and journalism quality? In All the News That’s Fit to Click, Caitlin Petre takes readers behind the scenes at the New York Times, Gawker, and the prominent news analytics company Chartbeat to explore how performance metrics are transforming the work of journalism.

Petre describes how digital metrics are a powerful but insidious new form of managerial surveillance and discipline. Real-time analytics tools are designed to win the trust and loyalty of wary journalists by mimicking key features of addictive games, including immersive displays, instant feedback, and constantly updated “scores” and rankings. Many journalists get hooked on metrics—and pressure themselves to work ever harder to boost their numbers.

Yet this is not a simple story of managerial domination. Contrary to the typical perception of metrics as inevitably disempowering, Petre shows how some journalists leverage metrics to their advantage, using them to advocate for their professional worth and autonomy…(More)”.

Trust in AI companies drops to 35 percent in new study


Article by Filip Timotija: “Trust in artificial intelligence (AI) companies has dipped to 35 percent over a five-year period in the U.S., according to new data.

The data, released Tuesday by public relations firm Edelman, found that trust in AI companies also dropped globally by eight points, going from 61 percent to 53 percent. 

The dwindling confidence in the rapidly developing tech industry comes as regulators in the U.S. and across the globe are brainstorming solutions on how to regulate the sector.

When broken down by political party, researchers found Democrats showed the most trust in AI companies at 38 percent — compared to Republicans’ 24 percent and independents’ 25 percent, per the study.

Multiple factors contributed to the decline in trust toward the companies polled in the data, according to Justin Westcott, Edelman’s chair of global technology.

“Key among these are fears related to privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations,” Westcott said, adding “the data points to a perceived lack of transparency and accountability in how AI companies operate and engage with societal impacts.”

Technology as a whole is losing its lead in trust among sectors, Edelman said, highlighting the key findings from the study.

“Eight years ago, technology was the leading industry in trust in 90 percent of the countries we study,” researchers wrote, referring to the 28 countries. “Now it is most trusted only in half.”

Westcott argued the findings should be a “wake up call” for AI companies to “build back credibility through ethical innovation, genuine community engagement and partnerships that place people and their concerns at the heart of AI developments.”

As for the impacts on the future for the industry as a whole, “societal acceptance of the technology is now at a crossroads,” he said, adding that trust in AI and the companies producing it should be seen “not just as a challenge, but an opportunity.”

Priorities, Westcott continued, should revolve around ethical practices, transparency and a “relentless focus” on the benefits to society AI can provide…(More)”.

Ukrainians Are Using an App to Return Home


Article by Yuliya Panfil and Allison Price: “Two years into Russia’s invasion of Ukraine, the human toll continues to mount. At least 11 million people have been displaced by heavy bombing, drone strikes, and combat, and well over a million homes have been damaged or destroyed. But just miles from the front lines of what is a conventional land invasion, something decidedly unconventional has been deployed to help restore Ukrainian communities.

Thousands of families whose homes have been hit by Russian shelling are using their smartphones to file compensation claims, access government funds, and begin to rebuild their homes. This innovation is part of eRecovery, the world’s first-ever example of a government compensation program for damaged or destroyed homes rolled out digitally, at scale, in the midst of a war. It’s one of the ways in which Ukraine’s tech-savvy government and populace have leaned into digital solutions to help counter Russian aggression with resilience and a speedier approach to reconstruction and recovery.

According to Ukraine’s Housing, Land and Property Technical Working Group, since its launch last summer, eRecovery has processed more than 83,000 compensation claims for damaged or destroyed property and paid out more than 45,000. In addition, more than half a million Ukrainians have taken the first step in the compensation process by filing a property damage report through Ukraine’s e-government platform, Diia. eRecovery’s potential to transform the way governments get people back into their homes following a war, natural disaster, or other calamity is hard to overstate…(More)”.

Wisdom of the Silicon Crowd: LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy


Paper by Philipp Schoenegger, Indre Tuminauskaite, Peter S. Park, and Philip E. Tetlock: “Human forecasting accuracy in practice relies on the ‘wisdom of the crowd’ effect, in which predictions about future events are significantly improved by aggregating across a crowd of individual forecasters. Past work on the forecasting ability of large language models (LLMs) suggests that frontier LLMs, as individual forecasters, underperform compared to the gold standard of a human crowd forecasting tournament aggregate. In Study 1, we expand this research by using an LLM ensemble approach consisting of a crowd of twelve LLMs. We compare the aggregated LLM predictions on 31 binary questions to those of a crowd of 925 human forecasters from a three-month forecasting tournament. Our preregistered main analysis shows that the LLM crowd outperforms a simple no-information benchmark and is not statistically different from the human crowd. In exploratory analyses, we find that these two approaches are equivalent with respect to medium-effect-size equivalence bounds. We also observe an acquiescence effect, with mean model predictions being significantly above 50%, despite an almost even split of positive and negative resolutions. Moreover, in Study 2, we test whether LLM predictions (of GPT-4 and Claude 2) can be improved by drawing on human cognitive output. We find that both models’ forecasting accuracy benefits from exposure to the median human prediction as information, improving accuracy by between 17% and 28%, though this leads to less accurate predictions than simply averaging human and machine forecasts. Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments via the simple, practically applicable method of forecast aggregation. This replicates the ‘wisdom of the crowd’ effect for LLMs, and opens up their use for a variety of applications throughout society…(More)”.
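The aggregation the study relies on is simple to sketch. The probabilities below are made up for illustration, not taken from the paper; the median pools a crowd of model forecasts, and the Brier score (a standard forecasting metric, where lower is better) evaluates the result against a binary outcome:

```python
from statistics import median

def brier_score(forecast, outcome):
    # Squared error between a probability forecast and a binary outcome (0 or 1);
    # an uninformed 50% forecast scores 0.25 on every question.
    return (forecast - outcome) ** 2

def crowd_forecast(individual_forecasts):
    # Aggregate a crowd of probability forecasts by taking the median.
    return median(individual_forecasts)

# Hypothetical forecasts from a twelve-model ensemble on one binary question.
llm_forecasts = [0.62, 0.55, 0.70, 0.58, 0.65, 0.60,
                 0.72, 0.51, 0.68, 0.57, 0.63, 0.66]
human_median = 0.45  # stand-in for the human-crowd median
outcome = 1          # the question resolved "yes"

ensemble = crowd_forecast(llm_forecasts)
hybrid = (ensemble + human_median) / 2  # averaging human and machine forecasts

print(round(brier_score(ensemble, outcome), 4))
print(round(brier_score(hybrid, outcome), 4))
```

With real questions, these per-question scores would be averaged across the full set, which is how the LLM crowd, the human crowd, and the hybrid averages can be compared head to head.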

Unconventional data, unprecedented insights: leveraging non-traditional data during a pandemic


Paper by Kaylin Bolt et al: “The COVID-19 pandemic prompted new interest in non-traditional data sources to inform response efforts and mitigate knowledge gaps. While non-traditional data offers some advantages over traditional data, it also raises concerns related to biases, representativity, informed consent and security vulnerabilities. This study focuses on three specific types of non-traditional data: mobility, social media, and participatory surveillance platform data. Qualitative results are presented on the successes, challenges, and recommendations of key informants who used these non-traditional data sources during the COVID-19 pandemic in Spain and Italy….

Non-traditional data proved valuable in providing rapid results and filling data gaps, especially when traditional data faced delays. Increased data access and innovative collaborative efforts across sectors facilitated its use. Challenges included unreliable access and data quality concerns, particularly the lack of comprehensive demographic and geographic information. To further leverage non-traditional data, participants recommended prioritizing data governance, establishing data brokers, and sustaining multi-institutional collaborations. The value of non-traditional data was perceived as underutilized in public health surveillance, program evaluation and policymaking. Participants saw opportunities to integrate them into public health systems with the necessary investments in data pipelines, infrastructure, and technical capacity…(More)”.