Gen AI: too much spend, too little benefit?


Article by Jason Koebler: “Investment giant Goldman Sachs published a research paper about the economic viability of generative AI which notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.” 

The paper, called “Gen AI: too much spend, too little benefit?” is based on a series of interviews with Goldman Sachs economists and researchers, MIT professor Daron Acemoglu, and infrastructure experts. The paper ultimately questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on, but says investors may continue to get rich anyway. “Despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst,” the paper notes. 

Goldman Sachs researchers also say that AI optimism is driving large gains in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but argue that those gains are based on the assumption that generative AI will lead to higher productivity (which necessarily means automation, layoffs, lower labor costs, and higher efficiency). Those expected gains are already baked in, Goldman Sachs argues in the paper: “Although the productivity pick-up that AI promises could benefit equities via higher profit growth, we find that stocks often anticipate higher productivity growth before it materializes, raising the risk of overpaying. And using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.”…(More)”.

Protecting Policy Space for Indigenous Data Sovereignty Under International Digital Trade Law


Paper by Andrew D. Mitchell and Theo Samlidis: “The impact of economic agreements on Indigenous peoples’ broader rights and interests has been subject to ongoing scrutiny. Technological developments and an increasing emphasis on Indigenous sovereignty within the digital domain have given rise to a global Indigenous data sovereignty movement, surfacing concerns about how international economic law impacts Indigenous peoples’ sovereignty over their data. This Article examines the policy space certain governments have reserved under international economic agreements to introduce measures for protecting Indigenous data or digital sovereignty (IDS). We argue that treaty countries have secured, under recent international digital trade chapters and agreements, the benefits of a comprehensive economic treaty and sufficient regulatory autonomy to protect Indigenous data sovereignty…(More)”

The era of predictive AI is almost over


Essay by Dean W. Ball: “Artificial intelligence is a Rorschach test. When OpenAI’s GPT-4 was released in March 2023, Microsoft researchers triumphantly, and prematurely, announced that it possessed “sparks” of artificial general intelligence. Cognitive scientist Gary Marcus, on the other hand, argued that Large Language Models like GPT-4 are nowhere close to the loosely defined concept of AGI. Indeed, Marcus is skeptical of whether these models “understand” anything at all. They “operate over ‘fossilized’ outputs of human language,” he wrote in a 2023 paper, “and seem capable of implementing some automatic computations pertaining to distributional statistics, but are incapable of understanding due to their lack of generative world models.” The “fossils” to which Marcus refers are the models’ training data — these days, something close to all the text on the Internet.

This notion — that LLMs are “just” next-word predictors based on statistical models of text — is so common now as to be almost a trope. It is used, both correctly and incorrectly, to explain the flaws, biases, and other limitations of LLMs. Most importantly, it is used by AI skeptics like Marcus to argue that there will soon be diminishing returns from further LLM development: We will get better and better statistical approximations of existing human knowledge, but we are not likely to see another qualitative leap toward “general intelligence.”

There are two problems with this deflationary view of LLMs. The first is that next-word prediction, at sufficient scale, can lead models to capabilities that no human designed or even necessarily intended — what some call “emergent” capabilities. The second problem is that increasingly — and, ironically, starting with ChatGPT — language models employ techniques that combust the notion of pure next-word prediction of Internet text…(More)”
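
To make the “next-word predictor” framing concrete, here is a toy sketch of what a purely statistical model of text looks like at its simplest: count which word follows which in a small corpus, then generate by repeatedly sampling from those counts. This is a minimal illustration of the “distributional statistics” view of language modeling, not how modern LLMs are actually trained or sampled, and it is not drawn from the essay itself.

```python
import random
from collections import Counter, defaultdict

# Toy "statistical model of text": record which words follow which in a corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follow_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one word at a time, exactly as a "next-word predictor" would.
word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat slept on the mat and the cat"
```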

Your Driving App Is Leading You Astray


Article by Julia Angwin: “…If you use a navigation app, you probably have felt helpless anger when your stupid phone endangers your life, and the lives of all the drivers around you, to potentially shave a minute or two from your drive time. Or maybe it’s stuck you on an ugly freeway when a glorious, ocean-hugging alternative lies a few miles away. Or maybe it’s trapped you on a route with no four-way stops, ignoring a less stressful solution that doesn’t leave you worried about a car barreling out of nowhere.

For all the discussion of the many extraordinary ways algorithms have changed our society and our lives, one of the most impactful, and most infuriating, often escapes notice. Dominated by a couple of enormously powerful tech monopolists that have better things to worry about, our leading online mapping systems from Google and Apple are not nearly as good as they could be.

You may have heard the extreme stories, such as when navigation apps like Waze and Google Maps apparently steered drivers into lakes and onto impassable dirt roads, or when jurisdictions beg Waze to stop dumping traffic onto their residential streets. But the reality is these apps affect us, our roads and our communities every minute of the day. Primarily programmed to find the fastest route, they endanger and infuriate us on a remarkably regular basis….

The best hope for competition relies on the success of OpenStreetMap. Its data underpins most maps other than Google, including Amazon, Facebook and Apple, but it is so under-resourced that it only recently hired paid systems administrators to ensure its back-end machines kept running…. In addition, we can promote competition by using the few available alternatives. To navigate cities with public transit, try apps such as Citymapper that offer bike, transit and walking directions. Or use the privacy-focused Organic Maps…(More)”.
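
The complaint that these apps are “primarily programmed to find the fastest route” is, at bottom, a complaint about what the routing algorithm is told to optimize. The sketch below is a generic, hypothetical illustration (a plain shortest-path search over made-up roads, not Google's or Apple's routing code): the same search returns the stressful freeway when edge weights are travel time alone, and the calmer street once a stress penalty is added to the weights.

```python
# Toy illustration: a shortest-path search only optimizes whatever is encoded in
# the edge weights. The roads, times, and "stress" penalties are invented.
import heapq

def shortest_path(graph, start, goal, weight):
    """Dijkstra-style search over `graph`; `weight(edge)` decides what 'best' means."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight(edge), nxt, path + [nxt]))
    return None

# Two routes to the office: a fast freeway with unprotected merges vs. a calmer street.
graph = {
    "home":    [("freeway", {"minutes": 4, "stress": 5}), ("side_st", {"minutes": 6, "stress": 1})],
    "freeway": [("office", {"minutes": 5, "stress": 4})],
    "side_st": [("office", {"minutes": 7, "stress": 1})],
}

print(shortest_path(graph, "home", "office", weight=lambda e: e["minutes"]))
print(shortest_path(graph, "home", "office", weight=lambda e: e["minutes"] + 2 * e["stress"]))
```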

Scaling Synthetic Data Creation with 1,000,000,000 Personas


Paper by Xin Chan, et al: “We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub — a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world’s total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub’s use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs and tools (functions) at scale, we demonstrate persona-driven data synthesis is versatile, scalable, flexible, and easy to use, potentially driving a paradigm shift in synthetic data creation and applications in practice, which may have a profound impact on LLM research and development…(More)”.
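
The mechanic the abstract describes, steering a single LLM toward many perspectives by conditioning each request on a different persona, can be sketched in a few lines. This is a hypothetical illustration, not the Persona Hub pipeline or its prompt templates: `call_llm` is a stand-in for whatever text-generation API is available, and the personas and task are invented for the example.

```python
# Hypothetical sketch of persona-driven data synthesis: pairing one task prompt
# with many personas so the same model yields diverse synthetic examples.
personas = [
    "a structural engineer who inspects bridges after storms",
    "a high-school teacher preparing a probability quiz",
    "a dispatcher routing delivery trucks across a city",
]

TASK = "Write one challenging math word problem grounded in your daily work."

def build_prompt(persona: str) -> str:
    return f"You are {persona}.\n{TASK}"

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real text-generation call (any LLM API or local model).
    return f"[model output for: {prompt[:48]}...]"

synthetic_data = [
    {"persona": p, "problem": call_llm(build_prompt(p))} for p in personas
]
print(len(synthetic_data), "synthetic examples generated")
```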

Collaborating with Journalists and AI: Leveraging Social Media Images for Enhanced Disaster Resilience and Recovery


Paper by Murthy Dhiraj et al: “Methods to meaningfully integrate journalists into crisis informatics remain lacking. We explored the feasibility of generating a real-time, priority-driven map of infrastructure damage during a natural disaster by strategically selecting journalist networks to identify sources of image-based infrastructure-damage data. Using Twitter’s REST API, 1,000,522 tweets were collected from September 13-18, 2018, during and after Hurricane Florence made landfall in the United States. Tweets were classified by source (e.g., news organizations or citizen journalists), and 11,638 images were extracted. We used Google’s AutoML Vision software to develop a machine learning image classification model to interpret this sample of images; 80% of the labeled data was used for training, 10% for validation, and 10% for testing. The model achieved an average precision of 90.6%, an average recall of 77.2%, and an F1 score of 0.834. In the future, establishing strategic networks of journalists ahead of disasters will reduce the time needed to identify disaster-response targets, thereby focusing relief and recovery efforts in real time. This approach ultimately aims to save lives and mitigate harm…(More)”.
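
The reported metrics hang together: F1 is the harmonic mean of precision and recall, and the quoted 90.6% precision and 77.2% recall do work out to roughly 0.834. A quick check using only the figures cited above (the split sizes assume all 11,638 extracted images were labeled, which the excerpt does not state):

```python
# Check that the reported F1 score follows from the reported precision and recall.
precision = 0.906
recall = 0.772
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.834

# Rough 80/10/10 split sizes, assuming all 11,638 extracted images were labeled.
n_images = 11_638
print(int(n_images * 0.8), int(n_images * 0.1), int(n_images * 0.1))  # 9310 1163 1163
```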

A new index is using AI tools to measure U.S. economic growth in a broader way


Article by Jeff Cox: “Measuring the strength of the sprawling U.S. economy is no easy task, so one firm is sending artificial intelligence in to do the job.

The Zeta Economic Index, launched Monday, uses generative AI to analyze what its developers call “trillions of behavioral signals,” largely focused on consumer activity, to score growth both on a broad measure of economic health and on a separate measure of stability.

At its core, the index will gauge online and offline activity across eight categories, aiming to give a comprehensive look that incorporates standard economic data points such as unemployment and retail sales combined with high-frequency information for the AI age.

“The algorithm is looking at traditional economic indicators that you would normally look at. But then inside of our proprietary algorithm, we’re ingesting the behavioral data and transaction data of 240 million Americans, which nobody else has,” said David Steinberg, co-founder, chairman and CEO of Zeta Global.

“So instead of looking at the data in the rearview mirror like everybody else, we’re trying to put it out in advance to give a 30-day advanced snapshot of where the economy is going,” he added…(More)”.
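
The article does not reveal how Zeta’s proprietary algorithm combines its inputs, so the sketch below is only a generic illustration of what a composite index of this kind can look like: several category signals are normalized to a common scale and averaged into a single score. The category names, values, and the “stability” heuristic are all assumptions made for the example.

```python
# Generic composite-index sketch (NOT Zeta's proprietary algorithm): normalize a
# handful of category signals to a common scale, then combine them into one score.
from statistics import mean

# Hypothetical readings for eight categories; real inputs would mix traditional
# indicators (unemployment, retail sales) with high-frequency behavioral data.
signals = {
    "retail_spending": 0.62,
    "job_openings": 0.55,
    "travel_activity": 0.70,
    "housing_demand": 0.48,
    "credit_usage": 0.58,
    "digital_engagement": 0.66,
    "small_business_sales": 0.51,
    "sentiment": 0.60,
}  # each already scaled to 0..1, where 0.5 is the long-run average

health_score = mean(signals.values()) * 100  # broad "health" reading
# Toy "stability" heuristic: less dispersion across categories reads as more stable.
stability_score = 100 - (max(signals.values()) - min(signals.values())) * 100

print(round(health_score, 1), round(stability_score, 1))
```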

Exploring Digital Biomarkers for Depression Using Mobile Technology


Paper by Yuezhou Zhang et al: “With the advent of ubiquitous sensors and mobile technologies, wearables and smartphones offer a cost-effective means for monitoring mental health conditions, particularly depression. These devices enable the continuous collection of behavioral data, providing novel insights into the daily manifestations of depressive symptoms.

We found several significant links between depression severity and various behavioral biomarkers: elevated depression levels were associated with diminished sleep quality (assessed through Fitbit metrics), reduced sociability (approximated by Bluetooth), decreased levels of physical activity (quantified by step counts and GPS data), a slower cadence of daily walking (captured by smartphone accelerometers), and disturbances in circadian rhythms (analyzed across various data streams).

Leveraging digital biomarkers for assessing and continuously monitoring depression introduces a new paradigm in early detection and development of customized intervention strategies. Findings from these studies not only enhance our comprehension of depression in real-world settings but also underscore the potential of mobile technologies in the prevention and management of mental health issues…(More)”

Building an AI ecosystem in a small nation: lessons from Singapore’s journey to the forefront of AI


Paper by Shaleen Khanal, Hongzhou Zhang & Araz Taeihagh: “Artificial intelligence (AI) is arguably the most transformative technology of our time. While all nations would like to mobilize their resources to play an active role in AI development and utilization, only a few nations, such as the United States and China, have the resources and capacity to do so. If so, how can smaller or less resourceful countries navigate the technological terrain to emerge at the forefront of AI development? This research presents an in-depth analysis of Singapore’s journey in constructing a robust AI ecosystem amidst the prevailing global dominance of the United States and China. By examining the case of Singapore, we argue that by designing policies that address risks associated with AI development and implementation, smaller countries can create a vibrant AI ecosystem that encourages experimentation and early adoption of the technology. In addition, through Singapore’s case, we demonstrate the active role the government can play, not only as a policymaker but also as a steward to guide the rest of the economy towards the application of AI…(More)”.