AI: a transformative force in maternal healthcare


Article by Afifa Waheed: “Artificial intelligence (AI) and robotics have enormous potential in healthcare and are quickly shifting the landscape – emerging as a transformative force. They offer a new dimension to the way healthcare professionals approach disease diagnosis, treatment and monitoring. AI is being used in healthcare to help diagnose patients, for drug discovery and development, to improve physician-patient communication, to transcribe voluminous medical documents, and to analyse genomics and genetics. Labs are conducting research work faster than ever before, work that otherwise would have taken decades without the assistance of AI. AI-driven research in life sciences has included applications looking to address broad-based areas, such as diabetes, cancer, chronic kidney disease and maternal health.

In addition to improving knowledge of and access to postnatal and neonatal care, AI can predict the risk of adverse events in antenatal and postnatal women and in neonatal care. It can be trained to identify those at risk of adverse events using patients’ health information such as nutrition status, age, existing health conditions and lifestyle factors.

AI can further be used to improve access to care for women in rural areas that lack trained professionals – AI-enabled ultrasound can assist front-line workers with image interpretation across a comprehensive set of obstetric measurements, improving access to high-quality early foetal ultrasound scans. The use of AI assistants and chatbots can also improve pregnant mothers’ experience by helping them find available physicians, schedule appointments and even answer some patient questions…

Many healthcare professionals I have spoken to emphasised that pre-existing conditions such as high blood pressure that can lead to preeclampsia, iron deficiency, cardiovascular disease, age-related risks for those over 35 and various other existing health conditions, as well as failure of labour to progress, which may lead to a Caesarean section (C-section), could all cause maternal deaths. Training AI models to detect these conditions early and accurately could prove beneficial. AI systems can leverage advanced algorithms, machine learning (ML) techniques and predictive models to enhance decision-making, optimise healthcare delivery, and ultimately improve patient outcomes in foeto-maternal health…(More)”.
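The kind of early-warning model the article describes is, in essence, a supervised classifier trained on routinely collected patient features. A minimal sketch of that idea follows; the dataset, feature names, values and model choice are illustrative assumptions for the example, not details from the article.

```python
# Illustrative sketch of an adverse-event risk model for antenatal care.
# The records, feature names and values below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical patient records: routine antenatal measurements plus an outcome
# label taken from follow-up notes (1 = adverse event occurred).
records = pd.DataFrame({
    "age":              [24, 38, 31, 41, 29, 36],
    "systolic_bp":      [118, 148, 122, 152, 115, 139],
    "haemoglobin_g_dl": [12.1, 9.8, 11.4, 9.5, 12.6, 10.2],
    "has_hypertension": [0, 1, 0, 1, 0, 0],
    "adverse_event":    [0, 1, 0, 1, 0, 1],
})

X = records.drop(columns="adverse_event")
y = records["adverse_event"]

# A simple, auditable classifier; a real system would need far more data,
# validation across sites and populations, and careful bias checks.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new patient and flag her for closer monitoring if the risk is high.
new_patient = pd.DataFrame([{"age": 37, "systolic_bp": 145,
                             "haemoglobin_g_dl": 9.9, "has_hypertension": 1}])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted adverse-event risk: {risk:.2f}")
```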

Against Elections: The Lottocratic Alternative


Paper by Alexander A. Guerrero: “It is widely accepted that electoral representative democracy is better — along a number of different normative dimensions — than any other alternative lawmaking political arrangement. It is not typically seen as much of a competition: it is also widely accepted that the only legitimate alternative to electoral representative democracy is some form of direct democracy, but direct democracy — we are told — would lead to bad policy. This article makes the case that there is a legitimate alternative system — one that uses lotteries, not elections, to select political officials — that would be better than electoral representative democracy. Part I diagnoses two significant failings of modern-day systems of electoral representative government: the failure of responsiveness and the failure of good governance. The argument offered suggests that these flaws run deep, so that even significant and politically unlikely reforms with respect to campaign finance and election law would make little difference. Although my distillation of the argument is novel, the basic themes will likely be familiar. I anticipate the initial response to the argument may be familiar as well: the Churchillian shrug. Parts II, III, and IV of this article represent the beginning of an effort to move past that response, to think about alternative political systems that might avoid some of the problems with the electoral representative system without introducing new and worse problems. In the second and third parts of the article, I outline an alternative political system, the lottocratic system, and present some of the virtues of such a system. In the fourth part of the article, I consider some possible problems for the system. The overall aims of this article are to raise worries for electoral systems of government, to present the lottocratic system and to defend the view that this system might be a normatively attractive alternative, removing a significant hurdle to taking a non-electoral system of government seriously as a possible improvement to electoral democracy…(More)”

Gen AI: too much spend, too little benefit?


Article by Jason Koebler: “Investment giant Goldman Sachs published a research paper about the economic viability of generative AI, which notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.”

The paper, called “Gen AI: too much spend, too little benefit?” is based on a series of interviews with Goldman Sachs economists and researchers, MIT professor Daron Acemoglu, and infrastructure experts. The paper ultimately questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on, but says investors may continue to get rich anyway. “Despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst,” the paper notes. 

Goldman Sachs researchers also say that AI optimism is driving large growth in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but say that the stock price gains we’ve seen are based on the assumption that generative AI is going to lead to higher productivity (which necessarily means automation, layoffs, lower labor costs, and higher efficiency). These stock gains are already baked in, Goldman Sachs argues in the paper: “Although the productivity pick-up that AI promises could benefit equities via higher profit growth, we find that stocks often anticipate higher productivity growth before it materializes, raising the risk of overpaying. And using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.”…(More)

Bringing Communities In, Achieving AI for All


Article by Shobita Parthasarathy and Jared Katzman: “…To this end, public and philanthropic research funders, universities, and the tech industry should be seeking out partnerships with struggling communities, to learn what they need from AI and build it. Regulators, too, should have their ears to the ground, not just the C-suite. Typical members of a marginalized community—or, indeed, any nonexpert community—may not know the technical details of AI, but they understand better than anyone else the power imbalances at the root of concerns surrounding AI bias and discrimination. And so it is from communities marginalized by AI, and from scholars and organizations focused on understanding and ameliorating social disadvantage, that AI designers and regulators most need to hear.

Progress toward AI equity begins at the agenda-setting stage, when funders, engineers, and corporate leaders make decisions about research and development priorities. This is usually seen as a technical or management task, to be carried out by experts who understand the state of scientific play and the unmet needs of the market… A heartening example comes from Carnegie Mellon University, where computer scientists worked with residents in the institution’s home city of Pittsburgh to build a technology that monitored and visualized local air quality. The collaboration began when researchers attended community meetings where they heard from residents who were suffering the effects of air pollution from a nearby factory. The residents had struggled to get the attention of local and national officials because they were unable to provide the sort of data that would motivate interest in their case. The researchers got to work on prototype systems that could produce the needed data and refined their technology in response to community input. Eventually their system brought together heterogeneous information, including crowdsourced smell reports, video footage of factory smokestacks, and air-quality and wind data, which the residents then submitted to government entities. After reviewing the data, administrators at the Environmental Protection Agency agreed to review the factory’s compliance, and within a year the factory’s parent company announced that the facility would close…(More)”.

Finding, distinguishing, and understanding overlooked policy entrepreneurs


Paper by Gwen Arnold, Meghan Klasic, Changtong Wu, Madeline Schomburg & Abigail York: “Scholars have spent decades arguing that policy entrepreneurs, change agents who work individually and in groups to influence the policy process, can be crucial in introducing policy innovation and spurring policy change. How to identify policy entrepreneurs empirically has received less attention. This oversight is consequential because scholars trying to understand when policy entrepreneurs emerge, and why, and what makes them more or less successful, need to be able to identify these change agents reliably and accurately. This paper explores the ways policy entrepreneurs are currently identified and highlights issues with current approaches. We introduce a new technique for eliciting and distinguishing policy entrepreneurs, coupling automated and manual analysis of local news media and a survey of policy entrepreneur candidates. We apply this technique to the empirical case of unconventional oil and gas drilling in Pennsylvania and derive some tentative results concerning factors which increase entrepreneurial efficacy…(More)”.
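One plausible shape for the automated portion of such a pipeline is counting which individuals are named repeatedly in local news coverage of the issue and treating them as candidates for follow-up. The sketch below is an illustrative assumption, not the authors’ actual method; the articles, the naive name pattern and the threshold are all placeholders.

```python
# Illustrative sketch: surfacing frequently named individuals in local news
# coverage as policy-entrepreneur candidates for a later survey stage.
import re
from collections import Counter

articles = [
    "Township supervisor Jane Doe again urged council to adopt a drilling ordinance.",
    "At the hearing, Jane Doe and attorney John Smith presented a model setback rule.",
    "John Smith has circulated the draft ordinance to neighbouring municipalities.",
]

# Naive pattern for capitalised two-word names; real work would use proper
# named-entity recognition plus the manual coding the paper describes.
name_pattern = re.compile(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b")

mentions = Counter()
for text in articles:
    mentions.update(name_pattern.findall(text))

# Individuals named in at least two articles become candidates for the survey.
candidates = [name for name, count in mentions.most_common() if count >= 2]
print(candidates)
```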

Protecting Policy Space for Indigenous Data Sovereignty Under International Digital Trade Law


Paper by Andrew D. Mitchell and Theo Samlidis: “The impact of economic agreements on Indigenous peoples’ broader rights and interests has been subject to ongoing scrutiny. Technological developments and an increasing emphasis on Indigenous sovereignty within the digital domain have given rise to a global Indigenous data sovereignty movement, surfacing concerns about how international economic law impacts Indigenous peoples’ sovereignty over their data. This Article examines the policy space certain governments have reserved under international economic agreements to introduce measures for protecting Indigenous data or digital sovereignty (IDS). We argue that treaty countries have secured, under recent international digital trade chapters and agreements, the benefits of a comprehensive economic treaty and sufficient regulatory autonomy to protect Indigenous data sovereignty…(More)”

The Digital Economy Report 2024


Report by UNCTAD: “…underscores the urgent need for environmentally sustainable and inclusive digitalization strategies.

Digital technology and infrastructure depend heavily on raw materials, and the production and disposal of more and more devices, along with growing water and energy needs, are taking an increasing toll on the planet.

For example, the production and use of digital devices, data centres and information and communications technology (ICT) networks account for an estimated 6% to 12% of global electricity use.

Developing countries bear the brunt of the environmental costs of digitalization while reaping fewer benefits. They export low value-added raw materials and import high value-added devices, along with increasing digital waste. Geopolitical tensions over critical minerals, abundant in many of these countries, complicate the challenges.

The report calls for bold action from policymakers, industry leaders and consumers. It urges a global shift towards a circular digital economy, focusing on circularity by design through durable products, responsible consumption, reuse and recycling, and sustainable business models…(More)”.

The era of predictive AI is almost over


Essay by Dean W. Ball: “Artificial intelligence is a Rorschach test. When OpenAI’s GPT-4 was released in March 2023, Microsoft researchers triumphantly, and prematurely, announced that it possessed “sparks” of artificial general intelligence. Cognitive scientist Gary Marcus, on the other hand, argued that Large Language Models like GPT-4 are nowhere close to the loosely defined concept of AGI. Indeed, Marcus is skeptical of whether these models “understand” anything at all. They “operate over ‘fossilized’ outputs of human language,” he wrote in a 2023 paper, “and seem capable of implementing some automatic computations pertaining to distributional statistics, but are incapable of understanding due to their lack of generative world models.” The “fossils” to which Marcus refers are the models’ training data — these days, something close to all the text on the Internet.

This notion — that LLMs are “just” next-word predictors based on statistical models of text — is so common now as to be almost a trope. It is used, both correctly and incorrectly, to explain the flaws, biases, and other limitations of LLMs. Most importantly, it is used by AI skeptics like Marcus to argue that there will soon be diminishing returns from further LLM development: We will get better and better statistical approximations of existing human knowledge, but we are not likely to see another qualitative leap toward “general intelligence.”

There are two problems with this deflationary view of LLMs. The first is that next-word prediction, at sufficient scale, can lead models to capabilities that no human designed or even necessarily intended — what some call “emergent” capabilities. The second problem is that increasingly — and, ironically, starting with ChatGPT — language models employ techniques that combust the notion of pure next-word prediction of Internet text…(More)”
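To make concrete what “next-word prediction” means mechanically, the sketch below runs a small open model in an explicit token-by-token loop. GPT-2 and greedy decoding are illustrative choices for the example, not the models or methods the essay discusses.

```python
# Minimal sketch of autoregressive next-token prediction using Hugging Face
# transformers. GPT-2 is used only because it is small and freely available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# At each step the model outputs a probability distribution over its entire
# vocabulary; here we simply append the single most likely token and repeat.
for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids).logits            # (batch, seq_len, vocab_size)
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    next_token = torch.argmax(next_token_probs).reshape(1, 1)
    input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Everything such a model “says” is produced by a loop of this kind; the disagreement the essay describes is over how much capability that loop can acquire when the model and its training data are scaled up.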

Fixing frictions: ‘sludge audits’ around the world


OECD Report: “Governments worldwide are increasingly adopting behavioural science methodologies to address “sludge” – the unjustified frictions impeding people’s access to government services and exacerbating psychological burdens. Sludge audits, grounded in behavioural science, provide a structured approach for identifying, quantifying, and preventing sludge in public services and government processes. This document delineates Good Practice Principles, derived from ten case studies conducted during the International Sludge Academy, aimed at promoting the integration of sludge audit methodologies into public governance and service design. By enhancing government efficiency and bolstering public trust in government, these principles contribute to the broader agenda on administrative simplification, digital services, and public sector innovation…(More)”.