Paper by Orestis Loukas and Ho-Ryun Chung: “Computer-based decision systems are widely used to automate decisions in many aspects of everyday life, including sensitive areas like hiring, lending and even criminal sentencing. A decision pipeline heavily relies on large volumes of historical real-world data for training its models. However, historical training data often contains gender, racial or other biases which are propagated to the trained models, influencing computer-based decisions. In this work, we propose a robust methodology that guarantees the removal of unwanted biases while maximally preserving classification utility. Our approach can always achieve this in a model-independent way by deriving from real-world data the asymptotic dataset that uniquely encodes demographic parity and realism. As a proof-of-principle, we deduce from public census records such an asymptotic dataset from which synthetic samples can be generated to train well-established classifiers. Benchmarking the generalization capability of these classifiers trained on our synthetic data, we confirm the absence of any explicit or implicit bias in the computer-aided decision…(More)”.
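The demographic-parity criterion the paper aims to encode can be illustrated with a minimal sketch: a classifier satisfies demographic parity when its positive-decision rate is the same across demographic groups. The function and toy data below are illustrative only and are not the authors' asymptotic-dataset construction.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference between the highest and lowest
    positive-prediction rates across demographic groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = list(rates.values())
    return max(vals) - min(vals)  # 0.0 means demographic parity holds

# Hypothetical binary hiring decisions for two demographic groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # |3/4 - 1/4| = 0.5
```

A debiasing pipeline of the kind described would drive this gap toward zero on the synthetic training data while preserving as much of the original label structure as possible.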
Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence
Paper by Andres Karjus: “The increasing capacities of large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences, augmenting and automating qualitative analytic tasks previously typically allocated to human labor. This contribution proposes a systematic mixed methods framework to harness qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. 16 machine-assisted case studies are showcased as proof of concept. Tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, historical event cause inference and text mining, detection of political stance, text and idea reuse, genre composition in literature and film; social network inference, automated lexicography, missing metadata augmentation, and multimodal visual cultural analytics. In contrast to the focus on English in the emerging LLM applicability literature, many examples here deal with scenarios involving smaller languages and historical texts prone to digitization distortions. In all but the most difficult tasks requiring expert knowledge, generative LLMs can demonstrably serve as viable research instruments. LLM (and human) annotations may contain errors and variation, but the agreement rate can and should be accounted for in subsequent statistical modeling; a bootstrapping approach is discussed. The replications among the case studies illustrate how tasks previously requiring potentially months of team effort and complex computational pipelines, can now be accomplished by an LLM-assisted scholar in a fraction of the time. Importantly, this approach is not intended to replace, but to augment researcher knowledge and skills. With these opportunities in sight, qualitative expertise and the ability to pose insightful questions have arguably never been more critical…(More)”.
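The bootstrapping approach mentioned for handling annotation error and variation can be sketched as follows: resample the (possibly noisy) LLM annotations with replacement to obtain a confidence interval on the quantity of interest, rather than treating the annotated proportion as exact. The data and thresholds below are hypothetical, not drawn from the paper's case studies.

```python
import random

def bootstrap_proportion(labels, n_boot=2000, seed=42):
    """Bootstrap a 95% percentile interval for the proportion of
    positive labels, treating each annotation as one observation."""
    rng = random.Random(seed)
    n = len(labels)
    stats = sorted(
        sum(rng.choice(labels) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Hypothetical: 100 LLM annotations, 70 coded as "positive stance"
annotations = [1] * 70 + [0] * 30
lo, hi = bootstrap_proportion(annotations)  # interval around 0.7
```

Downstream statistical models can then carry this interval forward instead of a point estimate, which is one way the agreement rate between LLM and human annotators "can and should be accounted for".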
Missing Persons: The Case of National AI Strategies
Article by Susan Ariel Aaronson and Adam Zable: “Policy makers should inform, consult and involve citizens as part of their efforts to develop data-driven technologies such as artificial intelligence (AI). Although many users rely on AI systems, they do not understand how these systems use their data to make predictions and recommendations that can affect their daily lives. Over time, if they see their data being misused, users may learn to distrust both the systems and how policy makers regulate them. This paper examines whether officials informed and consulted their citizens as they developed a key aspect of AI policy — national AI strategies. Building on a data set of 68 countries and the European Union, the authors used qualitative methods to examine whether, how and when governments engaged with their citizens on their AI strategies and whether they were responsive to public comment, concluding that policy makers are missing an opportunity to build trust in AI by not using this process to involve a broader cross-section of their constituents…(More)”.
These Prisoners Are Training AI
Article by Morgan Meaker: “…Around the world, millions of so-called “clickworkers” train artificial intelligence models, teaching machines the difference between pedestrians and palm trees, or what combination of words describe violence or sexual abuse. Usually these workers are stationed in the global south, where wages are cheap. OpenAI, for example, uses an outsourcing firm that employs clickworkers in Kenya, Uganda, and India. That arrangement works for American companies, operating in the world’s most widely spoken language, English. But there are not a lot of people in the global south who speak Finnish.
That’s why Metroc turned to prison labor. The company gets cheap, Finnish-speaking workers, while the prison system can offer inmates employment that, it says, prepares them for the digital world of work after their release. Using prisoners to train AI creates uneasy parallels with the kind of low-paid and sometimes exploitive labor that has often existed downstream in technology. But in Finland, the project has received widespread support.
“There’s this global idea of what data labor is. And then there’s what happens in Finland, which is very different if you look at it closely,” says Tuukka Lehtiniemi, a researcher at the University of Helsinki, who has been studying data labor in Finnish prisons.
For four months, Marmalade has lived here, in Hämeenlinna prison. The building is modern, with big windows. Colorful artwork tries to enforce a sense of cheeriness on otherwise empty corridors. If it wasn’t for the heavy gray security doors blocking every entry and exit, these rooms could easily belong to a particularly soulless school or university complex.
Finland might be famous for its open prisons—where inmates can work or study in nearby towns—but this is not one of them. Instead, Hämeenlinna is the country’s highest-security institution housing exclusively female inmates. Marmalade has been sentenced to six years. Under privacy rules set by the prison, WIRED is not able to publish Marmalade’s real name, exact age, or any other information that could be used to identify her. But in a country where prisoners serving life terms can apply to be released after 12 years, six years is a heavy sentence. And like the other 100 inmates who live here, she is not allowed to leave…(More)”.
Initial policy considerations for generative artificial intelligence
OECD Report: “Generative artificial intelligence (AI) creates new content in response to prompts, offering transformative potential across multiple sectors such as education, entertainment, healthcare and scientific research. However, these technologies also pose critical societal and policy challenges that policy makers must confront: potential shifts in labour markets, copyright uncertainties, and risks associated with the perpetuation of societal biases and the potential for misuse in the creation of disinformation and manipulated content. Consequences could extend to the spreading of mis- and disinformation, perpetuation of discrimination, distortion of public discourse and markets, and the incitement of violence. Governments recognise the transformative impact of generative AI and are actively working to address these challenges. This paper aims to inform these policy considerations and support decision makers in addressing them…(More)”.
Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality
Paper by Fabrizio Dell’Acqua et al: “The public release of Large Language Models (LLMs) has sparked tremendous interest in how humans will use Artificial Intelligence (AI) to accomplish a variety of tasks. In our study conducted with Boston Consulting Group, a global management consulting firm, we examine the performance implications of AI on realistic, complex, and knowledge-intensive tasks. The pre-registered experiment involved 758 consultants comprising about 7% of the individual contributor-level consultants at the company. After establishing a performance baseline on a similar task, subjects were randomly assigned to one of three conditions: no AI access, GPT-4 AI access, or GPT-4 AI access with a prompt engineering overview. We suggest that the capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI. For each one of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group). Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores. For a task selected to be outside the frontier, however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI. Further, our analysis shows the emergence of two distinctive patterns of successful AI use by humans along a spectrum of human-AI integration.
One set of consultants acted as “Centaurs,” like the mythical half-horse/half-human creature, dividing and delegating their solution-creation activities to the AI or to themselves. Another set of consultants acted more like “Cyborgs,” completely integrating their task flow with the AI and continually interacting with the technology…(More)”.
Artificial intelligence in local governments: perceptions of city managers on prospects, constraints and choices
Paper by Tan Yigitcanlar, Duzgun Agdas & Kenan Degirmenci: “Highly sophisticated capabilities of artificial intelligence (AI) have skyrocketed its popularity across many industry sectors globally. The public sector is one of these. Many cities around the world are trying to position themselves as leaders of urban innovation through the development and deployment of AI systems. Likewise, increasing numbers of local government agencies are attempting to utilise AI technologies in their operations to deliver policy and generate efficiencies in highly uncertain and complex urban environments. While the popularity of AI is on the rise in urban policy circles, there is limited understanding and a lack of empirical studies on city managers' perceptions concerning urban AI systems. Bridging this gap is the rationale of this study. The methodological approach adopted in this study is twofold. First, the study collects data through semi-structured interviews with city managers from Australia and the US. Then, the study analyses the data using the summative content analysis technique with two data analysis software packages. The analysis identifies the following themes and generates insights into local government services: AI adoption areas, cautionary areas, challenges, effects, impacts, knowledge basis, plans, preparedness, roadblocks, technologies, deployment timeframes, and usefulness. The study findings inform city managers in their efforts to deploy AI in their local government operations, and offer directions for prospective research…(More)”.
AI and the next great tech shift
Book review by John Thornhill: “When the South Korean political activist Kim Dae-jung was jailed for two years in the early 1980s, he powered his way through some 600 books in his prison cell, such was his thirst for knowledge. One book that left a lasting impression was The Third Wave by the renowned futurist Alvin Toffler, who argued that an imminent information revolution was about to transform the world as profoundly as the preceding agricultural and industrial revolutions.
“Yes, this is it!” Kim reportedly exclaimed. When later elected president, Kim referred to the book many times in his drive to turn South Korea into a technological powerhouse.
Forty-three years after the publication of Toffler’s book, another work of sweeping futurism has appeared with a similar theme and a similar name. Although the stock in trade of futurologists is to highlight the transformational and the unprecedented, it is remarkable how much of their output appears the same.
The chief difference is that The Coming Wave by Mustafa Suleyman focuses more narrowly on the twin revolutions of artificial intelligence and synthetic biology. But the author would surely be delighted if his book were to prove as influential as Toffler’s in prompting politicians to action.
As one of the three co-founders of DeepMind, the London-based AI research company founded in 2010, and now chief executive of the AI start-up Inflection, Suleyman has been at the forefront of the industry for more than a decade. The Coming Wave bristles with breathtaking excitement about the extraordinary possibilities that the revolutions in AI and synthetic biology could bring about.
AI, we are told, could unlock the secrets of the universe, cure diseases and stretch the bounds of imagination. Biotechnology can enable us to engineer life and transform agriculture. “Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen,” he writes.
But what is striking about Suleyman’s heavily promoted book is how the optimism of his will is overwhelmed by the pessimism of his intellect, to borrow a phrase from the Marxist philosopher Antonio Gramsci. For most of history, the challenge of technology has been to unleash its power, Suleyman writes. Now the challenge has flipped.
In the 21st century, the dilemma will be how to contain technology’s power given the capabilities of these new technologies have exploded and the costs of developing them have collapsed. “Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible,” he writes…(More)”.
Unlocking AI’s Potential for Everyone
Article by Diane Coyle: “…But while some policymakers do have deep knowledge about AI, their expertise tends to be narrow, and most other decision-makers simply do not understand the issue well enough to craft sensible policies. Owing to this relatively low knowledge base and the inevitable asymmetry of information between regulators and regulated, policy responses to specific issues are likely to remain inadequate, heavily influenced by lobbying, or highly contested.
So, what is to be done? Perhaps the best option is to pursue more of a principles-based policy. This approach has already gained momentum in the context of issues like misinformation and trolling, where many experts and advocates believe that Big Tech companies should have a general duty of care (meaning a default orientation toward caution and harm reduction).
In some countries, similar principles already apply to news broadcasters, who are obligated to pursue accuracy and maintain impartiality. Although enforcement in these domains can be challenging, the upshot is that we do already have a legal basis for eliciting less socially damaging behavior from technology providers.
When it comes to competition and market dominance, telecoms regulation offers a serviceable model with its principle of interoperability. People with competing service providers can still call each other because telecom companies are all required to adhere to common technical standards and reciprocity agreements. The same is true of ATMs: you may incur a fee, but you can still withdraw cash from a machine at any bank.
In the case of digital platforms, a lack of interoperability has generally been established by design, as a means of locking in users and creating “moats.” This is why policy discussions about improving data access and ensuring access to predictable APIs have failed to make any progress. But there is no technical reason why some interoperability could not be engineered back in. After all, Big Tech companies do not seem to have much trouble integrating the new services that they acquire when they take over competitors.
In the case of LLMs, interoperability probably could not apply at the level of the models themselves, since not even their creators understand their inner workings. However, it can and should apply to interactions between LLMs and other services, such as cloud platforms…(More)”.
City CIOs urged to lay the foundations for generative AI
Article by Sarah Wray: “The London Office of Technology and Innovation (LOTI) has produced a collection of guides to support local authorities in using generative artificial intelligence (genAI) tools such as ChatGPT, Bard, Midjourney and Dall-E.
The resources include a guide for local authority leaders and another aimed at all staff, as well as a guide designed specifically for council Chief Information Officers (CIOs), which was developed with AI software company Faculty.
Sam Nutt, Researcher and Data Ethicist at LOTI, a membership organisation for over 20 boroughs and the Greater London Authority, told Cities Today: “Generative AI won’t solve every problem for local governments, but it could be a catalyst to transform so many processes for how we work.
“On the one hand, personal assistants integrated into programmes like Word, Excel or PowerPoint could massively improve officer productivity. On another level there is a chance to reimagine services and government entirely, thinking about how gen AI models can do so many tasks with data that we couldn’t do before, and allow officers to completely change how they spend their time.
“There are both opportunities and challenges, but the key message on both is that local governments should be ambitious in using this ‘AI moment’ to reimagine and redesign our ways of working to be better at delivering services now and in the future for our residents.”
As an initial step, local governments are advised to provide training and guidelines for staff. Some have begun to implement these steps, including US cities such as Boston, Seattle and San Jose.
Nutt stressed that generative AI policies are useful but not a silver bullet for governance and that they will need to be revisited and updated regularly as technology and regulations evolve…(More)”.