We need a much more sophisticated debate about AI

Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses… The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

Advancing Technology for Democracy

The White House: “The first wave of the digital revolution promised that new technologies would support democracy and human rights. The second saw an authoritarian counterrevolution. Now, the United States and other democracies are working together to ensure that the third wave of the digital revolution leads to a technological ecosystem characterized by resilience, integrity, openness, trust and security, and that reinforces democratic principles and human rights.

Together, we are organizing and mobilizing to ensure that technologies work for, not against, democratic principles, institutions, and societies.  In so doing, we will continue to engage the private sector, including by holding technology platforms accountable when they do not take action to counter the harms they cause, and by encouraging them to live up to democratic principles and shared values…

Key deliverables announced or highlighted at the second Summit for Democracy include:

  • National Strategy to Advance Privacy-Preserving Data Sharing and Analytics. OSTP released a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, a roadmap for harnessing privacy-enhancing technologies, coupled with strong governance, to enable data sharing and analytics in a way that benefits individuals and society, while mitigating privacy risks and harms and upholding democratic principles.  
  • National Objectives for Digital Assets Research and Development. OSTP also released a set of National Objectives for Digital Assets Research and Development, which outline its priorities for the responsible research and development (R&D) of digital assets. These objectives will help developers of digital assets better reinforce democratic principles and protect consumers by default.
  • Launch of Trustworthy and Responsible AI Resource Center for Risk Management. NIST announced a new Resource Center, which is designed as a one-stop-shop website for foundational content, technical documents, and toolkits to enable responsible use of AI. Government, industry, and academic stakeholders can access resources such as a repository for AI standards, measurement methods and metrics, and data sets. The website is designed to facilitate implementation of, and international alignment with, the AI Risk Management Framework. The Framework articulates the key building blocks of trustworthy AI and offers guidance for addressing them.
  • International Grand Challenges on Democracy-Affirming Technologies. Announced at the first Summit, the United States and the United Kingdom carried out their joint Privacy Enhancing Technology Prize Challenges. IE University, in partnership with the U.S. Department of State, hosted the Tech4Democracy Global Entrepreneurship Challenge. The winners, selected from around the world, were featured at the second Summit….(More)”.

Law, AI, and Human Rights

Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will now be familiar to many people in the UK: it is colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second was labelled “a shameful chapter” in government administration in Australia: it led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies…(More)”.

China’s fake science industry: how ‘paper mills’ threaten progress

Article by Eleanor Olcott, Clive Cookson and Alan Smith at the Financial Times: “…Over the past two decades, Chinese researchers have become some of the world’s most prolific publishers of scientific papers. The Institute for Scientific Information, a US-based research analysis organisation, calculated that China produced 3.7mn papers in 2021 — 23 per cent of global output — just behind the 4.4mn total from the US.

At the same time, China has been climbing the ranks of the number of times a paper is cited by other authors, a metric used to judge output quality. Last year, China surpassed the US for the first time in the number of most cited papers, according to Japan’s National Institute of Science and Technology Policy, although that figure was flattered by multiple references to Chinese research that first sequenced the Covid-19 virus genome.

The soaring output has sparked concern in western capitals. Chinese advances in high-profile fields such as quantum technology, genomics and space science, as well as Beijing’s surprise hypersonic missile test two years ago, have amplified the view that China is marching towards its goal of achieving global hegemony in science and technology.

That concern is a part of a wider breakdown of trust in some quarters between western institutions and Chinese ones, with some universities introducing background checks on Chinese academics amid fears of intellectual property theft.

But experts say that China’s impressive output masks systemic inefficiencies and an underbelly of low-quality and fraudulent research. Academics complain about the crushing pressure to publish to gain prized positions at research universities…(More)”.

What We Gain from More Behavioral Science in the Global South

Article by Pauline Kabitsis and Lydia Trupe: “In recent years, the field has been critiqued for applying behavioral science at the margins, settling for small but statistically significant effect sizes. Critics have argued that by focusing our efforts on nudging individuals to increase their 401(k) contributions or to reduce their so-called carbon footprint, we have ignored the systemic drivers of important challenges, such as fundamental flaws in the financial system and corporate responsibility for climate change. As Michael Hallsworth points out, however, the field may not be willfully ignoring these deeper challenges, but rather investing in areas of change that are likely easier to move, measure, and secure funding for.

It’s been our experience working in the Global South that nudge-based solutions can provide short-term gains within current systems, but for lasting impact a focus beyond individual-level change is required. This is because the challenges in the Global South typically involve fundamental problems, like enabling women’s reproductive choice, combating intimate partner violence, and improving food security among the world’s most vulnerable populations.

Our work at Common Thread focuses on improving behaviors related to health, like encouraging those persistently left behind to get vaccinated, and enabling Ukrainian refugees in Poland to access health and welfare services. We use a behavioral model that considers not just the individual biases that impact people’s behaviors, but the structural, social, interpersonal, and even historical context that triggers these biases and inhibits health seeking behaviors…(More)”.

The wisdom of crowds for improved disaster resilience: a near-real-time analysis of crowdsourced social media data on the 2021 flood in Germany

Paper by Mahsa Moghadas, Alexander Fekete, Abbas Rajabifard & Theo Kötter: “Transformative disaster resilience in times of climate change underscores the importance of reflexive governance, facilitation of socio-technical advancement, co-creation of knowledge, and innovative and bottom-up approaches. However, implementing these capacity-building processes by relying on census-based datasets and nomothetic (or top-down) approaches remains challenging for many jurisdictions. Web 2.0 knowledge sharing via online social networks, by contrast, provides a unique opportunity and valuable data sources to complement existing approaches, understand dynamics within large communities of individuals, and incorporate collective intelligence into disaster resilience studies. Using Twitter data (passive crowdsourcing) and an online survey, this study draws on the wisdom of crowds and public judgment in near-real-time disaster phases when the flood disaster hit Germany in July 2021. Latent Dirichlet Allocation, an unsupervised machine learning technique for topic modeling, was applied to the corpora of the two data sources to identify topics associated with different disaster phases. In addition to semantic (textual) analysis, spatiotemporal patterns of online disaster communication were analyzed to determine the contribution patterns associated with the affected areas. Finally, the extracted topics discussed online were compiled into five themes related to disaster resilience capacities (preventive, anticipative, absorptive, adaptive, and transformative). The near-real-time collective sensing approach reflected optimized diversity and a spectrum of people’s experiences and knowledge regarding flooding disasters and highlighted communities’ sociocultural characteristics.
This bottom-up approach could be an innovative alternative to traditional participatory techniques of organizing meetings and workshops for situational analysis and timely unfolding of such events at a fraction of the cost to inform disaster resilience initiatives…(More)”.

The pandemic veneer: COVID-19 research as a mobilisation of collective intelligence by the global research community

Paper by Daniel W Hook and James R Wilsdon: “The global research community responded with speed and at scale to the emergence of COVID-19, with around 4.6% of all research outputs in 2020 related to the pandemic. That share almost doubled through 2021, to reach 8.6% of research outputs. This reflects a dramatic mobilisation of global collective intelligence in the face of a crisis. It also raises fundamental questions about the funding, organisation and operation of research. In this Perspective article, we present data that suggests that COVID-19 research reflects the characteristics of the underlying networks from which it emerged, and on which it built. The infrastructures on which COVID-19 research has relied – including highly skilled, flexible research capacity and collaborative networks – predated the pandemic, and are the product of sustained, long-term investment. As such, we argue that COVID-19 research should not be viewed as a distinct field, or one-off response to a specific crisis, but as a ‘pandemic veneer’ layered on top of longstanding interdisciplinary networks, capabilities and structures. These infrastructures of collective intelligence need to be better understood, valued and sustained as crucial elements of future pandemic or crisis response…(More)”.

Mini Data Centers heat local swimming pools for free

Springwise: “It is now well-understood that data centres consume vast amounts of energy. This is because the banks of servers in the data centres require a lot of cooling, which, in turn, uses a lot of energy. But one data centre has found a use for all the heat that it generates, a use that could also help public facilities such as swimming pools save money on their energy costs.

Deep Green, which runs data centres, has developed small edge data centres that can be installed locally and divert some of their excess heat to warm leisure centres and public swimming pools. The system, dubbed a “digital boiler”, involves immersing central processing unit (CPU) servers in special cooling tubs, which use oil to remove heat from the servers. This oil is then passed through a heat exchanger, which removes the heat and uses it to warm buildings or swimming pools.


The company says the heat donation from one of its digital boilers will cut a public swimming pool’s gas requirements by around 70 per cent, saving leisure centres thousands of pounds every year while also drastically reducing carbon emissions. Deep Green pays for the electricity it uses and donates the heat for free. This is a huge benefit, as Britain’s public swimming pools are facing massive increases in heating bills, which is causing many to close or restrict their hours…(More)”.

Leveraging alternative data to provide loans to the unbanked

Article by Keely Khoury: “Financial inclusion is integral to the achievement of seven of the 17 global SDGs, and the World Bank says in its 2021 report that between 2011 and 2021, “Great strides have been made toward financial inclusion.” However, despite a significant increase in the number of people accessing bank accounts, around 24 per cent of the global population remain unbanked.  

Particularly for minority groups such as immigrants, access to formal financial services is made exponentially more difficult by their lack of a permanent address, loss of employment, and gaps in tax records. For small business owners – many of whom provide an essential community service – a lack of formal accounting records, along with any previous time spent unbanked as individuals, contributes to a dearth of information traditionally used to evaluate risk for loans.

To tackle this issue, US startup Uplinq provides lenders with a ‘credit-assessment-as-a-service’ solution that takes into account the entire business ecosystem, and therefore billions of data points that underwriters would not examine in a traditional loan application. From supplier references and store traffic to community involvement and property improvements, Uplinq provides a holistic and accurate assessment of the “opportunities, challenges, and interests of each prospect” within “known confidence ranges.” By working with independently audited and fully regulatory-compliant data sets, Uplinq’s services are available worldwide.
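The general idea of scoring from alternative signals with a confidence range can be sketched in a few lines. Everything here is hypothetical: the signal names, weights, score scale, and the naive band that widens when fewer signals are available are invented for illustration and bear no relation to Uplinq's proprietary models.

```python
# Hypothetical sketch: combine normalized alternative-data signals (0-1)
# into a 300-850-style score with a naive confidence band. Weights and
# band logic are invented for illustration, not a real credit model.
def credit_score(signals: dict) -> tuple:
    weights = {
        "supplier_references": 0.40,
        "store_traffic": 0.35,
        "community_involvement": 0.25,
    }
    # Use whichever known signals are present
    avail = {k: v for k, v in signals.items() if k in weights}
    if not avail:
        raise ValueError("no usable signals provided")
    total_w = sum(weights[k] for k in avail)
    raw = sum(weights[k] * v for k, v in avail.items()) / total_w
    score = 300 + raw * 550
    # Fewer signals -> wider (less confident) band
    half_width = 50 * (1 + len(weights) - len(avail))
    band = (max(300.0, score - half_width), min(850.0, score + half_width))
    return score, band

# Example: two of three signals available, so the band is wider
score, band = credit_score({"supplier_references": 0.8, "store_traffic": 0.6})
print(round(score), band)
```

The point of the sketch is the shape of the output, not the numbers: a point estimate plus an explicit uncertainty range, which is what lets a lender act on incomplete alternative data rather than rejecting an applicant with no formal records outright.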

Other innovations that Springwise has spotted that are helping unbanked communities include a Spanish language-first bank, and a free digital learning platform to help underserved communities understand how to better manage their finances…(More)”.

The Synchronized Society: Time and Control From Broadcasting to the Internet

Book by Randall Patnode: “…traces the history of the synchronous broadcast experience of the twentieth century and the transition to the asynchronous media that dominate today. Broadcasting grew out of the latent desire by nineteenth-century industrialists, political thinkers, and social reformers to tame an unruly society by controlling how people used their time. The idea manifested itself in the form of the broadcast schedule, a managed flow of information and entertainment that required audiences to be in a particular place – usually the home – at a particular time and helped to create “water cooler” moments, as audiences reflected on their shared media texts. Audiences began disconnecting from the broadcast schedule at the end of the twentieth century, but promoters of social media and television services still kept audiences under control, replacing the schedule with surveillance of media use. Author Randall Patnode offers compelling new insights into the intermingled roles of broadcasting and industrial/post-industrial work and how Americans spend their time…(More)”.