AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes, and they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.

Open data for AI: what now?


UNESCO Report: “…A vast amount of data about the world, on the environment, industry, agriculture and health, is now being collected through automatic processes, including sensors. Such data may be readily available, but are potentially too big for humans to handle or analyse effectively; nonetheless, they could serve as input to AI systems. AI and data science techniques have demonstrated great capacity to analyse large amounts of data, as currently illustrated by generative AI systems, and to help uncover formerly unknown hidden patterns to deliver actionable information in real time. However, many contemporary AI systems run on proprietary datasets; data that fulfil the criteria of open data would benefit AI systems further and mitigate potential hazards of those systems, such as a lack of fairness, accountability, and transparency.

The aim of these guidelines is to apprise Member States of the value of open data, and to outline how data are curated and opened. Member States are encouraged not only to support openness of high-quality data, but also to embrace the use of AI technologies and facilitate capacity building, training and education in this regard, including inclusive open data as well as AI literacy…(More)”.

COVID-19 digital contact tracing worked — heed the lessons for future pandemics


Article by Marcel Salathé: “During the first year of the COVID-19 pandemic, around 50 countries deployed digital contact tracing. When someone tested positive for SARS-CoV-2, anyone who had been in close proximity to that person (usually for 15 minutes or more) would be notified as long as both individuals had installed the contact-tracing app on their devices.

Digital contact tracing received much media attention, and much criticism, in that first year. Many worried that the technology provided a way for governments and technology companies to have even more control over people’s lives than they already do. Others dismissed the apps as a failure, after public-health authorities hit problems in deploying them.

Three years on, the data tell a different story.

The United Kingdom successfully integrated a digital contact-tracing app with other public-health programmes and interventions, and collected data to assess the app’s effectiveness. Several analyses now show that, even with the challenges of introducing a new technology during an emergency, and despite relatively low uptake, the app saved thousands of lives. It has also become clearer that many of the problems encountered elsewhere were not to do with the technology itself, but with integrating a twenty-first-century technology into what are largely twentieth-century public-health infrastructures…(More)”.

Why Citizen-Driven Policy Making Is No Longer A Fringe Idea


Article by Tatjana Buklijas: “Deliberative democracy is a term that would have been met with blank stares in academic and political circles just a few decades ago.

Yet this approach, which examines ways to directly connect citizens with decision-making processes, has now become central to many calls for government reform across the world. 

This surge in interest was driven first by the 2008 financial crisis. After the banking crash, there was a crisis of trust in democratic institutions. In Europe and the United States, populist political movements helped drive public feeling to become increasingly anti-establishment. 

The second was the perceived inability of representative democracy to effectively respond to long-term, intergenerational challenges, such as climate change and environmental decline. 

Within the past few years, hundreds of citizens’ assemblies, juries and other forms of ‘minipublics’ have met to learn, deliberate and produce recommendations on topics ranging from housing shortages and COVID-19 policies to climate action.

One of the most recent assemblies in the United Kingdom was the People’s Plan for Nature that produced a vision for the future of nature, and the actions society must take to protect and renew it. 

When it comes to climate action, experts argue that we need to move beyond showpiece national and international goal-setting, and bring decision-making closer to home. 

Scholars say that local and regional minipublics should be used much more frequently to produce climate policies, as this is where citizens experience the impact of the changing climate and act to make everyday changes.

While some policymakers are critical of deliberative democracy and see these processes as redundant alongside existing deliberative bodies, such as national parliaments, others are more supportive. They view them as a way to better understand both what the public thinks and how it might choose to implement change, after being given the chance to learn and deliberate on key questions.

Research has shown that the cognitive diversity of minipublics ensures a better quality of decision-making than that of the more experienced, but also more homogeneous, traditional decision-making bodies…(More)”.

Non-traditional data sources in obesity research: a systematic review of their use in the study of obesogenic environments


Paper by Julia Mariel Wirtz Baker, Sonia Alejandra Pou, Camila Niclis, Eugenia Haluszka & Laura Rosana Aballay: “The field of obesity epidemiology has made extensive use of traditional data sources, such as health surveys and reports from official national statistical systems, whose variety of data can be at times limited to explore a wider range of determinants relevant to obesity. Over time, other data sources began to be incorporated into obesity research, such as geospatial data (web mapping platforms, satellite imagery, and other databases embedded in Geographic Information Systems), social network data (such as Twitter, Facebook, Instagram, or other social networks), digital device data and others. The data revolution, facilitated by the massive use of digital devices with hundreds of millions of users and the emergence of the “Internet of Things” (IoT), has generated huge volumes of data from everywhere: customers, social networks and sensors, in addition to all the traditional sources mentioned above. In the research area, it offers fruitful opportunities, contributing in ways that traditionally sourced research data could not.

An international expert panel in obesity and big data pointed out some key factors in the definition of Big Data, stating that “it is always digital, has a large sample size, and a large volume or variety or velocity of variables that require additional computing power, as well as specialist skills in computer programming, database management and data science analytics”. Our interpretation of non-traditional data sources is an approximation to this definition, assuming that they are sources not traditionally used in obesity epidemiology and environmental studies, which can include digital devices, social media and geospatial data within a GIS, the latter mainly based on complex indexes that require advanced data analysis techniques and expertise.

Beyond its still-debated limitations, Big Data can be seen as a great opportunity to improve the study of obesogenic environments, since it has been heralded as a powerful resource that can provide new knowledge about human behaviour and social phenomena. Besides, it can contribute to the formulation and evaluation of policies and the development of interventions for obesity prevention. However, in this field of research, the suitability of these novel data sources is still a subject of considerable discussion, and their use has not been investigated from the obesogenic environment approach…(More)”.

How to Stay Smart in a Smart World


Book by Gerd Gigerenzer: “From dating apps and self-driving cars to facial recognition and the justice system, the increasing presence of AI has been widely championed – but there are limitations and risks too. In this book Gigerenzer shows how humans are often the greatest source of uncertainty and when people are involved, unwavering trust in complex algorithms can become a recipe for disaster. We need, now more than ever, to arm ourselves with knowledge that will help us make better decisions in a digital age.

Filled with practical examples and cutting-edge research, How to Stay Smart in a Smart World examines the growing role of AI at all levels of daily life with refreshing clarity. This book is a life raft in a sea of information and an urgent invitation to actively shape the world in which we want to live…(More)”.

Harvard fraud claims fuel doubts over science of behaviour


Article by Andrew Hill and Andrew Jack: “Claims that fraudulent data was used in papers co-authored by a star Harvard Business School ethics expert have fuelled a growing controversy about the validity of behavioural science, whose findings are routinely taught in business schools and applied within companies.

While the professor has not yet responded to details of the claims, the episode is the latest blow to a field that has risen to prominence over the past 15 years and whose findings in areas such as decision-making and team-building are widely put into practice.

Companies from Coca-Cola to JPMorgan Chase have executives dedicated to behavioural science, while governments around the world have also embraced its findings. But well-known principles in the field such as “nudge theory” are now being called into question.

The Harvard episode “is topic number one in business school circles”, said André Spicer, executive dean of London’s Bayes Business School. “There has been a large-scale replication crisis in psychology — lots of the results can’t be reproduced and some of the underlying data has been found to be faked.”…

That cast a shadow over the use of behavioural science by government-linked “nudge units” such as the UK’s Behavioural Insights Team, which was spun off into a company in 2014, and the US Office of Evaluation Sciences.

However, David Halpern, now president of BIT, countered that publication bias is not unique to the field. He said he and his peers use far larger-scale, more representative and robust testing than academic research.

Halpern argued that behavioural research can help to effectively deploy government budgets. “The dirty secret of most governments and organisations is that they spend a lot of money, but have no idea if they are spending in ways that make things better.”

Academics point out that testing others’ results is part of normal scientific practice. The difference with behavioural science is that initial results that have not yet been replicated are often quickly recycled into sensational headlines, popular self-help books and business practice.

“Scientists should be better at pointing out when non-scientists over-exaggerate these things and extrapolate, but they are worried that if they do this they will ruin the positive trend [towards their field],” said Pelle Guldborg Hansen, chief executive of iNudgeyou, a centre for applied behavioural research.

Many consultancies have sprung up to cater to corporate demand for behavioural insights. “What I found was that almost anyone who had read Nudge had a licence to set up as a behavioural scientist,” said Nuala Walsh, who formed the Global Association of Applied Behavioural Scientists in 2020 to try to set some standards…(More)”.

Using data to address equity challenges in local government


Report by the Mastercard Center for Inclusive Growth (CFIG): “…This report describes the Data for Equity cohort learning journey, case studies of how participating cities engaged with and learned from the program, and key takeaways about the potential for data to inform effective and innovative equitable development efforts. Alongside data tools, participants explored the value of qualitative data, the critical link between racial equity and economic inclusion, and how federal funds can advance ongoing equity initiatives. 

Cohort members gained and shared insights throughout their learning journey, including:

  • Resources that provided guidance on how to target funding were helpful in ensuring the viability of cities’ equity and economic development initiatives.
  • Tools and resources that helped practitioners move from diagnosing challenges to identifying solutions were especially valuable.
  • Peer-to-peer learning is an essential resource for leaders and staff working in equity roles, which are often structured differently than other city offices.
  • More data tools that explicitly measure racial equity indicators are needed…(More)”.

Opening industry data: The private sector’s role in addressing societal challenges


Paper by Jennifer Hansen and Yiu-Shing Pang: “This commentary explores the potential of private companies to advance scientific progress and solve social challenges through opening and sharing their data. Open data can accelerate scientific discoveries, foster collaboration, and promote long-term business success. However, concerns regarding data privacy and security can hinder data sharing. Companies have options to mitigate the challenges through developing data governance mechanisms, collaborating with stakeholders, communicating the benefits, and creating incentives for data sharing, among others. Ultimately, open data has immense potential to drive positive social impact and business value, and companies can explore solutions for their specific circumstances and tailor them to their specific needs…(More)”.