Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs


Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful to spot applications of AI that pose the greatest risks to labor early on and to channel the potential of AI where society needs it the most.

In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.

Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply affect income distribution and the sustainability of democratic institutions. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal to consider is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines, for example, a “seed” Universal Basic Income that starts at a very small level and remains unchanged if workers continue to prosper but automatically rises if there is large-scale worker displacement…(More)”.
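As a purely illustrative sketch of how such an automatic trigger could be parameterized, the Python snippet below ties a hypothetical monthly payment to the labor share of income; the baseline share, seed amount, and scaling factor are assumptions for illustration, not figures from the authors' proposal.

```python
# Illustrative sketch of a "seed" UBI that scales with declines in the labor share.
# All parameters (baseline share, seed amount, dollars per point) are hypothetical.

def seed_ubi_monthly(labor_share: float,
                     baseline_share: float = 0.60,
                     seed_amount: float = 10.0,
                     dollars_per_point: float = 150.0) -> float:
    """Return a monthly UBI payment given the current labor share of income.

    - If the labor share is at or above the baseline, only the small "seed"
      payment is made.
    - For every percentage point the labor share falls below the baseline,
      the payment rises by a fixed amount.
    """
    shortfall = max(0.0, baseline_share - labor_share)  # e.g. 0.05 = 5 points
    return seed_amount + shortfall * 100 * dollars_per_point


if __name__ == "__main__":
    for share in (0.62, 0.60, 0.55, 0.45):
        print(f"labor share {share:.0%} -> monthly UBI ${seed_ubi_monthly(share):,.0f}")
```

Keying the payment to an observable aggregate such as the labor share is what makes the policy "automatic" in the authors' sense: the insurance kicks in when displacement shows up in the data, without requiring new legislation at that moment.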

Data can help decarbonize cities – let us explain


Article by Stephen Lorimer and Andrew Collinge: “The University of Birmingham, Alan Turing Institute and Centre for Net Zero are working together, using a tool developed by the Centre, called Faraday, to build a more detailed understanding of energy flows within the district and between it and the neighbouring 8,000 residents. Faraday is a generative AI model trained on one of the UK’s largest smart meter datasets. The model is helping to unlock a more granular view of energy sources and changing energy usage, providing the basis for modelling future energy consumption and local smart grid management.

The partners are investigating the role that trusted data aggregators can play if they can take raw data and desensitize it to a point where it can be shared without eroding consumer privacy or commercial advantage.

Data is central to both initiatives and all cities seeking a renewable energy transition. But there are issues to address, such as common data standards, governance and data competency frameworks (especially across the built environment supply chain)…

Building the governance, standards, and culture that deliver confidence in energy data exchange is essential to maximizing the potential of carbon reduction technologies. This framework will ultimately support efficient supply chains and coordinate market activity. There are lessons from the Open Banking initiative, which provided the framework for traditional financial institutions, fintech and regulators to deliver innovation in financial products and services with carefully shared consumer data.

In the energy domain, data sharing offers numerous advantages. It helps overcome barriers in the product supply chain, from materials to low-carbon technologies (heat pumps, smart thermostats, electric vehicle chargers, etc.). Free and Open-Source Software (FOSS) providers can use data to support installers and property owners.

Data interoperability allows third-party products and services to communicate with any end-user device through open or proprietary Internet of Things gateway platforms such as Tuya or IFTTT. A growing bank of post-installation data on the operation of buildings (such as energy efficiency and air quality) will boost confidence in the future quality of retrofits and make for easier decisions on planning approval and grid connections. Finally, data is increasingly considered key in securing the financing and private sector investment crucial to the net zero effort.

None of the above is easy. Organizational and technical complexity can slow progress but cities must be at the forefront of efforts to coordinate the energy data ecosystem and make the case for “data for decarbonization.”…(More)”.

How data-savvy cities can tackle growing ethical considerations


Bloomberg Cities Network: “Technology for collecting, combining, and analyzing data is moving quickly, putting cities in a good position to use data to innovate in how they solve problems. However, it also places a responsibility on them to do so in a manner that does not undermine public trust. 

To help local governments deal with these issues, the London Office of Technology and Innovation, or LOTI, has a set of recommendations for data ethics capabilities in local government. One of those recommendations—for cities that are mature in their work in this area—is to hire a dedicated data ethicist.

LOTI exists to support dozens of local boroughs across London in their collective efforts to tackle big challenges. As part of that mission, LOTI hired Sam Nutt to serve as a data ethicist whom local leaders can call on. The move reflected the reality that most local councils don’t have the capacity to keep their own data ethicist on staff, and it put LOTI in a position to experiment, learn, and share lessons from the approach.

Nutt’s role provides a potential framework other cities looking to hire data ethicists can build on. His position is based on job specifications for data ethicists published by the UK government. He says his work falls into three general areas. First, he helps local councils work through ethical questions surrounding individual data projects. Second, he helps them develop more high-level policies, such as the Borough of Camden’s Data Charter. And third, he provides guidance on how to engage staff, residents, and stakeholders around the implications of using technology, including research on what’s new in the field. 

As an example of the kinds of ethical issues that he consults on, Nutt cites repairs in publicly subsidized housing. Local leaders are interested in using algorithms to help them prioritize use of scarce maintenance resources. But doing so raises questions about what criteria should be used to bump one resident’s needs above another’s. 

“If you prioritize, for example, the likelihood of a resident making a complaint, you may be baking in an existing social inequality, because some communities do not feel as empowered to make complaints as others,” Nutt says. “So it’s thinking through what the ethical considerations might be in terms of choices of data and how you use it, and giving advice to prevent potential biases from creeping in.” 

Nutt acknowledges that most cities are too resource constrained to hire a staff data ethicist. What matters most, he says, is that local governments create mechanisms for ensuring that ethical considerations of their choices with data and technology are considered. “The solution will never be that everyone has to hire a data ethicist,” Nutt says. “The solution is really to build ethics into your default ways of working with data.”

Stefaan Verhulst agrees. “The question for government is: Is ethics a position? A function? Or an institutional responsibility?” says Verhulst, Co-Founder of The GovLab and Director of its Data Program. The key is “to figure out how we institutionalize this in a meaningful way so that we can always check the pulse and get rapid input with regard to the social license for doing certain kinds of things.”

As the data capabilities of local governments grow, it’s also important to empower all individuals working in government to understand ethical considerations within the work they’re doing, and to have clear guidelines and codes of conduct they can follow. LOTI’s data ethics recommendations note that hiring a data ethicist should not be an organization’s first step, in part because “it risks delegating ethics to a single individual when it should be in the domain of anyone using or managing data.”

Training staff is a big part of the equation. “It’s about making the culture of government sensitive to these issues,” Verhulst says, so “that people are aware.”…(More)”.

Innovation Can Reboot American Democracy


Blog by Suzette Brooks Masters: “A thriving multiracial pluralist democracy is an aspiration that many people share for America. Far from being inevitable, the path to such a future is uncertain.

To stretch how we think about American democracy’s future iterations and begin to imagine the contours of the new, we need to learn from what’s emergent. So I’m going to take you on a whirlwind tour of some experiments taking place here and abroad that are the bright spots illuminating possible futures ahead.

My comments are informed by a research report I wrote last year called Imagining Better Futures for American Democracy. I interviewed dozens of visionaries in a range of fields and with diverse perspectives about the future of our democracy and the role positive visioning and futures thinking could play in reinvigorating it.

As I discuss these bright spots, I want to emphasize that what is most certain now is the accelerating and destabilizing change we are experiencing. It’s critical therefore to develop systems, institutions, norms and mindsets to navigate that change boldly and responsibly, not pretend that tomorrow will continue to look like today.

Yet when paradigms shift, as they inevitably do, and as I would argue they are doing right now, that’s a messy and confusing time that can cause lots of anxiety and disorientation. During these critical periods of transition, we must set aside, or ‘hospice’, some assumptions, mindsets, practices, and institutions, while midwifing, or welcoming in, new ones.

This is difficult to do in the best of times but can be especially so when, collectively, we suffer from a lack of imagination and vision about what American democracy could and should become.

It’s not all our fault — inertia, fear, distrust, cynicism, diagnosis paralysis, polarization, exceptionalism, parochialism, and a pervasive, dystopian media environment are dragging us down. They create very strong headwinds weakening both our appetite and our ability to dream bigger and imagine better futures ahead.

However, focusing on and amplifying promising innovations can change that dysfunctional dynamic by inspiring us and providing blueprints to act upon when the time is right.

Below I discuss two main types of innovations in the political sphere: election-related structural reforms and governance reforms, including new forms of civic engagement and government decision-making…(More)”.

A Comparative Perspective on AI Regulation


Blog by Itsiq Benizri, Arianna Evers, Shannon Togawa Mercer, Ali A. Jessani: “The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch but also has adopted some AI-specific rules at the state level (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that there is a framework in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, there is likely to be a strong push by companies, entrepreneurs, and tech leaders in the near future for additional clarity on AI. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try and chart a thoughtful approach to AI…(More)”

Patients are Pooling Data to Make Diabetes Research More Representative


Blog by Tracy Kariuki: “Saira Khan-Gallo knows how overwhelming managing and living healthily with diabetes can be. As a person living with type 1 diabetes for over two decades, she understands how tracking glucose levels, blood pressure, blood cholesterol, insulin intake, and, and, and…could all feel like drowning in an infinite pool of numbers.

But that doesn’t need to be the case. This is why Tidepool, a non-profit tech organization composed of caregivers and other people living with diabetes such as Gallo, is transforming diabetes data management. Its data visualization platform enables users to make sense of the data and derive insights into their health status….

Through its Big Data Donation Project, Tidepool has been supporting the advancement of diabetes research by sharing anonymized data from people living with diabetes with researchers.

To date, more than 40,000 individuals have chosen to donate data uploaded from their diabetes devices such as blood glucose meters, insulin pumps, and continuous glucose monitors, which is then shared by Tidepool with students, academics, researchers, and industry partners, making the database larger than many clinical trials. For instance, Oregon Health and Science University has used datasets collected from Tidepool to build an algorithm that predicts hypoglycemia, which is low blood sugar, with the goal of advancing closed-loop therapy for diabetes management…(More)”.

A new way to look at data privacy


Article by Adam Zewe: “Imagine that a team of scientists has developed a machine-learning model that can predict whether a patient has cancer from lung scan images. They want to share this model with hospitals around the world so clinicians can start using it in diagnosis.

But there’s a problem. To teach their model how to predict cancer, they showed it millions of real lung scan images, a process called training. Those sensitive data, which are now encoded into the inner workings of the model, could potentially be extracted by a malicious agent. The scientists can prevent this by adding noise, or generic randomness, to the model, which makes it harder for an adversary to guess the original data. However, this perturbation reduces a model’s accuracy, so the less noise one needs to add, the better.
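As a rough, self-contained illustration of the noise-accuracy trade-off described above (not the MIT team's method or data), the sketch below perturbs the parameters of a small scikit-learn classifier with Gaussian noise of increasing scale and reports test accuracy; the dataset, model, and noise scales are arbitrary assumptions chosen for illustration.

```python
# Toy demonstration of the privacy/accuracy tension: adding Gaussian noise to a
# trained model's parameters makes the training data harder to recover, but
# accuracy tends to drop as the noise grows. Dataset and model are illustrative.
import copy

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
rng = np.random.default_rng(0)

for sigma in (0.0, 0.01, 0.1, 1.0):
    noisy = copy.deepcopy(model)
    noisy.coef_ = model.coef_ + rng.normal(0.0, sigma, model.coef_.shape)
    noisy.intercept_ = model.intercept_ + rng.normal(0.0, sigma, model.intercept_.shape)
    print(f"noise sigma={sigma:<5} test accuracy={noisy.score(X_test, y_test):.3f}")
```

Larger noise means the released parameters reveal less about any individual training example, but the model's usefulness tends to degrade; minimizing that cost is exactly the trade-off the PAC Privacy work targets.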

MIT researchers have developed a technique that enables the user to potentially add the smallest amount of noise possible, while still ensuring the sensitive data are protected.

The researchers created a new privacy metric, which they call Probably Approximately Correct (PAC) Privacy, and built a framework based on this metric that can automatically determine the minimal amount of noise that needs to be added. Moreover, this framework does not need knowledge of the inner workings of a model or its training process, which makes it easier to use for different types of models and applications.

In several cases, the researchers show that the amount of noise required to protect sensitive data from adversaries is far less with PAC Privacy than with other approaches. This could help engineers create machine-learning models that provably hide training data, while maintaining accuracy in real-world settings…

A fundamental question in data privacy is: How much sensitive data could an adversary recover from a machine-learning model with noise added to it?

Differential Privacy, one popular privacy definition, says privacy is achieved if an adversary who observes the released model cannot infer whether an arbitrary individual’s data was used in the training process. But provably preventing an adversary from distinguishing data usage often requires large amounts of noise to obscure it. This noise reduces the model’s accuracy.

PAC Privacy looks at the problem a bit differently. It characterizes how hard it would be for an adversary to reconstruct any part of randomly sampled or generated sensitive data after noise has been added, rather than only focusing on the distinguishability problem…(More)”
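To make the contrast concrete, here is a heavily simplified, hypothetical sketch in the spirit of "measure how much the output of a data-processing routine varies across random subsamples of the data, then calibrate noise to that variability." It is an invented toy (the stand-in "model" is just a mean, and the function names and scaling factor are assumptions), not the published PAC Privacy algorithm.

```python
# Hypothetical sketch: estimate output variability over random subsamples of the
# sensitive data, then add noise proportional to it. Not the PAC Privacy method.
import numpy as np


def train_mean(sample: np.ndarray) -> np.ndarray:
    """Stand-in 'model': here just the per-feature mean of the sample."""
    return sample.mean(axis=0)


def calibrate_noise(data: np.ndarray, n_trials: int = 200,
                    subsample_frac: float = 0.5, scale: float = 3.0,
                    rng=np.random.default_rng(0)) -> np.ndarray:
    """Return a per-coordinate noise standard deviation proportional to how much
    the routine's output varies across random subsamples of the data."""
    n = len(data)
    k = int(n * subsample_frac)
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(n, size=k, replace=False)
        outputs.append(train_mean(data[idx]))
    return scale * np.std(np.stack(outputs), axis=0)


rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 4))            # pretend sensitive dataset
sigma = calibrate_noise(data)
private_output = train_mean(data) + rng.normal(0.0, sigma)
print("per-coordinate noise std:", np.round(sigma, 4))
print("noised release:", np.round(private_output, 4))
```

The appeal of an output-level approach like this toy is that it treats the routine as a black box, which echoes the article's point that the framework does not need knowledge of a model's inner workings or training process.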

AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes, they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says, not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.
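A toy arithmetic sketch of the ripple effect Evans describes, with invented numbers (not from the essay): if automation halves the price of shoes, the household spending that is freed up becomes demand, and potentially employment, elsewhere in the economy.

```python
# Toy illustration of the 'Lump of Labour' argument with invented numbers:
# cheaper shoes free up household spending that becomes demand elsewhere.
shoe_price_before = 100.0   # price per pair before automation (arbitrary units)
shoe_price_after = 50.0     # price per pair after automation halves the cost
pairs_per_year = 2

spend_before = shoe_price_before * pairs_per_year
spend_after = shoe_price_after * pairs_per_year
freed_spending = spend_before - spend_after

print(f"Annual shoe spending falls from {spend_before:.0f} to {spend_after:.0f}.")
print(f"{freed_spending:.0f} per household is now available for other goods and services,")
print("i.e. new demand, and potentially new jobs, in other sectors.")
```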

Why Citizen-Driven Policy Making Is No Longer A Fringe Idea


Article by Tatjana Buklijas: “Deliberative democracy is a term that would have been met with blank stares in academic and political circles just a few decades ago.

Yet this approach, which examines ways to directly connect citizens with decision-making processes, has now become central to many calls for government reform across the world. 

This surge in interest was driven, first, by the 2008 financial crisis. After the banking crash, there was a crisis of trust in democratic institutions. In Europe and the United States, populist political movements helped turn public sentiment increasingly anti-establishment.

The second driver was the perceived inability of representative democracy to respond effectively to long-term, intergenerational challenges such as climate change and environmental decline.

Within the past few years, hundreds of citizens’ assemblies, juries and other forms of ‘minipublics’ have met to learn, deliberate and produce recommendations on topics from housing shortages and covid-19 policies, to climate action.

One of the most recent assemblies in the United Kingdom was the People’s Plan for Nature that produced a vision for the future of nature, and the actions society must take to protect and renew it. 

When it comes to climate action, experts argue that we need to move beyond showpiece national and international goal-setting, and bring decision-making closer to home. 

Scholars say that local and regional minipublics should be used much more frequently to produce climate policies, as this is where citizens experience the impact of the changing climate and act to make everyday changes.

While some policymakers are critical of deliberative democracy and see these processes as redundant alongside existing deliberative bodies, such as national parliaments, others are more supportive. They view them as a way to get a better understanding of both what the public thinks and how it might choose to implement change, after being given the chance to learn and deliberate on key questions.

Research has shown that the cognitive diversity of minipublics ensures a better quality of decision-making than that of more experienced, but also more homogeneous, traditional decision-making bodies…(More)”.

Destination? Care Blocks!


Blog by Natalia González Alarcón, Hannah Chafetz, Diana Rodríguez Franco, Uma Kalkar, Bapu Vaitla, & Stefaan G. Verhulst: ““Time poverty,” caused by an overload of unpaid care work such as washing, cleaning, cooking, and caring for care-receivers, is a structural consequence of gender inequality. In the City of Bogotá, 1.2 million women — 30% of the city’s women — carry out unpaid care work full-time. If such work were compensated, it would represent 13% of Bogotá’s GDP and 20% of the country’s GDP. Moreover, the care burden falls disproportionately on women’s shoulders and prevents them from furthering their education, achieving financial autonomy, participating in their community, and tending to their personal wellbeing.

To address the care burden and its spillover consequences on women’s economic autonomy, well-being and political participation, in October 2020, Bogotá Mayor Claudia López launched the Care Block Initiative. Care Blocks, or Manzanas del cuidado, are centralized areas for women’s economic, social, medical, educational, and personal well-being and advancement. They provide services simultaneously for caregivers and care-receivers.

As the program expands from 19 existing Care Blocks to 45 Care Blocks by the end of 2035, decision-makers face another issue: mobility is a critical and often limiting factor for women when accessing Care Blocks in Bogotá.

On May 19th, 2023, The GovLab, Data2X, and the Secretariat for Women’s Affairs in the City Government of Bogotá co-hosted a studio that aimed to scope a purposeful and gender-conscious data collaborative to address mobility-related issues affecting access to Care Blocks in Bogotá. Convening experts across the gender, mobility, policy, and data ecosystems, the studio focused on (1) prioritizing the critical questions as they relate to mobility and access to Care Blocks and (2) identifying the data sources and actors that could be tapped to set up a new data collaborative…(More)”.