Designing Research For Impact


Blog by Duncan Green: “The vast majority of proposals seem to conflate impact with research dissemination (a heroic leap of faith – changing the world one seminar at a time), or to outsource impact to partners such as NGOs and thinktanks.

Of the two, the latter looks more promising, but then the funder should ask to see both evidence of genuine buy-in from the partners, and appropriate budget for the work. Bringing in a couple of NGOs as ‘bid candy’ with little money attached is unlikely to produce much impact.

There is plenty written on how to genuinely design research for impact, e.g. this chapter from a number of Oxfam colleagues on its experience, or How to Engage Policy Makers with your Research (an excellent book I recently reviewed on the LSE Review of Books). In brief, proposals should:

  • Identify the kind(s) of impacts being sought: policy change, attitudinal shifts (public or among decision makers), implementation of existing laws and policies etc.
  • Provide a stakeholder mapping of the positions of key players around those impacts – supporters, waverers and opponents.
  • Explain how the research plans to target some/all of these different individuals/groups, including during the research process itself (not just ‘who do we send the papers to once they’re published?’).
  • Identify which messengers/intermediaries will be recruited to convey the research to the relevant targets (researchers themselves are not always the best placed to persuade them).
  • Flag potential ‘critical junctures’ such as crises or changes of political leadership that could open windows of opportunity for uptake, and explain how the research team is set up to spot and respond to them.
  • Anticipate attacks/backlash against research on sensitive issues and explain how the researchers plan to respond.
  • Set out plans for review and adaptation of the influencing strategy.

I am not arguing for proposals to indicate specific impact outcomes – most systems are way too complex for that. But, an intentional plan based on asking questions on the points above would probably help researchers improve their chances of impact.

Based on the conversations I’ve been having, I also have some thoughts on what is blocking progress.

Impact is still too often seen as an annoying hoop to jump through at the funding stage (and then largely forgotten, at least until reporting at the end of the project). The incentives are largely personal/moral (‘I want to make a difference’), whereas the weight of professional incentives lies in accumulating academic publications and earning the approval of peers (hence the focus on seminars).

The timeline of advocacy, with its focus on ‘dancing with the system’, jumping on unexpected windows of opportunity etc., does not mesh with the relentless but slow pressure to write and publish. An academic is likely to pay a price if they drop their current research plans to rehash prior work to take advantage of a brief policy ‘window of opportunity’.

There is still some residual snobbery, at least in some disciplines. You still hear terms like ‘media don’, which is not meant as a compliment. For instance, my friend Ha-Joon Chang is now an economics professor at SOAS, but what on earth was Cambridge University thinking not making a global public intellectual and brilliant mind into a prof, while he was there?

True, there is also some more justified concern that designing research for impact can damage the research’s objectivity/credibility – hence the desire to pull in NGOs and thinktanks as intermediaries. But, this conversation still feels messy and unresolved, at least in the UK…(More)”.

Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria


Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth observing satellites.

NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”

Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs


Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful to spot applications of AI that pose the greatest risks to labor early on and to channel the potential of AI where society needs it the most.

In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.

Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Despite our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes as well as the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply impact income distribution and the sustainability of democratic institutions. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal to consider is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines: for example, a “seed” Universal Basic Income that starts at a very small level and remains unchanged if workers continue to prosper but automatically rises if there is large-scale worker displacement…(More)”.
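The “seed” UBI proposal amounts to a payment schedule indexed to labor’s share of national income. A minimal sketch of that schedule, with entirely hypothetical numbers for the baseline share, seed amount, and scaling factor (the essay itself specifies none of these):

```python
def seed_ubi(labor_share, baseline_share=0.60, seed_amount=10.0, scale=5000.0):
    # Pay only the token "seed" amount while labor's share of income
    # holds at or above its baseline; ramp the payment up in proportion
    # to any shortfall below that baseline. All parameters hypothetical.
    shortfall = max(0.0, baseline_share - labor_share)
    return seed_amount + scale * shortfall
```

The design choice worth noting is that the payment is indexed to an observable aggregate (labor’s income share) rather than to individual job loss, so it rises automatically with large-scale displacement without requiring a fresh political decision.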

Data can help decarbonize cities – let us explain


Article by Stephen Lorimer and Andrew Collinge: “The University of Birmingham, Alan Turing Institute and Centre for Net Zero are working together, using a tool developed by the Centre, called Faraday, to model a more detailed understanding of energy flows within the district and between it and the neighbouring 8,000 residents. Faraday is a generative AI model trained on one of the UK’s largest smart meter datasets. The model is helping to unlock a more granular view of energy sources and changing energy usage, providing the basis for modelling future energy consumption and local smart grid management.

The partners are investigating the role that trusted data aggregators can play if they can take raw data and desensitize it to a point where it can be shared without eroding consumer privacy or commercial advantage.
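One way to picture that “trusted aggregator” step is as a minimum-group-size rule applied before any release. A minimal sketch, with hypothetical field names and a k-anonymity-style threshold; this is illustrative only, not Faraday or the partners’ actual pipeline:

```python
from collections import defaultdict

def aggregate_readings(readings, min_group=5):
    # Pool per-household readings by (area, half-hour slot) and release
    # only group totals; suppress any group smaller than `min_group` so
    # no single consumer's usage can be read off the output.
    groups = defaultdict(list)
    for r in readings:
        groups[(r["area"], r["slot"])].append(r["kwh"])
    return {
        key: {"households": len(vals), "total_kwh": sum(vals)}
        for key, vals in groups.items()
        if len(vals) >= min_group
    }
```

A real aggregator would add further protections (noise, differencing checks across releases), but the suppression threshold captures the basic bargain: coarse enough to protect consumers, granular enough to be useful for grid planning.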

Data is central to both initiatives and all cities seeking a renewable energy transition. But there are issues to address, such as common data standards, governance and data competency frameworks (especially across the built environment supply chain)…

Building the governance, standards and culture that delivers confidence in energy data exchange is essential to maximizing the potential of carbon reduction technologies. This framework will ultimately support efficient supply chains and coordinate market activity. There are lessons from the Open Banking initiative, which provided the framework for traditional financial institutions, fintech and regulators to deliver innovation in financial products and services with carefully shared consumer data.

In the energy domain, there are numerous advantageous aspects to data sharing. It helps overcome barriers in the product supply chain, from materials to low-carbon technologies (heat pumps, smart thermostats, electric vehicle chargers etc). Free and Open-Source Software (FOSS) providers can use data to support installers and property owners.

Data interoperability allows third-party products and services to communicate with any end-user device through open or proprietary Internet of Things gateway platforms such as Tuya or IFTTT. A growing bank of post-installation data on the operation of buildings (such as energy efficiency and air quality) will boost confidence in the future quality of retrofits and make for easier decisions on planning approval and grid connections. Finally, data is increasingly considered key in securing the financing and private sector investment crucial to the net zero effort.

None of the above is easy. Organizational and technical complexity can slow progress but cities must be at the forefront of efforts to coordinate the energy data ecosystem and make the case for “data for decarbonization.”…(More)”.

How data-savvy cities can tackle growing ethical considerations


Bloomberg Cities Network: “Technology for collecting, combining, and analyzing data is moving quickly, putting cities in a good position to use data to innovate in how they solve problems. However, it also places a responsibility on them to do so in a manner that does not undermine public trust. 

To help local governments deal with these issues, the London Office of Technology and Innovation, or LOTI, has a set of recommendations for data ethics capabilities in local government. One of those recommendations—for cities that are mature in their work in this area—is to hire a dedicated data ethicist.

LOTI exists to support dozens of local boroughs across London in their collective efforts to tackle big challenges. As part of that mission, LOTI hired Sam Nutt to serve as a data ethicist whom local leaders can call on. The move reflected the reality that most local councils don’t have the capacity to keep their own data ethicist on staff, and it put LOTI in a position to experiment, learn, and share lessons from the approach.

Nutt’s role provides a potential framework other cities looking to hire data ethicists can build on. His position is based on job specifications for data ethicists published by the UK government. He says his work falls into three general areas. First, he helps local councils work through ethical questions surrounding individual data projects. Second, he helps them develop more high-level policies, such as the Borough of Camden’s Data Charter. And third, he provides guidance on how to engage staff, residents, and stakeholders around the implications of using technology, including research on what’s new in the field. 

As an example of the kinds of ethical issues that he consults on, Nutt cites repairs in publicly subsidized housing. Local leaders are interested in using algorithms to help them prioritize use of scarce maintenance resources. But doing so raises questions about what criteria should be used to bump one resident’s needs above another’s. 

“If you prioritize, for example, the likelihood of a resident making a complaint, you may be baking in an existing social inequality, because some communities do not feel as empowered to make complaints as others,” Nutt says. “So it’s thinking through what the ethical considerations might be in terms of choices of data and how you use it, and giving advice to prevent potential biases from creeping in.” 
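Nutt’s point can be made concrete with a toy scoring rule. The weights and field names below are invented for illustration, not any council’s actual algorithm:

```python
def priority_score(case, use_complaint_history=False):
    # Toy triage rule: weight objective need (hazard severity and
    # household vulnerability), optionally boosted by how often the
    # household has complained before. All weights are hypothetical.
    score = 2.0 * case["hazard_severity"] + 1.0 * case["vulnerability"]
    if use_complaint_history:
        score += 1.5 * case["past_complaints"]
    return score

# Two households with identical need but different complaint habits:
quiet = {"hazard_severity": 3, "vulnerability": 2, "past_complaints": 0}
vocal = {"hazard_severity": 3, "vulnerability": 2, "past_complaints": 4}
```

With `use_complaint_history=True`, the vocal household outranks the quiet one despite identical need, which is exactly the existing inequality being baked into the algorithm.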

Nutt acknowledges that most cities are too resource constrained to hire a staff data ethicist. What matters most, he says, is that local governments create mechanisms for ensuring that ethical considerations of their choices with data and technology are considered. “The solution will never be that everyone has to hire a data ethicist,” Nutt says. “The solution is really to build ethics into your default ways of working with data.”

Stefaan Verhulst agrees. “The question for government is: Is ethics a position? A function? Or an institutional responsibility?” says Verhulst, Co-Founder of The GovLab and Director of its Data Program. The key is “to figure out how we institutionalize this in a meaningful way so that we can always check the pulse and get rapid input with regard to the social license for doing certain kinds of things.”

As the data capabilities of local governments grow, it’s also important to empower all individuals working in government to understand ethical considerations within the work they’re doing, and to have clear guidelines and codes of conduct they can follow. LOTI’s data ethics recommendations note that hiring a data ethicist should not be an organization’s first step, in part because “it risks delegating ethics to a single individual when it should be in the domain of anyone using or managing data.”

Training staff is a big part of the equation. “It’s about making the culture of government sensitive to these issues,” Verhulst says, so “that people are aware.”…(More)”.

Innovation Can Reboot American Democracy


Blog by Suzette Brooks Masters: “A thriving multiracial pluralist democracy is an aspiration that many people share for America. Far from being inevitable, the path to such a future is uncertain.

To stretch how we think about American democracy’s future iterations and begin to imagine the contours of the new, we need to learn from what’s emergent. So I’m going to take you on a whirlwind tour of some experiments taking place here and abroad that are the bright spots illuminating possible futures ahead.

My comments are informed by a research report I wrote last year called Imagining Better Futures for American Democracy. I interviewed dozens of visionaries in a range of fields and with diverse perspectives about the future of our democracy and the role positive visioning and futures thinking could play in reinvigorating it.

As I discuss these bright spots, I want to emphasize that what is most certain now is the accelerating and destabilizing change we are experiencing. It’s critical therefore to develop systems, institutions, norms and mindsets to navigate that change boldly and responsibly, not pretend that tomorrow will continue to look like today.

Yet when paradigms shift, as they inevitably do (and, I would argue, are doing right now), that’s a messy and confusing time that can cause lots of anxiety and disorientation. During these critical periods of transition, we must set aside, or ‘hospice’, some assumptions, mindsets, practices, and institutions, while midwifing, or welcoming in, new ones.

This is difficult to do in the best of times but can be especially so when, collectively, we suffer from a lack of imagination and vision about what American democracy could and should become.

It’s not all our fault — inertia, fear, distrust, cynicism, diagnosis paralysis, polarization, exceptionalism, parochialism, and a pervasive, dystopian media environment are dragging us down. They create very strong headwinds weakening both our appetite and our ability to dream bigger and imagine better futures ahead.

However, focusing on and amplifying promising innovations can change that dysfunctional dynamic by inspiring us and providing blueprints to act upon when the time is right.

Below I discuss two main types of innovations in the political sphere: election-related structural reforms and governance reforms, including new forms of civic engagement and government decision-making…(More)”.

A Comparative Perspective on AI Regulation


Blog by Itsiq Benizri, Arianna Evers, Shannon Togawa Mercer, Ali A. Jessani: “The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch but also has adopted some AI-specific rules at the state level (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that there is a framework in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, there is likely to be a strong push by companies, entrepreneurs, and tech leaders in the near future for additional clarity on AI. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try to chart a thoughtful approach to AI…(More)”

Patients are Pooling Data to Make Diabetes Research More Representative


Blog by Tracy Kariuki: “Saira Khan-Gallo knows how overwhelming managing and living healthily with diabetes can be. As a person living with type 1 diabetes for over two decades, she understands how tracking glucose levels, blood pressure, blood cholesterol, insulin intake, and, and, and…could all feel like drowning in an infinite pool of numbers.

But that doesn’t need to be the case. This is why Tidepool, a non-profit tech organization composed of caregivers and other people living with diabetes such as Gallo, is transforming diabetes data management. Its data visualization platform enables users to make sense of the data and derive insights into their health status…

Through its Big Data Donation Project, Tidepool has been supporting the advancement of diabetes research by sharing anonymized data from people living with diabetes with researchers.

To date, more than 40,000 individuals have chosen to donate data uploaded from their diabetes devices like blood glucose meters, insulin pumps and continuous glucose monitors, which Tidepool then shares with students, academics, researchers, and industry partners, making the database larger than many clinical trials. For instance, Oregon Health and Science University has used datasets collected from Tidepool to build an algorithm that predicts hypoglycemia, which is low blood sugar, with the goal of advancing closed-loop therapy for diabetes management…(More)”.
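As a rough illustration of what a hypoglycemia predictor does, one can extrapolate recent continuous-glucose-monitor readings forward in time. This naive linear trend projection is a sketch of the general idea, not OHSU’s actual algorithm:

```python
def hypo_risk(glucose_mgdl, minutes_ahead=30, sample_interval_min=5.0):
    # Project the most recent glucose slope forward and flag a
    # predicted reading below 70 mg/dL, a common hypoglycemia
    # threshold. CGM samples are typically ~5 minutes apart.
    if len(glucose_mgdl) < 2:
        return False
    slope = (glucose_mgdl[-1] - glucose_mgdl[-2]) / sample_interval_min
    projected = glucose_mgdl[-1] + slope * minutes_ahead
    return projected < 70.0
```

Real predictors use far richer models (insulin on board, meals, activity), which is precisely why large pooled datasets like Tidepool’s matter: they capture variation no single clinical trial could.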

A new way to look at data privacy


Article by Adam Zewe: “Imagine that a team of scientists has developed a machine-learning model that can predict whether a patient has cancer from lung scan images. They want to share this model with hospitals around the world so clinicians can start using it in diagnosis.

But there’s a problem. To teach their model how to predict cancer, they showed it millions of real lung scan images, a process called training. Those sensitive data, which are now encoded into the inner workings of the model, could potentially be extracted by a malicious agent. The scientists can prevent this by adding noise, or more generic randomness, to the model, which makes it harder for an adversary to guess the original data. However, perturbation reduces a model’s accuracy, so the less noise one can add, the better.

MIT researchers have developed a technique that enables the user to add the smallest possible amount of noise while still ensuring the sensitive data are protected.

The researchers created a new privacy metric, which they call Probably Approximately Correct (PAC) Privacy, and built a framework based on this metric that can automatically determine the minimal amount of noise that needs to be added. Moreover, this framework does not need knowledge of the inner workings of a model or its training process, which makes it easier to use for different types of models and applications.

In several cases, the researchers show that the amount of noise required to protect sensitive data from adversaries is far less with PAC Privacy than with other approaches. This could help engineers create machine-learning models that provably hide training data, while maintaining accuracy in real-world settings…
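The core idea, as described here, can be caricatured in a few lines: measure how much the model’s output wobbles when the training data are re-sampled, then add noise on that scale. This is only a conceptual sketch for a scalar-valued training function, not the paper’s actual PAC Privacy estimator or its formal guarantees:

```python
import random
import statistics

def noisy_release(train_fn, data, n_trials=50, seed=0):
    # Estimate how unstable train_fn's (scalar) output is by re-running
    # it on random half-samples of the data, then add Gaussian noise
    # scaled to that spread before releasing the real output. Note that
    # train_fn is treated as a black box: no knowledge of the model's
    # internals or training process is needed.
    rng = random.Random(seed)
    outputs = [
        train_fn(rng.sample(data, len(data) // 2)) for _ in range(n_trials)
    ]
    spread = statistics.pstdev(outputs)
    return train_fn(data) + rng.gauss(0.0, spread), spread
```

The black-box property is what the article highlights: because the noise is calibrated from observed output variability rather than from worst-case analysis of the algorithm, the same procedure applies to many kinds of models, and a stable `train_fn` earns a correspondingly small noise budget.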

A fundamental question in data privacy is: How much sensitive data could an adversary recover from a machine-learning model with noise added to it?

Differential Privacy, one popular privacy definition, says privacy is achieved if an adversary who observes the released model cannot infer whether an arbitrary individual’s data was used in the training process. But provably preventing an adversary from distinguishing data usage often requires large amounts of noise to obscure it. This noise reduces the model’s accuracy.

PAC Privacy looks at the problem a bit differently. It characterizes how hard it would be for an adversary to reconstruct any part of randomly sampled or generated sensitive data after noise has been added, rather than only focusing on the distinguishability problem…(More)”

AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes, and they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says, not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.