Blog by Basil Mahfouz: “Scientists worldwide published over 2.6 million papers in 2022 – Almost 5 papers per minute and more than double what they published in the year 2000. Are policy makers making the most of the wealth of available scientific knowledge? In this blog, we describe how we are applying data science methods on the bibliometric database of Elsevier’s International Centre for the Study of Research (ICSR) to analyse how scholarly research is being used by policy makers. More specifically, we will discuss how we are applying natural language processing and network dynamics to identify where there is policy action and also strong evidence; where there is policy interest but a lack of evidence; and where potential policies and strategies are not making full use of available knowledge or tools…(More)”.
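To make the approach concrete, here is a minimal sketch of the kind of evidence-policy gap analysis the blog describes, using text similarity to relate policy documents to research abstracts. The documents, threshold, and labels below are illustrative assumptions, not the ICSR pipeline itself:

```python
# Illustrative sketch only: flag topics where policy attention and research evidence diverge.
# The documents, threshold, and labels are hypothetical; the work described in the blog
# uses full bibliometric data, NLP at scale, and network dynamics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_docs = {
    "flood management": "guidance on urban flood defences and drainage investment",
    "ai in schools": "strategy for deploying AI tutoring tools in classrooms",
}
research_abstracts = [
    "evaluation of sustainable urban drainage systems for flood risk reduction",
    "meta-analysis of flood defence cost-effectiveness in river basins",
    "small pilot study of chatbot tutoring outcomes",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(list(policy_docs.values()) + research_abstracts)
policy_vecs, research_vecs = tfidf[: len(policy_docs)], tfidf[len(policy_docs):]

EVIDENCE_THRESHOLD = 0.15  # hypothetical cut-off for counting a paper as "related"
for (topic, _), sims in zip(policy_docs.items(), cosine_similarity(policy_vecs, research_vecs)):
    n_related = int((sims > EVIDENCE_THRESHOLD).sum())
    label = ("policy action with supporting evidence" if n_related >= 2
             else "policy interest but a thin evidence base")
    print(f"{topic}: {n_related} related papers -> {label}")
```

A similarity threshold alone cannot capture the "network dynamics" part of the blog's method, which also considers how policy documents and research connect over time through citations.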
The Importance of Purchase to Plate Data
Blog by Andrea Carlson and Thea Palmer Zimmerman: “…Because there can be economic and social barriers to maintaining a healthy diet, USDA promotes Food and Nutrition Security so that everyone has consistent and equitable access to healthy, safe, and affordable foods that promote optimal health and well-being. A set of data tools called the Purchase to Plate Suite (PPS) supports these goals by enabling the update of the Thrifty Food Plan (TFP), which estimates how much a budget-conscious family of four needs to spend on groceries to ensure a healthy diet. The TFP market basket – consisting of the specific amounts of various food categories required by the plan – forms the basis of the maximum allotment for the Supplemental Nutrition Assistance Program (SNAP, formerly known as the “Food Stamps” program), which provided financial support towards the cost of groceries for over 41 million individuals in almost 22 million households in fiscal year 2022.
The 2018 Farm Act (Agriculture Improvement Act of 2018) requires that USDA reevaluate the TFP every five years using current food composition, consumption patterns, dietary guidance, and food prices, applying approved scientific methods. USDA’s Economic Research Service (ERS) was charged with estimating current food prices from retail food scanner data (Levin et al. 2018; Muth et al. 2016) and used the PPS for this task. The most recent TFP update was released in August 2021, and the revised cost of the market basket produced the first increase in SNAP benefits in over 40 years that was not an inflation adjustment (US Department of Agriculture 2021).
The PPS combines datasets to enhance research related to the economics of food and nutrition. There are four primary components of the suite:
- Purchase to Plate Crosswalk (PPC),
- Purchase to Plate Price Tool (PPPT),
- Purchase to Plate National Average Prices (PP-NAP) for the National Health and Nutrition Examination Survey (NHANES), and
- Purchase to Plate Ingredient Tool (PPIT)…(More)”.
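To illustrate the kind of linkage the suite enables, here is a minimal, hypothetical sketch of joining retail scanner purchases to a food-code crosswalk and computing quantity-weighted average prices. The column names, codes, and figures are invented and do not reflect the actual PPC or PP-NAP schemas:

```python
# Hypothetical sketch of the linkage idea behind the crosswalk and national average prices:
# join retail scanner purchases to NHANES food codes, then compute quantity-weighted
# average prices per food code. Codes, columns, and figures are invented.
import pandas as pd

purchases = pd.DataFrame({
    "upc": ["0001", "0002", "0003"],
    "price_paid": [2.50, 3.10, 1.80],   # dollars per package
    "grams_sold": [500, 450, 900],
})
crosswalk = pd.DataFrame({               # maps store UPCs to NHANES food codes
    "upc": ["0001", "0002", "0003"],
    "food_code": ["11111000", "11111000", "63101000"],
})

linked = purchases.merge(crosswalk, on="upc")
linked["price_per_100g"] = linked["price_paid"] / linked["grams_sold"] * 100
linked["weighted_price"] = linked["price_per_100g"] * linked["grams_sold"]

# Quantity-weighted average price per 100 g for each food code
grouped = linked.groupby("food_code")
national_avg = (grouped["weighted_price"].sum() / grouped["grams_sold"].sum()).rename("avg_price_per_100g")
print(national_avg)
```

The real suite performs this linkage across tens of thousands of products and survey food codes, which is what makes nationally representative price estimates for the TFP market basket possible.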
Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism
Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.
While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.
In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.
However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replacing them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing broader calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so that it is a tool that helps communities rather than harms them.
By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.
Designing Research For Impact
Blog by Duncan Green: “The vast majority of proposals seem to conflate impact with research dissemination (a heroic leap of faith – changing the world one seminar at a time), or to outsource impact to partners such as NGOs and thinktanks.
Of the two, the latter looks more promising, but then the funder should ask to see both evidence of genuine buy-in from the partners, and appropriate budget for the work. Bringing in a couple of NGOs as ‘bid candy’ with little money attached is unlikely to produce much impact.
There is plenty written on how to genuinely design research for impact, e.g. this chapter from a number of Oxfam colleagues on its experience, or How to Engage Policy Makers with your Research (an excellent book I reviewed recently and on the LSE Review of Books). In brief, proposals should:
- Identify the kind(s) of impacts being sought: policy change, attitudinal shifts (public or among decision makers), implementation of existing laws and policies etc.
- Provide a stakeholder mapping of the positions of key players around those impacts – supporters, waverers and opponents.
- Explain how the research plans to target some/all of these different individuals/groups, including during the research process itself (not just ‘who do we send the papers to once they’re published?’).
- Which messengers/intermediaries will be recruited to convey the research to the relevant targets (researchers themselves are not always the best placed to persuade them).
- Potential ‘critical junctures’ such as crises or changes of political leadership that could open windows of opportunity for uptake, and how the research team is set up to spot and respond to them.
- Anticipated attacks/backlash against research on sensitive issues and how the researchers plan to respond.
- Plans for review and adaptation of the influencing strategy.
I am not arguing for proposals to indicate specific impact outcomes – most systems are way too complex for that. But, an intentional plan based on asking questions on the points above would probably help researchers improve their chances of impact.
Based on the conversations I’ve been having, I also have some thoughts on what is blocking progress.
Impact is still too often seen as an annoying hoop to jump through at the funding stage (and then largely forgotten, at least until reporting at the end of the project). The incentives are largely personal/moral (‘I want to make a difference’), whereas the weight of professional incentives is around accumulating academic publications and earning the approval of peers (hence the focus on seminars).
The timeline of advocacy, with its focus on ‘dancing with the system’, jumping on unexpected windows of opportunity etc, does not mesh with the relentless but slow pressure to write and publish. An academic is likely to pay a price if they drop their current research plans to rehash prior work to take advantage of a brief policy ‘window of opportunity’.
There is still some residual snobbery, at least in some disciplines. You still hear terms like ‘media don’, which is not meant as a compliment. For instance, my friend Ha-Joon Chang is now an economics professor at SOAS, but what on earth was Cambridge University thinking not making a global public intellectual and brilliant mind into a prof, while he was there?
True, there is also some more justified concern that designing research for impact can damage the research’s objectivity/credibility – hence the desire to pull in NGOs and thinktanks as intermediaries. But, this conversation still feels messy and unresolved, at least in the UK…(More)”.
Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria
Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth observing satellites.
NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”.
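A toy sketch of the “value of information” comparison the article describes, framed as the difference in expected loss between a decision informed by satellite data and the same decision made without it. All probabilities and costs below are invented for illustration:

```python
# Toy "value of information" calculation in the spirit of VALUABLES: compare the expected
# loss of a vaccination-planning decision made with satellite-derived settlement maps
# against the same decision made without them. All numbers are invented.
def expected_loss(p_settlements_reached: float, loss_if_missed: float) -> float:
    """Expected loss from at-risk settlements that planners fail to reach."""
    return (1 - p_settlements_reached) * loss_if_missed

LOSS_IF_MISSED = 100.0  # hypothetical cost units per planning cycle

loss_without_satellite = expected_loss(p_settlements_reached=0.70, loss_if_missed=LOSS_IF_MISSED)
loss_with_satellite = expected_loss(p_settlements_reached=0.95, loss_if_missed=LOSS_IF_MISSED)

value_of_information = loss_without_satellite - loss_with_satellite
print(f"Value of satellite data in this toy decision: {value_of_information:.1f} cost units")  # 25.0
```

The point of the framing is that the data's value is not intrinsic; it is the improvement in the decision outcome that the data makes possible.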
Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs
Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful to spot applications of AI that pose the greatest risks to labor early on and to channel the potential of AI where society needs it the most.
In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.
Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply impact income distribution and democratic institutions’ sustainability. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal to consider is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines, for example, a “seed” Universal Basic Income that starts at a very small level and remains unchanged if workers continue to prosper but automatically rises if there is large-scale worker displacement…(More)”.
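As a rough illustration of how such a labor-share-indexed “seed” UBI might be parameterized, here is a hypothetical sketch; the article proposes the mechanism but not these numbers, which are invented:

```python
# Hypothetical parameterization of a "seed" UBI indexed to the labor share of income.
# The baseline share, seed amount, and scaling factor are invented; the article proposes
# the mechanism, not these figures.
def seed_ubi_monthly(labor_share: float,
                     baseline_share: float = 0.58,    # assumed reference labor share
                     seed_amount: float = 10.0,       # small payment while workers prosper
                     dollars_per_point: float = 200.0) -> float:
    """Payment stays at the seed level unless the labor share falls below the baseline."""
    shortfall_points = max(0.0, baseline_share - labor_share) * 100
    return seed_amount + dollars_per_point * shortfall_points

print(seed_ubi_monthly(labor_share=0.58))  # 10.0   -> labor share holding steady
print(seed_ubi_monthly(labor_share=0.50))  # 1610.0 -> large-scale displacement raises the payment
```

The design choice being illustrated is the automatic trigger: no new legislation is needed at the moment of displacement, because the payment is already indexed to the observed labor share.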
Data can help decarbonize cities – let us explain
Article by Stephen Lorimer and Andrew Collinge: “The University of Birmingham, Alan Turing Institute and Centre for Net Zero are working together, using a tool developed by the Centre, called Faraday, to build a more detailed understanding of energy flows within the district and between it and the neighbouring 8,000 residents. Faraday is a generative AI model trained on one of the UK’s largest smart meter datasets. The model is helping to unlock a more granular view of energy sources and changing energy usage, providing the basis for modelling future energy consumption and local smart grid management.
The partners are investigating the role that trusted data aggregators can play if they can take raw data and desensitize it to a point where it can be shared without eroding consumer privacy or commercial advantage.
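One simple way such desensitization can work is aggregation with small-group suppression; the sketch below is a hypothetical illustration, not the method used by the Birmingham partners:

```python
# Hypothetical sketch of desensitization by aggregation: roll half-hourly household
# readings up to substation level and suppress groups below a minimum size. The schema
# and threshold are illustrative, not the project's actual method.
import pandas as pd

readings = pd.DataFrame({
    "household_id": ["h1", "h2", "h3", "h4", "h5"],
    "substation": ["A", "A", "A", "B", "B"],
    "kwh_half_hour": [0.42, 0.35, 0.51, 0.60, 0.48],
})

MIN_GROUP_SIZE = 3  # hypothetical suppression threshold to protect individual households

aggregated = (
    readings.groupby("substation")["kwh_half_hour"]
            .agg(total_kwh="sum", households="count")
            .reset_index()
)
shareable = aggregated[aggregated["households"] >= MIN_GROUP_SIZE]
print(shareable)  # substation B is withheld: only 2 households sit behind it
```

In practice a trusted aggregator would layer further protections (for example, noise addition or contractual limits on re-identification), but the principle is the same: share the signal the grid needs without exposing any single household.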
Data is central to both initiatives and all cities seeking a renewable energy transition. But there are issues to address, such as common data standards, governance and data competency frameworks (especially across the built environment supply chain)…
Building the governance, standards and culture that delivers confidence in energy data exchange is essential to maximizing the potential of carbon reduction technologies. This framework will ultimately support efficient supply chains and coordinate market activity. There are lessons from the Open Banking initiative, which provided the framework for traditional financial institutions, fintech and regulators to deliver innovation in financial products and services with carefully shared consumer data.
In the energy domain, there are numerous advantageous aspects to data sharing. It helps overcome barriers in the product supply chain, from materials to low-carbon technologies (heat pumps, smart thermostats, electric vehicle chargers etc). Free and Open-Source Software (FOSS) providers can use data to support installers and property owners.
Data interoperability allows third-party products and services to communicate with any end-user device through open or proprietary Internet of Things gateway platforms such as Tuya or IFTTT. A growing bank of post-installation data on the operation of buildings (such as energy efficiency and air quality) will boost confidence in the future quality of retrofits and make for easier decisions on planning approval and grid connections. Finally, data is increasingly considered key in securing the financing and private sector investment crucial to the net zero effort.
None of the above is easy. Organizational and technical complexity can slow progress but cities must be at the forefront of efforts to coordinate the energy data ecosystem and make the case for “data for decarbonization.”…(More)”.
How data-savvy cities can tackle growing ethical considerations
Bloomberg Cities Network: “Technology for collecting, combining, and analyzing data is moving quickly, putting cities in a good position to use data to innovate in how they solve problems. However, it also places a responsibility on them to do so in a manner that does not undermine public trust.
To help local governments deal with these issues, the London Office of Technology and Innovation, or LOTI, has a set of recommendations for data ethics capabilities in local government. One of those recommendations—for cities that are mature in their work in this area—is to hire a dedicated data ethicist.
LOTI exists to support dozens of local boroughs across London in their collective efforts to tackle big challenges. As part of that mission, LOTI hired Sam Nutt to serve as a data ethicist that local leaders can call on. The move reflected the reality that most local councils don’t have the capacity to have their own data ethicist on staff and it put LOTI in a position to experiment, learn, and share out lessons learned from the approach.
Nutt’s role provides a potential framework other cities looking to hire data ethicists can build on. His position is based on job specifications for data ethicists published by the UK government. He says his work falls into three general areas. First, he helps local councils work through ethical questions surrounding individual data projects. Second, he helps them develop more high-level policies, such as the Borough of Camden’s Data Charter. And third, he provides guidance on how to engage staff, residents, and stakeholders around the implications of using technology, including research on what’s new in the field.
As an example of the kinds of ethical issues that he consults on, Nutt cites repairs in publicly subsidized housing. Local leaders are interested in using algorithms to help them prioritize use of scarce maintenance resources. But doing so raises questions about what criteria should be used to bump one resident’s needs above another’s.
“If you prioritize, for example, the likelihood of a resident making a complaint, you may be baking in an existing social inequality, because some communities do not feel as empowered to make complaints as others,” Nutt says. “So it’s thinking through what the ethical considerations might be in terms of choices of data and how you use it, and giving advice to prevent potential biases from creeping in.”
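A toy example of the concern Nutt describes: if a prioritization score weights likelihood of complaint, households with identical repair needs in less vocal communities are ranked lower. The features and weights below are invented for illustration:

```python
# Toy illustration of the bias Nutt warns about: scoring repairs partly on likelihood
# of complaint ranks an identical need from a less vocal community lower. Values invented.
repairs = [
    {"id": "flat_1", "severity": 0.9, "complaint_likelihood": 0.8},
    {"id": "flat_2", "severity": 0.9, "complaint_likelihood": 0.2},  # same need, quieter community
]

def score_with_complaints(repair):
    return 0.6 * repair["severity"] + 0.4 * repair["complaint_likelihood"]

def score_on_need_only(repair):
    return repair["severity"]

print(max(repairs, key=score_with_complaints)["id"])  # flat_1 jumps the queue
print([score_on_need_only(r) for r in repairs])       # identical need scores: 0.9 and 0.9
```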
Nutt acknowledges that most cities are too resource constrained to hire a staff data ethicist. What matters most, he says, is that local governments create mechanisms for ensuring that ethical considerations of their choices with data and technology are considered. “The solution will never be that everyone has to hire a data ethicist,” Nutt says. “The solution is really to build ethics into your default ways of working with data.”
Stefaan Verhulst agrees. “The question for government is: Is ethics a position? A function? Or an institutional responsibility?” says Verhulst, Co-Founder of The GovLab and Director of its Data Program. The key is “to figure out how we institutionalize this in a meaningful way so that we can always check the pulse and get rapid input with regard to the social license for doing certain kinds of things.”
As the data capabilities of local governments grow, it’s also important to empower all individuals working in government to understand ethical considerations within the work they’re doing, and to have clear guidelines and codes of conduct they can follow. LOTI’s data ethics recommendations note that hiring a data ethicist should not be an organization’s first step, in part because “it risks delegating ethics to a single individual when it should be in the domain of anyone using or managing data.”
Training staff is a big part of the equation. “It’s about making the culture of government sensitive to these issues,” Verhulst says, so “that people are aware.”…(More)”.
Innovation Can Reboot American Democracy
Blog by Suzette Brooks Masters: “A thriving multiracial pluralist democracy is an aspiration that many people share for America. Far from being inevitable, the path to such a future is uncertain.
To stretch how we think about American democracy’s future iterations and begin to imagine the contours of the new, we need to learn from what’s emergent. So I’m going to take you on a whirlwind tour of some experiments taking place here and abroad that are the bright spots illuminating possible futures ahead.
My comments are informed by a research report I wrote last year called Imagining Better Futures for American Democracy. I interviewed dozens of visionaries in a range of fields and with diverse perspectives about the future of our democracy and the role positive visioning and futures thinking could play in reinvigorating it.
As I discuss these bright spots, I want to emphasize that what is most certain now is the accelerating and destabilizing change we are experiencing. It’s critical therefore to develop systems, institutions, norms and mindsets to navigate that change boldly and responsibly, not pretend that tomorrow will continue to look like today.
Yet when paradigms shift, as they inevitably do and as I would argue they are doing right now, it is a messy and confusing time that can cause a lot of anxiety and disorientation. During these critical periods of transition, we must set aside, or ‘hospice’, some assumptions, mindsets, practices, and institutions, while midwifing, or welcoming in, new ones.
This is difficult to do in the best of times but can be especially so when, collectively, we suffer from a lack of imagination and vision about what American democracy could and should become.
It’s not all our fault — inertia, fear, distrust, cynicism, diagnosis paralysis, polarization, exceptionalism, parochialism, and a pervasive, dystopian media environment are dragging us down. They create very strong headwinds weakening both our appetite and our ability to dream bigger and imagine better futures ahead.
However, focusing on and amplifying promising innovations can change that dysfunctional dynamic by inspiring us and providing blueprints to act upon when the time is right.
Below I discuss two main types of innovations in the political sphere: election-related structural reforms and governance reforms, including new forms of civic engagement and government decision-making…(More)”.
A Comparative Perspective on AI Regulation
Blog by Itsiq Benizri, Arianna Evers, Shannon Togawa Mercer, Ali A. Jessani: “The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch but also has adopted some AI-specific rules at the state level (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that there is a framework in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, there is likely to be a strong push by companies, entrepreneurs, and tech leaders in the near future for additional clarity on AI. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try and chart a thoughtful approach to AI…(More)”