How citywide data strategies can connect the dots, drive results

Blog by Bloomberg Cities Network: “Data is more central than ever to improving service delivery, managing performance, and identifying opportunities that better serve residents. That’s why a growing number of cities are adding a new tool to their arsenal—the citywide data strategy—to provide teams with a holistic view of data efforts and then lay out a roadmap for scaling successful approaches throughout city hall.

These comprehensive strategies are increasingly “critical to help mayors reach their visions,” according to Amy Edwards Holmes, executive director of the Bloomberg Center for Government Excellence at Johns Hopkins University, which is helping dozens of cities across the Americas up their data games as part of the Bloomberg Philanthropies City Data Alliance (CDA).

Bloomberg Cities spoke with experts in the field and leaders in pioneering cities to learn more about the importance of citywide data strategies and how they can help:

  • Turn “pockets of promise” into citywide strengths;
  • Build upon and consolidate other citywide strategic efforts; 
  • Improve performance management and service delivery;
  • Align staff data capabilities with city needs;
  • Drive lasting cultural change through leadership commitment…(More)”.

Evidence-based policymaking in the legislatures

Blog by Ville Aula: “Evidence-based policymaking is a popular approach to policy that has received widespread public attention during the COVID-19 pandemic, as well as in the fight against climate change. It argues that policy choices based on rigorous, preferably scientific evidence should be given priority over choices based on other types of justification. However, delegating policymaking solely to researchers goes against the idea that policies are determined democratically.

In my recent article published in Policy & Politics, Evidence-based policymaking in the legislatures, we explored the tension between politics and evidence in national legislatures. While evidence-based policymaking has been extensively studied within governments, the legislative arena has received much less attention. The focus of the study was on understanding how legislators, legislative committees, and political parties together shape the use of evidence. We also wanted to explore how the interviewees understand the timeliness and relevance of evidence, because lack of time is a key challenge within legislatures. The study is based on 39 interviews with legislators, party employees, and civil servants in Eduskunta, the national Parliament of Finland.

Our findings show that, in Finland, political parties play a key role in collecting, processing, and brokering evidence within legislatures. Finnish political parties maintain detailed policy programmes that guide their work in the legislature. The programmes are often based on extensive consultations with expert networks of the party and evidence collection from key stakeholders. Political parties are not ready to review these programmes every time new evidence is offered to them. This reluctance can give the appearance that parties do not want to follow evidence. Nevertheless, reluctance is often necessary for political parties to maintain stable policy platforms while navigating uncertainty amidst competing sources of evidence. Party positions can be based on extensive evidence and expertise even if some other sources of evidence contradict them.

Partisan expert networks and policy experts employed by political parties in particular appear to be crucial in formulating the evidence-base of policy programmes. The findings suggest that these groups should be a new target audience for evidence brokering. Yet political parties, their employees, and their networks have rarely been considered in research on evidence-based policymaking.

Turning to the question of timeliness we found, as expected, that use of evidence in the Parliament of Finland is driven by short-term reactiveness. However, in our study, we also found that short-term reactiveness and the notion of timeliness can refer to time windows ranging from months to weeks and, sometimes, merely days. The common recommendation by policy scholars to boost uptake of evidence by making it timely and relevant is therefore far from simple…(More)”.

AI could choke on its own exhaust as it fills the web

Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

  • That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

  • AI turbocharges the ability to create mountains of new content while undermining the ability to check that material for reliability, and it recycles the biases and errors in the data used to train it.
  • There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.

The danger to AI itself is newer and stranger. A raft of recent research papers has introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.

  • “Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
  • Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting.
  • “Model Autophagy Disorder,” or MAD, is what one set of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
  • “Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.
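The tail-loss mechanism described above can be illustrated with a minimal simulation (hypothetical numbers, not drawn from any of the cited papers): start from a Zipf-like word distribution, then repeatedly re-estimate it from a finite batch of its own samples, as if each model generation were trained only on the previous generation's synthetic output. Rare "tail" words that happen not to be sampled vanish permanently, so the vocabulary shrinks and the distribution's entropy falls.

```python
import math
import random

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist, n_samples, rng):
    """Re-estimate the distribution from its own output: the next
    'generation' is trained only on synthetic samples of the current one."""
    words = list(dist)
    weights = [dist[w] for w in words]
    counts = {}
    for w in rng.choices(words, weights=weights, k=n_samples):
        counts[w] = counts.get(w, 0) + 1
    total = sum(counts.values())
    # Words never sampled are gone for good: the tail is discarded.
    return {w: c / total for w, c in counts.items()}

rng = random.Random(0)
vocab_size = 1000
# Zipf-like starting distribution: a long tail of rare "words".
raw = [1.0 / rank for rank in range(1, vocab_size + 1)]
z = sum(raw)
dist = {f"w{i}": p / z for i, p in enumerate(raw)}
start_vocab, start_entropy = len(dist), entropy(dist)

for gen in range(10):
    dist = next_generation(dist, n_samples=2000, rng=rng)

print(f"vocabulary: {start_vocab} -> {len(dist)} words")
print(f"entropy:    {start_entropy:.2f} -> {entropy(dist):.2f} bits")
```

Each pass concentrates probability mass on the most common words, which is the same narrowing the researchers describe: outputs drift toward the most probable choices while the interesting tail disappears.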

Toward Bridging the Data Divide

Blog by Randeep Sudan, Craig Hammer, and Yaroslav Eferin: “Developing countries face a data conundrum. Despite more data being available worldwide than ever before, low- and middle-income countries often lack adequate access to valuable data and struggle to fully use the data they have.

This seemingly paradoxical situation represents a data divide. The terms “digital divide” and “data divide” are often used interchangeably but differ. The digital divide is the gap between those with access to digital technologies and those without access. On the other hand, the data divide is the gap between those who have access to high-quality data and those who do not. The data divide can negatively skew development across countries and therefore is a serious issue that needs to be addressed…

The effects of the data divide are alarming, with low- and middle-income countries getting left behind. McKinsey estimates that 75% of the value that could be created through Generative AI (such as ChatGPT) would be in four areas of economic activity: customer operations, marketing and sales, software engineering, and research and development. They further estimate that Generative AI could add between $2.6 trillion and $4.4 trillion in value in these four areas.

PwC estimates that approximately 70% of all economic value generated by AI will likely accrue to just two countries: the USA and China. These two countries account for nearly two-thirds of the world’s hyperscale data centers, high rates of 5G adoption, the highest number of AI researchers, and the most funding for AI startups. This situation creates serious concerns for growing global disparities in accessing benefits from data collection and processing, and the related generation of insights and opportunities. These disparities will only increase over time without deliberate efforts to counteract this imbalance…(More)”

When should states be creative, innovative or entrepreneurial – and when should they not?

Blog by Geoff Mulgan: “…So what about governments being entrepreneurial as opposed to creative and innovative? Here things get even trickier. The classic commentary on the subject was written by the great Jane Jacobs (in her book ‘Systems of Survival’). She pointed out the differences between what she called the ‘guardian syndrome’ and the ‘trader syndrome’. The first is common in governments, the second in business. She argued that all societies have to find a balance between these very different views of the world. The first is concerned with looking after things and protection, originally of land, and can be found in governments, ecological movements as well as aristocracies. The second is concerned with exchange and profit, and is the world of commerce and trade.

These each see the world in very different ways. But in practice they complement each other – indeed their complementarity is what helps societies to function.

In her view, however, fusions of the two tended to be malign pathologies, for example when businesses became like governments, running large areas of territory, or when governments start thinking like traders. Donald Trump was a classic example, seeing the government machine much as an entrepreneur would see his own business. Silvio Berlusconi was another – a remarkable proportion of his initiatives were essentially designed to promote his businesses, or protect him from prosecution.

Jane Jacobs’ points become very obvious in some industries, like the contemporary digital industries that have become de facto utilities on which we depend every day. It remains far from clear that companies like Meta or Google appreciate that they risk becoming pathological fusions of business and government, without the mindsets appropriate to their new-found power.

The pathologies are also very visible in many parts of the world where the state runs a lot of industry, often with the military playing a leading role. Examples include Pakistan, Myanmar, China and Russia. In these cases public servants really have become entrepreneurs. In some cases – like Huawei – great businesses have been grown. But most of the time such fusions of government and entrepreneurialism tend towards corruption, and predatory extraction of value, because when the state’s monopoly of coercion connects to the power to make money, abuses are inevitable.

There may be occasional examples where states should be entrepreneurial at least in mindset – spinning off a function or using some of the ethos of a start-up, for example to create a new digital service. But in such cases very tight rules are vital to avoid abuse, so that if, for example, a part of the state is spun out it doesn’t do so with advantages or easy money or legally guaranteed monopolies or inflated salaries. Much depends on whether you use the word entrepreneurial in a precise sense (the first definition that comes up on Google is: ‘characterized by the taking of financial risks in the hope of profit’) or as a much looser synonym for being innovative or problem-solving…(More)”.

Using Data Science for Improving the Use of Scholarly Research in Public Policy

Blog by Basil Mahfouz: “Scientists worldwide published over 2.6 million papers in 2022 – almost 5 papers per minute and more than double what they published in the year 2000. Are policy makers making the most of the wealth of available scientific knowledge? In this blog, we describe how we are applying data science methods on the bibliometric database of Elsevier’s International Centre for the Study of Research (ICSR) to analyse how scholarly research is being used by policy makers. More specifically, we will discuss how we are applying natural language processing and network dynamics to identify where there is policy action and also strong evidence; where there is policy interest but a lack of evidence; and where potential policies and strategies are not making full use of available knowledge or tools…(More)”.
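The three-way mapping the blog describes can be sketched in miniature. The example below is hypothetical (toy topics, toy texts, and a crude token-overlap score standing in for the real NLP pipeline on the ICSR database), but it shows the shape of the analysis: for each topic, check whether there is policy attention, whether there is published evidence, and how well the two overlap.

```python
import re

def tokens(text):
    """Lowercased word set; a crude stand-in for real NLP preprocessing."""
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a, b):
    """Token-overlap similarity between two word sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical corpora: policy documents and paper abstracts, by topic.
policy_docs = {
    "air quality": "new regulation to reduce urban air pollution from traffic",
    "microplastics": "consultation announced on microplastics in drinking water",
}
papers = {
    "air quality": ["traffic emissions drive urban air pollution exposure",
                    "low emission zones reduce pollution and hospital visits"],
    "gene drives": ["gene drives could suppress disease vector populations"],
}

# Classify each topic into the blog's three categories.
for topic in sorted(set(policy_docs) | set(papers)):
    policy = topic in policy_docs
    evidence = papers.get(topic, [])
    # Relevance score: best token overlap between policy text and any paper.
    score = max((jaccard(tokens(policy_docs[topic]), tokens(p))
                 for p in evidence), default=0.0) if policy else 0.0
    if policy and evidence:
        label = f"policy action with evidence (overlap {score:.2f})"
    elif policy:
        label = "policy interest but little published evidence"
    else:
        label = "evidence not yet used in policy"
    print(f"{topic}: {label}")
```

In the real study, the similarity step would be a proper NLP model and the topics would emerge from the citation network rather than being fixed in advance; the quadrant logic, however, is the same.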

The Importance of Purchase to Plate Data

Blog by Andrea Carlson and Thea Palmer Zimmerman: “…Because there can be economic and social barriers to maintaining a healthy diet, USDA promotes Food and Nutrition Security so that everyone has consistent and equitable access to healthy, safe, and affordable foods that promote optimal health and well-being. A set of data tools called the Purchase to Plate Suite (PPS) supports these goals by enabling the update of the Thrifty Food Plan (TFP), which estimates how much a budget-conscious family of four needs to spend on groceries to ensure a healthy diet. The TFP market basket – consisting of the specific amounts of various food categories required by the plan – forms the basis of the maximum allotment for the Supplemental Nutrition Assistance Program (SNAP, formerly known as the “Food Stamps” program), which provided financial support towards the cost of groceries for over 41 million individuals in almost 22 million households in fiscal year 2022.

The 2018 Farm Act (Agriculture Improvement Act of 2018) requires that USDA reevaluate the TFP every five years using current food composition, consumption patterns, dietary guidance, and food prices, and using approved scientific methods. USDA’s Economic Research Service (ERS) was charged with estimating the current food prices using retail food scanner data (Levin et al. 2018; Muth et al. 2016) and utilized the PPS for this task. The most recent TFP update was released in August 2021 and the revised cost of the market basket was the first non-inflation adjustment increase in benefits for SNAP in over 40 years (US Department of Agriculture 2021).

The PPS combines datasets to enhance research related to the economics of food and nutrition. There are four primary components of the suite:

  • Purchase to Plate Crosswalk (PPC),
  • Purchase to Plate Price Tool (PPPT),
  • Purchase to Plate National Average Prices (PP-NAP) for the National Health and Nutrition Examination Survey (NHANES), and
  • Purchase to Plate Ingredient Tool (PPIT)…(More)”

Integrating AI into Urban Planning Workflows: Democracy Over Authoritarianism

Essay by Tyler Hinkle: “As AI tools become integrated into urban planning, a dual narrative of promise and potential pitfalls emerges. These tools offer unprecedented efficiency, creativity, and data analysis, yet if not guided by ethical considerations, they could inadvertently lead to exclusion, manipulation, and surveillance.

While AI, exemplified by tools like NovelAI, holds the potential to aggregate and synthesize public input, there’s a risk of suppressing genuine human voices in favor of algorithmic consensus. This could create a future urban landscape devoid of cultural depth and diversity, echoing historical authoritarianism.

In a potential dystopian scenario, an AI-based planning software gains access to all smart city devices, amassing data to reshape communities without consulting their residents. This data-driven transformation, devoid of human input, risks eroding the essence of community identity, autonomy, and shared decision-making. Imagine AI altering traffic flow, adjusting public transportation routes, or even redesigning public spaces based solely on data patterns, disregarding the unique needs and desires of the people who call that community home.

However, an optimistic approach guided by ethical principles can pave the way for a brighter future. Integrating AI with democratic ideals, akin to Fishkin’s deliberative democracy, can amplify citizens’ voices rather than replacing them. AI-driven deliberation can become a powerful vehicle for community engagement, transforming Arnstein’s ladder of citizen participation into a true instrument of empowerment. In addition, echoing the calls for AI alignment to be addressed holistically, alignment issues will arise as AI becomes integrated into urban planning. We must take the time to ensure AI is properly aligned so that it is a tool that helps communities rather than harms them.

By treading carefully and embedding ethical considerations at the core, we can unleash AI’s potential to construct communities that are efficient, diverse, and resilient, while ensuring that democratic values remain paramount…(More)”.

Designing Research For Impact

Blog by Duncan Green: “The vast majority of proposals seem to conflate impact with research dissemination (a heroic leap of faith – changing the world one seminar at a time), or to outsource impact to partners such as NGOs and thinktanks.

Of the two, the latter looks more promising, but then the funder should ask to see both evidence of genuine buy-in from the partners, and appropriate budget for the work. Bringing in a couple of NGOs as ‘bid candy’ with little money attached is unlikely to produce much impact.

There is plenty written on how to genuinely design research for impact, e.g. this chapter from a number of Oxfam colleagues on their experience, or How to Engage Policy Makers with your Research (an excellent book I recently reviewed, as did the LSE Review of Books). In brief, proposals should:

  • Identify the kind(s) of impacts being sought: policy change, attitudinal shifts (public or among decision makers), implementation of existing laws and policies, etc.
  • Provide a stakeholder mapping of the positions of key players around those impacts – supporters, waverers and opponents.
  • Explain how the research plans to target some/all of these different individuals/groups, including during the research process itself (not just ‘who do we send the papers to once they’re published?’).
  • Name the messengers/intermediaries who will be recruited to convey the research to the relevant targets (researchers themselves are not always best placed to persuade them).
  • Identify potential ‘critical junctures’, such as crises or changes of political leadership, that could open windows of opportunity for uptake, and explain how the research team is set up to spot and respond to them.
  • Anticipate attacks/backlash against research on sensitive issues and set out how the researchers plan to respond.
  • Include plans for review and adaptation of the influencing strategy.

I am not arguing for proposals to indicate specific impact outcomes – most systems are way too complex for that. But, an intentional plan based on asking questions on the points above would probably help researchers improve their chances of impact.

Based on the conversations I’ve been having, I also have some thoughts on what is blocking progress.

Impact is still too often seen as an annoying hoop to jump through at the funding stage (and then largely forgotten, at least until reporting at the end of the project). The incentives are largely personal/moral (‘I want to make a difference’), whereas the weight of professional incentives are around accumulating academic publications and earning the approval of peers (hence the focus on seminars).

The timeline of advocacy, with its focus on ‘dancing with the system’, jumping on unexpected windows of opportunity etc, does not mesh with the relentless but slow pressure to write and publish. An academic is likely to pay a price if they drop their current research plans to rehash prior work to take advantage of a brief policy ‘window of opportunity’.

There is still some residual snobbery, at least in some disciplines. You still hear terms like ‘media don’, which is not meant as a compliment. For instance, my friend Ha-Joon Chang is now an economics professor at SOAS, but what on earth was Cambridge University thinking not making a global public intellectual and brilliant mind into a prof, while he was there?

True, there is also some more justified concern that designing research for impact can damage the research’s objectivity/credibility – hence the desire to pull in NGOs and thinktanks as intermediaries. But, this conversation still feels messy and unresolved, at least in the UK…(More)”.

Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria

Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth observing satellites.

NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”
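The “value of information” comparison described above can be reduced to a textbook calculation. The sketch below uses entirely hypothetical numbers (the decision, payoffs, and prior are invented for illustration, not taken from the Nigeria study): compare the best decision a planner can make under a prior belief alone with the decisions possible once (perfect) satellite information reveals the true state, and the difference in expected payoff is the value of the information.

```python
# Hypothetical decision: does an unmapped settlement cluster exist?
p_exists = 0.3  # planner's prior belief, without satellite data

# Payoffs (e.g. vaccination cases averted, net of costs) per action and state.
payoff = {
    ("send_team", True): 100,   # team reaches a real settlement
    ("send_team", False): -20,  # wasted trip
    ("skip", True): 0,
    ("skip", False): 0,
}
actions = ("send_team", "skip")

def expected(action):
    """Expected payoff of an action under the prior belief."""
    return (p_exists * payoff[(action, True)]
            + (1 - p_exists) * payoff[(action, False)])

# Without satellite data: commit to the single best action in expectation.
best_without = max(expected(a) for a in actions)

# With satellite data: choose the best action in each state, then average.
best_with = (p_exists * max(payoff[(a, True)] for a in actions)
             + (1 - p_exists) * max(payoff[(a, False)] for a in actions))

value_of_information = best_with - best_without
print(f"expected payoff without data: {best_without:.1f}")
print(f"expected payoff with data:    {best_with:.1f}")
print(f"value of information:         {value_of_information:.1f}")
```

In the VALUABLES framing, the hard empirical work is estimating those payoffs and probabilities for a real decision; once they are in hand, the with-versus-without comparison is exactly this arithmetic.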