Stefaan Verhulst
About: “The Food Systems Dashboard combines data from multiple sources to give users a complete view of food systems. Users can compare components of food systems across countries and regions. They can also identify and prioritize ways to sustainably improve diets and nutrition in their food systems.
Dashboards are useful tools that help users visualize and understand key information for complex systems. Users can track progress to see if policies or other interventions are working at a country or regional level.
In recent years, the public health and nutrition communities have used dashboards to track the progress of health goals and interventions, including the Sustainable Development Goals. To our knowledge, this is the first dashboard that collects country-level data across all components of the food system.
The Dashboard contains over 150 indicators that measure components, drivers, and outcomes of food systems at the country level. As new indicators and data become available, the Dashboard will be updated. Most data used for the Dashboard is open source and available to download directly from the website. Data is pooled from FAO, Euromonitor International, World Bank, and other global and regional data sources….(More)”.
Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.
We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop these into sustainable solutions. Yet we also observed that the momentum and novelty generated was at risk of getting lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.
Matthew Hutson at Science: “Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim—and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks—software architectures that loosely mimic the brain. “Fifty papers in,” he says, “it became clear that it wasn’t obvious what the state of the art even was.”
The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly—and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock’s Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. “It’s the old saw, right?” Guttag said. “If you can’t measure something, it’s hard to make it better.”
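To make concrete what a pruning algorithm actually does, here is a minimal magnitude-pruning sketch. This is illustrative only — the common baseline idea, not any of the 81 algorithms the researchers evaluated: connections whose absolute weight falls in the smallest fraction are trimmed (set to zero).

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a flat weight list.

    Illustrative baseline only, not one of the 81 surveyed algorithms.
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold is the k-th smallest absolute weight; everything at or
    # below it is "trimmed" (set to zero).
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

# A toy layer of 16 weights pruned at 50% sparsity.
w = [float(i) - 8.0 for i in range(16)]   # values -8.0 ... 7.0
pruned = magnitude_prune(w, 0.5)
```

Note that because all weights tied at the threshold are removed, the realized sparsity can exceed the requested one — exactly the kind of definitional wrinkle that makes side-by-side comparisons between pruning papers hard.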
Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. “There’s always been these waves of hype,” Musgrave says….(More)”.
Book by Hans Hansen: “Texas prosecutors are powerful: in cases where they seek capital punishment, the defendant is sentenced to death over ninety percent of the time. When management professor Hans Hansen joined Texas’s newly formed death penalty defense team to rethink their approach, they faced almost insurmountable odds. Yet while Hansen was working with the office, they won seventy of seventy-one cases by changing the narrative for death penalty defense. To date, they have succeeded in preventing well over one hundred executions—demonstrating the importance of changing the narrative to change our world.
In this book, Hansen offers readers a powerful model for creating significant organizational, social, and institutional change. He unpacks the lessons of the fight to change capital punishment in Texas—juxtaposing life-and-death decisions with the efforts to achieve a cultural shift at Uber. Hansen reveals how narratives shape our everyday lives and how we can construct new narratives to enact positive change. This narrative change model can be used to transform corporate cultures, improve public services, encourage innovation, craft a brand, or even develop your own leadership.
Narrative Change provides an unparalleled window into an innovative model of change while telling powerful stories of a fight against injustice. It reminds us that what matters most for any organization, community, or person is the story we tell about ourselves—and the most effective way to shake things up is by changing the story….(More)”.
Turing Institute: “…Policy Priority Inference builds on a behavioural computational model, taking into account the learning process of public officials, coordination problems, incomplete information, and imperfect governmental monitoring mechanisms. The approach is a unique mix of economic theory, behavioural economics, network science and agent-based modelling. The data that feeds the model for a specific country (or a sub-national unit, such as a state) includes measures of the country’s DIs and how they have moved over the years, specified government policy goals in relation to DIs, the quality of government monitoring of expenditure, and the quality of the country’s rule of law.
From these data alone – and, crucially, with no specific information on government expenditure, which is rarely made available – the model can infer the transformative resources a country has historically allocated to its SDGs, and assess the importance of SDG interlinkages between DIs. Importantly, it can also reveal where previously hidden inefficiencies lie.
How does it work? The researchers modelled the socioeconomic mechanisms of the policy-making process using agent-computing simulation. They created a simulator featuring an agent called “Government”, which makes decisions about how to allocate public expenditure, and agents called “Bureaucrats”, each of which is essentially a policy-maker linked to a single DI. If a Bureaucrat is allocated some resource, they will use a portion of it to improve their DI, with the rest lost to some degree of inefficiency (in reality, inefficiencies range from simple corruption to poor quality policies and inefficient government departments).
How much resource a Bureaucrat puts towards moving their DI depends on that agent’s experience: if becoming inefficient pays off, they’ll keep doing it. During the process, Government monitors the Bureaucrats, occasionally punishing inefficient ones, who may then improve their behaviour. In the model, a Bureaucrat’s chances of getting caught is linked to the quality of a government’s real-world monitoring of expenditure, and the extent to which they are punished is reflected in the strength of that country’s rule of law.
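The loop described above can be sketched in a few lines. This is our own illustrative reading of the mechanism — the parameter names (`monitoring`, `rule_of_law`) and the specific update rules are assumptions for exposition, not the researchers' actual specification:

```python
import random

def simulate(periods=100, n=5, monitoring=0.3, rule_of_law=0.5, seed=1):
    """Minimal sketch of the Government/Bureaucrats dynamic described above.

    Each Bureaucrat spends a share (`efficiency`) of their allocation on
    improving their development indicator (DI); the rest is lost to
    inefficiency. Government monitoring occasionally catches inefficient
    Bureaucrats, and punishment strength reflects the rule of law.
    """
    random.seed(seed)
    efficiency = [random.random() for _ in range(n)]  # productive share per Bureaucrat
    indicators = [0.0] * n                            # DI levels
    for _ in range(periods):
        budget = 1.0 / n                              # Government splits spending evenly
        for i in range(n):
            indicators[i] += budget * efficiency[i]   # productive share moves the DI
            diverted = budget * (1 - efficiency[i])   # the rest is lost to inefficiency
            if random.random() < monitoring * diverted * n:
                # Caught: punishment pushes the Bureaucrat toward efficiency
                efficiency[i] = min(1.0, efficiency[i] + rule_of_law * (1 - efficiency[i]))
            else:
                # Unpunished inefficiency pays off, so it creeps up
                efficiency[i] = max(0.0, efficiency[i] - 0.01)
    return indicators, efficiency
```

Fitting such a simulation so that its indicator trajectories match a country's historical DI movements is what lets the real model act as a proxy for that country's policy dynamics.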

When the historical movements of a country’s DIs are reproduced through the internal workings of the model, the researchers have a powerful proxy for the real-world relationships between government activity, the movement of DIs, and the effects of the interlinkages between DIs, all of which are unique to that country. “Once we can match outcomes, we can discern something that’s going on in reality. But the fact that the method is matching the dynamics of real-world development indicators is just one of multiple ways that we validate our results,” Guerrero notes. This proxy can then be used to project which policy areas should be prioritised in future to best achieve the government’s specified development goals, including predictions of likely timescales.
What’s more, in combination with techniques from evolutionary computation, the model can identify DIs that are linked to large positive spillover effects. These DIs are dubbed “accelerators”. Targeting government resources at such development accelerators fosters not only more rapid results, but also more generalised development…(More)”.
Article by Elisa Minsart and Vincent Jacquet: “Amidst wide public disillusionment with the institutions of representative democracy, political scientists, campaigners and politicians have intensified efforts to find an effective mechanism to narrow the gap between citizens and those who govern them. One of the most popular remedies in recent years – and one frequently touted as a way to break the Brexit impasse encountered by the UK political class in 2016-19 – is that of citizens’ assemblies. These deliberative forums gather diversified samples of the population, recruited through a process of random selection. Citizens who participate meet experts, deliberate on a specific public issue and make a range of recommendations for policy-making. Citizens’ assemblies are flourishing in many representative democracies – not least in the UK, with the current Climate Assembly UK and Citizens’ Assembly of Scotland. They show that citizens are able to deliberate on complex political issues and to deliver original proposals.
For several years now, some public leaders, scholars and politicians have sought to integrate these democratic innovations into more traditional political structures. Belgium recently took a step in this direction. Each of Belgium’s three regions has its own parliament, with full legislative powers: on 13 November 2019, a proposition was approved to modify how the Parliament of the Brussels Region operates. The reform mandates the establishment of joint deliberative committees, on which members of the public will serve alongside elected representatives. This will enable ordinary people to deliberate with MPs on preselected themes and to formulate recommendations. The details of the process are currently still being drafted and the first commission is expected to launch at the end of 2020. Despite the COVID-19 crisis, drafting and negotiations with other parties have not been interrupted thanks to an online platform and a videoconference facility.
This experience has been inspired by other initiatives organised in Belgium. In 2011, the G1000 initiative brought together more than 700 randomly selected citizens to debate on different topics. This grassroots experiment attracted lots of public attention. In its aftermath, the different parliaments of the country launched their own citizens’ assemblies, designed to tackle specific local issues. Some international experiences also inspired the Brussels Region, in particular the first Irish Constitutional Convention (2012–2014). This assembly was composed of both elected representatives and randomly selected citizens, and led directly to a referendum that approved the legalisation of same-sex marriage. However, the present joint committees go well beyond these initiatives. Whereas both of these predecessors were ad hoc initiatives designed to resolve particular problems, the Brussels committees will be permanent and hosted at the heart of the parliament. Both of these aspects make the new committees a major innovation and entirely different from the predecessors that helped inspire them…(More)”.
Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business‐like behaviour and positive deviance, rather than how ethical behaviour and culture may contribute to innovation.
In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australia Public Service Commission (n = 80,316). Our findings show that both ethical culture at the working group‐level and agency‐level as well as ethical leadership have significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender and across different samples. We conclude our paper with theoretical and practical implications of our research findings…(More)”.
Paper by The LSE GV314 Group: “In the United Kingdom, the influence of parliamentary select committees on policy depends substantially on the ‘seriousness’ with which they approach the task of gathering and evaluating a wide range of evidence and producing reports and recommendations based on it. However, select committees are often charged with being concerned with ‘political theatre’ and ‘grandstanding’ rather than producing evidence-based policy recommendations. This study, based on a survey of 919 ‘discretionary’ witnesses, including those submitting written and oral evidence, examines the case for arguing that there is political bias and grandstanding in the way select committees go about selecting witnesses, interrogating them and using their evidence to put reports together. While the research finds some evidence of such ‘grandstanding’, it does not appear to be strong enough to suggest that select committees’ role as crowdsourcers of evidence is compromised….(More)”.
Paper by Maciej Kuziemski and Gianluca Misuraca: “The rush to understand new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector’s predicament is a tragic double bind: its obligations to protect citizens from potential algorithmic harms are at odds with the temptation to increase its own efficiency – in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms’ intrinsic properties: they are distinct from other digital solutions long embraced by governments, and they create externalities that rule-based programming lacks.
As the pressures to deploy automated decision-making systems in the public sector become prevalent, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, by investigating the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, “optimising” employment services in Poland, and personalising the digital service experience in Finland, the paper advocates for a common framework to evaluate the potential impact of the use of AI in the public sector.
In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prevalent role in the digital society and to ensure that the potential of technology is harnessed, while negative effects are controlled and possibly avoided. This is of particular importance in light of the current COVID-19 emergency, in which AI and the underpinning regulatory framework of data ecosystems have become crucial policy issues: more and more innovations are based on large-scale data collection from digital devices and the real-time accessibility of information and services, and the resulting contact and relationships between institutions and citizens could strengthen – or undermine – trust in governance systems and democracy….(More)”.
Stefaan G. Verhulst and Andrew Zahuranec at Data & Policy blog: “There has been a rapid increase in the number of data-driven projects and tools released to contain the spread of COVID-19. Over the last three months, governments, tech companies, civic groups, and international agencies have launched hundreds of initiatives. These efforts range from simple visualizations of public health data to complex analyses of travel patterns.
When designed responsibly, data-driven initiatives could provide the public and their leaders the ability to be more effective in addressing the virus. The Atlantic and the New York Times have both published work that relies on innovative data use. These and other examples, detailed in our #Data4COVID19 repository, can fill vital gaps in our understanding and allow us to better respond to and recover from the crisis.
But data is not without risk. Collecting, processing, analyzing, and using any type of data, no matter how good the intentions of its users, can lead to harmful ends. Vulnerable groups can be excluded. Analysis can be biased. Data use can reveal sensitive information about people and locations. To address all these hazards, organizations need to be intentional in how they work throughout the data lifecycle.
Decision Provenance: Documenting decisions and decision makers across the Data Life Cycle
Unfortunately, the individuals and teams responsible for making these design decisions at each critical point of the data lifecycle are rarely identified or recognized by all those interacting with these data systems.
This lack of visibility into the origins of decisions can undermine professional accountability, and it limits actors’ ability to identify the optimal intervention points for mitigating data risks and to avoid missed uses of potentially impactful data. Tracking decision provenance is essential.
As Jatinder Singh, Jennifer Cobbe, and Chris Norval of the University of Cambridge explain, decision provenance refers to tracking and recording decisions about the collection, processing, sharing, analyzing, and use of data. It involves instituting mechanisms to force individuals to explain how and why they acted. It is about using documentation to provide transparency and oversight in the decision-making process for everyone inside and outside an organization.
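In practice, such documentation can be as simple as an append-only log of structured records. The sketch below is our own minimal illustration of the idea — the field names are assumptions, not taken from the Cambridge authors or The GovLab:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a decision-provenance log (illustrative schema)."""
    stage: str           # lifecycle stage: collection, processing, sharing, analysis, use
    decision: str        # what was decided
    decision_maker: str  # who is accountable for the decision
    rationale: str       # why it was decided
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A hypothetical entry from a #Data4COVID19-style project.
log = []
log.append(DecisionRecord(
    stage="collection",
    decision="Aggregate mobility data to the district level",
    decision_maker="Data steward, public health team",
    rationale="Reduce re-identification risk for vulnerable groups",
))
```

Because each record names the lifecycle stage, the accountable party, and the rationale, the log gives reviewers inside and outside the organization a trail to audit.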
Toward that end, The GovLab at NYU Tandon developed the Decision Provenance Mapping. We designed this tool for designated data stewards tasked with coordinating the responsible use of data across organizational priorities and departments….(More)”