
Stefaan Verhulst

Andrew Jack at the Financial Times: “When Mozambique was hit by two cyclones in rapid succession last year — causing death and destruction from a natural disaster on a scale not seen in Africa for a generation — government officials added an unusual recruit to their relief efforts. Apart from the usual humanitarian and health agencies, the National Health Institute also turned to Zenysis, a Silicon Valley start-up.

As the UN and non-governmental organisations helped to rebuild lives and tackle outbreaks of disease including cholera, Zenysis began gathering and analysing large volumes of disparate data. “When we arrived, there were 400 new cases of cholera a day and they were doubling every 24 hours,” says Jonathan Stambolis, the company’s chief executive. “None of the data was shared [between agencies]. Our software harmonised and integrated fragmented sources to produce a coherent picture of the outbreak, the health system’s ability to respond and the resources available.

“Three and a half weeks later, they were able to get infections down to zero in most affected provinces,” he adds. The government attributed that achievement to the availability of high-quality data to brief the public and international partners.

“They co-ordinated the response in a way that drove infections down,” he says. Zenysis formed part of a “virtual control room”, integrating information to help decision makers understand what was happening in the worst-hit areas, identify sources of water contamination and where to prioritise cholera vaccinations.

It supported an “mAlert system”, which integrated health surveillance data into a single platform for analysis. The output was daily reports distilled from data issued by health facilities and accommodation centres in affected areas, disease monitoring and surveillance from laboratory testing….(More)”.

How data analysis helped Mozambique stem a cholera outbreak

Article by Abdullah Almaatouq and Alex “Sandy” Pentland: “The idea of collective intelligence is not new. Research has long shown that in a wide range of settings, groups of people working together outperform individuals toiling alone. But how do drastic shifts in circumstances, such as people working mostly at a distance during the COVID-19 pandemic, affect the quality of collective decision-making? After all, public health decisions can be a matter of life and death, and business decisions in crisis periods can have lasting effects on the economy.

During a crisis, it’s crucial to manage the flow of ideas deliberately and strategically so that communication pathways and decision-making are optimized. Our recently published research shows that optimal communication networks can emerge from within an organization when decision makers interact dynamically and receive frequent performance feedback. The results have practical implications for effective decision-making in times of dramatic change….

Our experiments illustrate the importance of dynamically configuring network structures and enabling decision makers to obtain useful, recurring feedback. But how do you apply such findings to real-world decision-making, whether remote or face to face, when constrained by a worldwide pandemic? In such an environment, connections among individuals, teams, and networks of teams must be continually reorganized in response to shifting circumstances and challenges. No single network structure is optimal for every decision, a fact that is clear in a variety of organizational contexts.

Public sector. Consider the teams of advisers working with governments in creating guidelines to flatten the curve and help restart national economies. The teams are frequently reconfigured to leverage pertinent expertise and integrate data from many domains. They get timely feedback on how decisions affect daily realities (rates of infection, hospitalization, death) — and then adjust recommended public health protocols accordingly. Some team members move between levels, perhaps being part of a state-level team for a while, then federal, and then back to state. This flexibility ensures that people making big-picture decisions have input from those closer to the front lines.

Witness how Germany considered putting a brake on some of its reopening measures in response to a substantial, unexpected uptick in COVID-19 infections. Such time-sensitive decisions are not made effectively without a dynamic exchange of ideas and data. Decision makers must quickly adapt to facts reported by subject-area experts and regional officials who have the relevant information and analyses at a given moment….(More)”.

Dynamic Networks Improve Remote Decision-Making

Book by Khaled El Emam, Lucy Mosquera, and Richard Hoptroff: “Building and testing machine learning models requires access to large and diverse data. But where can you find usable datasets without running into privacy issues? This practical book introduces techniques for generating synthetic data—fake data generated from real data—so you can perform secondary analysis to do research, understand customer behaviors, develop new products, or generate new revenue.

Data scientists will learn how synthetic data generation provides a way to make such data broadly available for secondary purposes while addressing many privacy concerns. Analysts will learn the principles and steps for generating synthetic data from real datasets. And business leaders will see how synthetic data can help accelerate time to a product or solution.

This book describes:

  • Steps for generating synthetic data using multivariate normal distributions
  • Methods for distribution fitting covering different goodness-of-fit metrics
  • How to replicate the simple structure of original data
  • An approach for modeling data structure to consider complex relationships
  • Multiple approaches and metrics you can use to assess data utility
  • How analysis performed on real data can be replicated with synthetic data
  • Privacy implications of synthetic data and methods to assess identity disclosure…(More)”.
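The first bullet above can be illustrated in a few lines of NumPy: fit a multivariate normal to a real dataset (estimate its mean vector and covariance matrix), then sample synthetic records from the fitted distribution. This is a generic sketch, not code from the book, and the "real" dataset here is invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" dataset: 1,000 records, 3 numeric columns.
real = rng.normal(loc=[50.0, 100.0, 10.0],
                  scale=[5.0, 20.0, 2.0],
                  size=(1000, 3))

# Fit a multivariate normal: estimate mean vector and covariance matrix.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic records from the fitted distribution. These preserve the
# marginal means and pairwise correlations, but contain no real record.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# A crude utility check: synthetic column means should track the real ones.
print(np.abs(synthetic.mean(axis=0) - mean))
```

Real data is rarely this well-behaved, which is why the later bullets (distribution fitting, goodness-of-fit metrics, modeling complex structure) matter in practice.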
Practical Synthetic Data Generation: Balancing Privacy and the Broad Availability of Data

About: “The Food Systems Dashboard combines data from multiple sources to give users a complete view of food systems. Users can compare components of food systems across countries and regions. They can also identify and prioritize ways to sustainably improve diets and nutrition in their food systems.

Dashboards are useful tools that help users visualize and understand key information for complex systems. Users can track progress to see if policies or other interventions are working at a country or regional level.

In recent years, the public health and nutrition communities have used dashboards to track the progress of health goals and interventions, including the Sustainable Development Goals. To our knowledge, this is the first dashboard that collects country-level data across all components of the food system.

The Dashboard contains over 150 indicators that measure components, drivers, and outcomes of food systems at the country level. As new indicators and data become available, the Dashboard will be updated. Most data used for the Dashboard is open source and available to download directly from the website. Data is pooled from FAO, Euromonitor International, World Bank, and other global and regional data sources….(More)”.

The Food Systems Dashboard

Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.

We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop these into sustainable solutions. Yet we also observed that the momentum and novelty generated was at risk of getting lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.

Saving Our Oceans: Scaling the Impact of Robust Action Through Crowdsourcing

Matthew Hutson at Science: “Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim—and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks—software architectures that loosely mimic the brain. “Fifty papers in,” he says, “it became clear that it wasn’t obvious what the state of the art even was.”

The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly—and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock’s Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. “It’s the old saw, right?” Guttag said. “If you can’t measure something, it’s hard to make it better.”
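To make the comparison concrete: the simplest baseline the surveyed pruning papers measure against is magnitude pruning, which zeroes out the smallest-magnitude fraction of a weight matrix. The sketch below is a generic illustration of that baseline, not any one paper's method:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # a toy weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
print(f"fraction zeroed: {np.mean(pruned == 0):.2f}")
```

The point of the MIT study is precisely that a proposed algorithm only demonstrates progress if it beats a properly tuned baseline like this one under identical conditions.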

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. “There’s always been these waves of hype,” Musgrave says….(More)”.

Eye-catching advances in some AI fields are not real

Book by Hans Hansen: “Texas prosecutors are powerful: in cases where they seek capital punishment, the defendant is sentenced to death over ninety percent of the time. When management professor Hans Hansen joined Texas’s newly formed death penalty defense team to rethink their approach, they faced almost insurmountable odds. Yet while Hansen was working with the office, they won seventy of seventy-one cases by changing the narrative for death penalty defense. To date, they have succeeded in preventing well over one hundred executions—demonstrating the importance of changing the narrative to change our world.

In this book, Hansen offers readers a powerful model for creating significant organizational, social, and institutional change. He unpacks the lessons of the fight to change capital punishment in Texas—juxtaposing life-and-death decisions with the efforts to achieve a cultural shift at Uber. Hansen reveals how narratives shape our everyday lives and how we can construct new narratives to enact positive change. This narrative change model can be used to transform corporate cultures, improve public services, encourage innovation, craft a brand, or even develop your own leadership.

Narrative Change provides an unparalleled window into an innovative model of change while telling powerful stories of a fight against injustice. It reminds us that what matters most for any organization, community, or person is the story we tell about ourselves—and the most effective way to shake things up is by changing the story….(More)”.

Narrative Change: How Changing the Story Can Transform Society, Business, and Ourselves

Turing Institute: “…Policy Priority Inference builds on a behavioural computational model, taking into account the learning process of public officials, coordination problems, incomplete information, and imperfect governmental monitoring mechanisms. The approach is a unique mix of economic theory, behavioural economics, network science and agent-based modelling. The data that feeds the model for a specific country (or a sub-national unit, such as a state) includes measures of the country’s DIs (development indicators) and how they have moved over the years, specified government policy goals in relation to DIs, the quality of government monitoring of expenditure, and the quality of the country’s rule of law.

From these data alone – and, crucially, with no specific information on government expenditure, which is rarely made available – the model can infer the transformative resources a country has historically allocated toward its SDGs, and assess the importance of SDG interlinkages between DIs. Importantly, it can also reveal where previously hidden inefficiencies lie.

How does it work? The researchers modelled the socioeconomic mechanisms of the policy-making process using agent-computing simulation. They created a simulator featuring an agent called “Government”, which makes decisions about how to allocate public expenditure, and agents called “Bureaucrats”, each of which is essentially a policy-maker linked to a single DI. If a Bureaucrat is allocated some resource, they will use a portion of it to improve their DI, with the rest lost to some degree of inefficiency (in reality, inefficiencies range from simple corruption to poor quality policies and inefficient government departments).

How much resource a Bureaucrat puts towards moving their DI depends on that agent’s experience: if becoming inefficient pays off, they’ll keep doing it. During the process, Government monitors the Bureaucrats, occasionally punishing inefficient ones, who may then improve their behaviour. In the model, a Bureaucrat’s chances of getting caught is linked to the quality of a government’s real-world monitoring of expenditure, and the extent to which they are punished is reflected in the strength of that country’s rule of law.
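The mechanism described above can be sketched as a toy agent-based simulation. This is a deliberately minimal illustration of the Government/Bureaucrat dynamic, with invented parameter values; the actual Policy Priority Inference model is far richer:

```python
import random

random.seed(1)

N = 10            # number of development indicators, one Bureaucrat each
MONITORING = 0.5  # probability an inefficient agent is caught (invented value)
RULE_OF_LAW = 0.5 # how strongly punishment corrects behaviour (invented value)

# Each Bureaucrat's efficiency: the share of allocated resources actually
# spent on improving their indicator. All start half-efficient.
efficiency = [0.5] * N
indicators = [0.0] * N

for period in range(100):
    allocation = 1.0 / N  # Government splits the budget evenly
    for i in range(N):
        spent = allocation * efficiency[i]  # improves the indicator
        indicators[i] += spent
        diverted = allocation - spent       # lost to inefficiency
        if diverted > 0 and random.random() < MONITORING:
            # Caught: punishment improves behaviour in proportion
            # to the strength of the rule of law.
            efficiency[i] = min(1.0, efficiency[i] + RULE_OF_LAW * (1 - efficiency[i]))
        else:
            # Inefficiency went unpunished and pays off, so it creeps up.
            efficiency[i] = max(0.0, efficiency[i] - 0.01)

print(f"mean efficiency after 100 periods: {sum(efficiency) / N:.2f}")
```

Even this toy version shows the core idea: the joint settings of monitoring quality and rule of law determine where agent efficiency equilibrates, and hence how indicator movements relate to the resources nominally allocated.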

Diagram of the Policy Priority Inference model
Using data on a country or state’s development indicators and its governance, Policy Priority Inference techniques can model how a government and its policy-makers allocate “transformational resources” to reach their sustainable development goals.

When the historical movements of a country’s DIs are reproduced through the internal workings of the model, the researchers have a powerful proxy for the real-world relationships between government activity, the movement of DIs, and the effects of the interlinkages between DIs, all of which are unique to that country. “Once we can match outcomes, we can discern something that’s going on in reality. But the fact that the method is matching the dynamics of real-world development indicators is just one of multiple ways that we validate our results,” Guerrero notes. This proxy can then be used to project which policy areas should be prioritised in future to best achieve the government’s specified development goals, including predictions of likely timescales.

What’s more, in combination with techniques from evolutionary computation, the model can identify DIs that are linked to large positive spillover effects. These DIs are dubbed “accelerators”. Targeting government resources at such development accelerators fosters not only more rapid results, but also more generalised development…(More)”.

Policy Priority Inference

Article by Elisa Minsart and Vincent Jacquet: “Amidst wide public disillusionment with the institutions of representative democracy, political scientists, campaigners and politicians have intensified efforts to find an effective mechanism to narrow the gap between citizens and those who govern them. One of the most popular remedies in recent years – and one frequently touted as a way to break the Brexit impasse encountered by the UK political class in 2016-19 – is that of citizens’ assemblies. These deliberative forums gather diversified samples of the population, recruited through a process of random selection. Citizens who participate meet experts, deliberate on a specific public issue and make a range of recommendations for policy-making. Citizens’ assemblies are flourishing in many representative democracies – not least in the UK, with the current Climate Assembly UK and Citizens’ Assembly of Scotland. They show that citizens are able to deliberate on complex political issues and to deliver original proposals. 

For several years now, some public leaders, scholars and politicians have sought to integrate these democratic innovations into more traditional political structures. Belgium recently made a step in this direction. Each of Belgium’s three regions has its own parliament, with full legislative powers: on 13 November 2019, a proposition was approved to modify how the Parliament of the Brussels Region operates. The reform mandates the establishment of joint deliberative committees, on which members of the public will serve alongside elected representatives. This will enable ordinary people to deliberate with MPs on preselected themes and to formulate recommendations. The details of the process are currently still being drafted and the first commission is expected to launch at the end of 2020. Despite the COVID-19 crisis, drafting and negotiations with other parties have not been interrupted thanks to an online platform and a videoconference facility.

This experience has been inspired by other initiatives organised in Belgium. In 2011, the G1000 initiative brought together more than 700 randomly selected citizens to debate on different topics. This grassroots experiment attracted lots of public attention. In its aftermath, the different parliaments of the country launched their own citizens’ assemblies, designed to tackle specific local issues. Some international experiences also inspired the Brussels Region, in particular the first Irish Constitutional Convention (2012–2014). This assembly was composed of both elected representatives and randomly selected citizens, and led directly to a referendum that approved the legalisation of same-sex marriage. However, the present joint committees go well beyond these initiatives. Whereas both of these predecessors were ad hoc initiatives designed to resolve particular problems, the Brussels committees will be permanent and hosted at the heart of the parliament. Both of these aspects make the new committees a major innovation and entirely different from the predecessors that helped inspire them…(More)”.

Permanent joint committees in Belgium: involving citizens in parliamentary debate

Zeger van der Wal and Mehmet Demircioglu in the Australian Journal of Public Administration (AJPA): “Are ethical public organisations more likely to realize innovation? The public administration literature is ambiguous about this relationship, with evidence being largely anecdotal and focused mainly on the ethical implications of business‐like behaviour and positive deviance, rather than how ethical behaviour and culture may contribute to innovation.

In this paper we examine the effects of ethical culture and ethical leadership on reported realized innovation, using 2017 survey data from the Australian Public Service Commission (n = 80,316). Our findings show that both ethical culture at the working group‐level and agency‐level as well as ethical leadership have significant positive associations with realized innovation in working groups. The findings are robust across agency, work location, job level, tenure, education, and gender and across different samples. We conclude our paper with theoretical and practical implications of our research findings…(More)”.

More ethical, more innovative? The effects of ethical culture and ethical leadership on realized innovation
