
Stefaan Verhulst

Kayte Spector-Bagdady et al. at the New England Journal of Medicine: “The advent of standardized electronic health records, sustainable biobanks, consumer-wellness applications, and advanced diagnostics has resulted in new health information repositories. As highlighted by the Covid-19 pandemic, these repositories create an opportunity for advancing health research by means of secondary use of data and biospecimens. Current regulations in this space give substantial discretion to individual organizations when it comes to sharing deidentified data and specimens. But some recent examples of health care institutions sharing individual-level data and specimens with companies have generated controversy. Academic medical centers are therefore both practically and ethically compelled to establish best practices for governing the sharing of such contributions with outside entities. We believe that the approach we have taken at Michigan Medicine could help inform the national conversation on this issue.

The Federal Policy for the Protection of Human Subjects offers some safeguards for research participants from whom data and specimens have been collected. For example, researchers must notify participants if commercial use of their specimens is a possibility. These regulations generally cover only federally funded work, however, and they don’t apply to deidentified data or specimens. Because participants value transparency regarding industry access to their data and biospecimens, our institution set out to create standards that would better reflect participants’ expectations and honor their trust. Using a principlist approach that balances beneficence and nonmaleficence, respect for persons, and justice, buttressed by recent analyses and findings regarding contributors’ preferences, Michigan Medicine established a formal process to guide our approach….(More)”.

Sharing Health Data and Biospecimens with Industry — A Principle-Driven, Practical Approach

Report on General and Child-specific Ethical Issues by Gabrielle Berman, Karen Carter, Manuel García-Herranz and Vedran Sekara: “The last few years have seen a proliferation of means and approaches being used to collect sensitive or identifiable data on children. Technologies such as facial recognition and other biometrics, increased processing capacity for ‘big data’ analysis and data linkage, and the roll-out of mobile and internet services and access have substantially changed the nature of data collection, analysis, and use.

Real-time data are essential to support decision-makers in government, development and humanitarian agencies such as UNICEF to better understand the issues facing children, plan appropriate action, monitor progress and ensure that no one is left behind. But the collation and use of personally identifiable data may also pose significant risks to children’s rights.

UNICEF has undertaken substantial work to provide a foundation to understand and balance the potential benefits and risks to children of data collection. This work includes the Industry Toolkit on Children’s Online Privacy and Freedom of Expression and a partnership with GovLab on Responsible Data for Children (RD4C) – which promotes good practice principles and has developed practical tools to assist field offices, partners and governments to make responsible data management decisions.

Balancing the need to collect data to support good decision-making against the need to protect children from the harms that data collection can create has never been more challenging than in the context of the global COVID-19 pandemic. The response to the pandemic has seen an unprecedented, rapid scaling up of technologies to support digital contact tracing and surveillance. The initial approach has included:

  • tracking using mobile phones and other digital devices (tablet computers, the Internet of Things, etc.)
  • surveillance to support movement restrictions, including through the use of location monitoring and facial recognition
  • a shift from in-person service provision and routine data collection to the use of remote or online platforms (including new processes for identity verification)
  • an increased focus on big data analysis and predictive modelling to fill data gaps…(More)”.

Digital contact tracing and surveillance during COVID-19

Andrew Jack at the Financial Times: “When Mozambique was hit by two cyclones in rapid succession last year — causing death and destruction from a natural disaster on a scale not seen in Africa for a generation — government officials added an unusual recruit to their relief efforts. Apart from the usual humanitarian and health agencies, the National Health Institute also turned to Zenysis, a Silicon Valley start-up.

As the UN and non-governmental organisations helped to rebuild lives and tackle outbreaks of disease including cholera, Zenysis began gathering and analysing large volumes of disparate data. “When we arrived, there were 400 new cases of cholera a day and they were doubling every 24 hours,” says Jonathan Stambolis, the company’s chief executive. “None of the data was shared [between agencies]. Our software harmonised and integrated fragmented sources to produce a coherent picture of the outbreak, the health system’s ability to respond and the resources available.

“Three and a half weeks later, they were able to get infections down to zero in most affected provinces,” he adds. The government attributed that achievement to the availability of high-quality data to brief the public and international partners.

“They co-ordinated the response in a way that drove infections down,” he says. Zenysis formed part of a “virtual control room”, integrating information to help decision makers understand what was happening in the worst-hit areas, identify sources of water contamination, and decide where to prioritise cholera vaccinations.

It supported an “mAlert system”, which integrated health surveillance data into a single platform for analysis. The output was daily reports distilled from data issued by health facilities and accommodation centres in affected areas, along with disease monitoring and surveillance data from laboratory testing…(More)”.
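The kind of harmonisation Stambolis describes can be pictured with a small sketch. The toy example below is ours, not Zenysis’s software: all file layouts, column names, and numbers are invented, but it shows the basic move of aligning fragmented case reports to a shared schema and rolling them up into the daily per-province counts a control room would brief from.

```python
import pandas as pd

# Invented case reports from two sources with mismatched schemas.
clinic = pd.DataFrame({"date": ["2019-04-01", "2019-04-01"],
                       "province": ["Sofala", "Sofala"],
                       "cases": [180, 40]})
camp = pd.DataFrame({"report_date": ["2019-04-01"],
                     "region": ["Sofala"],
                     "new_cases": [190]})

# Harmonise: map each source's fields onto one shared schema.
camp = camp.rename(columns={"report_date": "date",
                            "region": "province",
                            "new_cases": "cases"})
combined = pd.concat([clinic, camp], ignore_index=True)
combined["date"] = pd.to_datetime(combined["date"])

# Integrate: one daily per-province count across all sources.
daily = combined.groupby(["date", "province"], as_index=False)["cases"].sum()
print(daily)
```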

How data analysis helped Mozambique stem a cholera outbreak

Article by Abdullah Almaatouq and Alex “Sandy” Pentland: “The idea of collective intelligence is not new. Research has long shown that in a wide range of settings, groups of people working together outperform individuals toiling alone. But how do drastic shifts in circumstances, such as people working mostly at a distance during the COVID-19 pandemic, affect the quality of collective decision-making? After all, public health decisions can be a matter of life and death, and business decisions in crisis periods can have lasting effects on the economy.

During a crisis, it’s crucial to manage the flow of ideas deliberately and strategically so that communication pathways and decision-making are optimized. Our recently published research shows that optimal communication networks can emerge from within an organization when decision makers interact dynamically and receive frequent performance feedback. The results have practical implications for effective decision-making in times of dramatic change….

Our experiments illustrate the importance of dynamically configuring network structures and enabling decision makers to obtain useful, recurring feedback. But how do you apply such findings to real-world decision-making, whether remote or face to face, when constrained by a worldwide pandemic? In such an environment, connections among individuals, teams, and networks of teams must be continually reorganized in response to shifting circumstances and challenges. No single network structure is optimal for every decision, a fact that is clear in a variety of organizational contexts.
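A toy simulation helps make the mechanism concrete. The sketch below is our illustration, not the authors’ experimental protocol: agents estimate a quantity, receive error feedback each round, and rewire their links toward better performers, so collective error falls as the network adapts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Agents hold noisy estimates of some true value and are wired into
# a sparse communication network. All numbers here are illustrative.
n_agents, n_neighbors, truth = 30, 3, 100.0
estimates = truth + rng.normal(0, 30, n_agents)
links = {i: set(rng.choice([j for j in range(n_agents) if j != i],
                           n_neighbors, replace=False))
         for i in range(n_agents)}

for rnd in range(20):
    errors = np.abs(estimates - truth)  # performance feedback signal
    # Rewire: each agent swaps its worst-performing neighbor for the
    # best performer it is not yet connected to.
    for i in range(n_agents):
        worst = max(links[i], key=lambda j: errors[j])
        candidates = [j for j in range(n_agents)
                      if j != i and j not in links[i]]
        best = min(candidates, key=lambda j: errors[j])
        if errors[best] < errors[worst]:
            links[i].remove(worst)
            links[i].add(best)
    # Revise: average one's own estimate with current neighbors'.
    estimates = np.array([
        np.mean([estimates[i]] + [estimates[j] for j in links[i]])
        for i in range(n_agents)])
    if rnd % 5 == 0:
        print(f"round {rnd:2d}: mean abs error = "
              f"{np.abs(estimates - truth).mean():.2f}")
```

Freeze the links and the group simply converges on whatever its initial wiring favours; it is the feedback-driven rewiring that pulls the network toward its better-informed members.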

Public sector. Consider the teams of advisers working with governments in creating guidelines to flatten the curve and help restart national economies. The teams are frequently reconfigured to leverage pertinent expertise and integrate data from many domains. They get timely feedback on how decisions affect daily realities (rates of infection, hospitalization, death) — and then adjust recommended public health protocols accordingly. Some team members move between levels, perhaps being part of a state-level team for a while, then federal, and then back to state. This flexibility ensures that people making big-picture decisions have input from those closer to the front lines.

Witness how Germany considered putting a brake on some of its reopening measures in response to a substantial, unexpected uptick in COVID-19 infections. Such time-sensitive decisions are not made effectively without a dynamic exchange of ideas and data. Decision makers must quickly adapt to facts reported by subject-area experts and regional officials who have the relevant information and analyses at a given moment…(More)”.

Dynamic Networks Improve Remote Decision-Making

Book by Khaled El Emam, Lucy Mosquera, and Richard Hoptroff: “Building and testing machine learning models requires access to large and diverse data. But where can you find usable datasets without running into privacy issues? This practical book introduces techniques for generating synthetic data—fake data generated from real data—so you can perform secondary analysis to do research, understand customer behaviors, develop new products, or generate new revenue.

Data scientists will learn how synthetic data generation provides a way to make such data broadly available for secondary purposes while addressing many privacy concerns. Analysts will learn the principles and steps for generating synthetic data from real datasets. And business leaders will see how synthetic data can help accelerate time to a product or solution.

This book describes:

  • Steps for generating synthetic data using multivariate normal distributions (see the sketch after this list)
  • Methods for distribution fitting covering different goodness-of-fit metrics
  • How to replicate the simple structure of original data
  • An approach for modeling data structure to consider complex relationships
  • Multiple approaches and metrics you can use to assess data utility
  • How analysis performed on real data can be replicated with synthetic data
  • Privacy implications of synthetic data and methods to assess identity disclosure…(More)”.
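As a flavour of the first two bullets, here is a minimal sketch. It is our toy example with invented numbers, not code from the book: fit a multivariate normal to a “real” dataset, sample synthetic records from the fitted distribution, and run a crude utility check on means and correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: 1,000 records, 3 correlated numeric fields.
real = rng.multivariate_normal(
    mean=[50.0, 120.0, 7.5],
    cov=[[25.0, 12.0, 1.0],
         [12.0, 100.0, 2.0],
         [1.0,  2.0,  4.0]],
    size=1000,
)

# Step 1: fit a multivariate normal to the real data.
mu_hat = real.mean(axis=0)
cov_hat = np.cov(real, rowvar=False)

# Step 2: sample synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mu_hat, cov_hat, size=1000)

# Step 3: a crude utility check -- do means and pairwise
# correlations survive the trip from real to synthetic?
print("mean gap:", np.abs(real.mean(0) - synthetic.mean(0)))
print("corr gap:", np.abs(np.corrcoef(real, rowvar=False)
                          - np.corrcoef(synthetic, rowvar=False)).max())
```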

Practical Synthetic Data Generation: Balancing Privacy and the Broad Availability of Data

About: “The Food Systems Dashboard combines data from multiple sources to give users a complete view of food systems. Users can compare components of food systems across countries and regions. They can also identify and prioritize ways to sustainably improve diets and nutrition in their food systems.

Dashboards are useful tools that help users visualize and understand key information for complex systems. Users can track progress to see if policies or other interventions are working at a country or regional level.

In recent years, the public health and nutrition communities have used dashboards to track the progress of health goals and interventions, including the Sustainable Development Goals. To our knowledge, this is the first dashboard that collects country-level data across all components of the food system.

The Dashboard contains over 150 indicators that measure components, drivers, and outcomes of food systems at the country level. As new indicators and data become available, the Dashboard will be updated. Most data used for the Dashboard is open source and available to download directly from the website. Data is pooled from FAO, Euromonitor International, World Bank, and other global and regional data sources….(More)”.
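Since most of the underlying data can be downloaded from the site, comparing countries takes only a few lines of analysis once a file is in hand. The sketch below is hypothetical: the indicator name, countries, and values are invented stand-ins for a downloaded long-format table.

```python
import pandas as pd

# Mock of a downloaded long-format indicator table; every column
# name and value here is invented for illustration.
df = pd.DataFrame({
    "country":   ["Kenya", "Kenya", "Ghana", "Ghana"],
    "indicator": ["fruit_veg_availability"] * 4,
    "year":      [2015, 2018, 2015, 2018],
    "value":     [110.0, 124.0, 150.0, 161.0],
})

pivot = df.pivot_table(index="year", columns="country", values="value")
print(pivot)                        # side-by-side country comparison
print(pivot.pct_change().dropna())  # progress between data points
```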

The Food Systems Dashboard

Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.

We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop these into sustainable solutions. Yet we also observed that the momentum and novelty generated was at risk of getting lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.

Saving Our Oceans: Scaling the Impact of Robust Action Through Crowdsourcing

Matthew Hutson at Science: “Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim—and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks—software architectures that loosely mimic the brain. “Fifty papers in,” he says, “it became clear that it wasn’t obvious what the state of the art even was.”

The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly—and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock’s Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. “It’s the old saw, right?” Guttag said. “If you can’t measure something, it’s hard to make it better.”
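For readers unfamiliar with pruning, the simplest variant, magnitude pruning, fits in a few lines and is a common baseline in this literature. The sketch below is illustrative only (a toy weight matrix, an invented sparsity target), and it hints at why fair comparison matters: accuracy claims only mean something at a matched sparsity level.

```python
import numpy as np

rng = np.random.default_rng(7)

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

layer = rng.normal(size=(256, 256))   # stand-in for one weight matrix
pruned = magnitude_prune(layer, sparsity=0.9)
print("nonzero weights kept:", np.count_nonzero(pruned) / layer.size)
```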

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. “There’s always been these waves of hype,” Musgrave says….(More)”.

Eye-catching advances in some AI fields are not real

Book by Hans Hansen: “Texas prosecutors are powerful: in cases where they seek capital punishment, the defendant is sentenced to death over ninety percent of the time. When management professor Hans Hansen joined Texas’s newly formed death penalty defense team to rethink their approach, they faced almost insurmountable odds. Yet while Hansen was working with the office, they won seventy of seventy-one cases by changing the narrative for death penalty defense. To date, they have succeeded in preventing well over one hundred executions—demonstrating the importance of changing the narrative to change our world.

In this book, Hansen offers readers a powerful model for creating significant organizational, social, and institutional change. He unpacks the lessons of the fight to change capital punishment in Texas—juxtaposing life-and-death decisions with the efforts to achieve a cultural shift at Uber. Hansen reveals how narratives shape our everyday lives and how we can construct new narratives to enact positive change. This narrative change model can be used to transform corporate cultures, improve public services, encourage innovation, craft a brand, or even develop your own leadership.

Narrative Change provides an unparalleled window into an innovative model of change while telling powerful stories of a fight against injustice. It reminds us that what matters most for any organization, community, or person is the story we tell about ourselves—and the most effective way to shake things up is by changing the story….(More)”.

Narrative Change: How Changing the Story Can Transform Society, Business, and Ourselves

Turing Institute: “…Policy Priority Inference builds on a behavioural computational model, taking into account the learning process of public officials, coordination problems, incomplete information, and imperfect governmental monitoring mechanisms. The approach is a unique mix of economic theory, behavioural economics, network science and agent-based modelling. The data that feeds the model for a specific country (or a sub-national unit, such as a state) includes measures of the country’s development indicators (DIs) and how they have moved over the years, specified government policy goals in relation to DIs, the quality of government monitoring of expenditure, and the quality of the country’s rule of law.

From these data alone – and, crucially, with no specific information on government expenditure, which is rarely made available – the model can infer the transformative resources a country has historically allocated toward its Sustainable Development Goals (SDGs), and assess the importance of the interlinkages between DIs. Importantly, it can also reveal where previously hidden inefficiencies lie.

How does it work? The researchers modelled the socioeconomic mechanisms of the policy-making process using agent-computing simulation. They created a simulator featuring an agent called “Government”, which makes decisions about how to allocate public expenditure, and agents called “Bureaucrats”, each of which is essentially a policy-maker linked to a single DI. If a Bureaucrat is allocated some resource, they will use a portion of it to improve their DI, with the rest lost to some degree of inefficiency (in reality, inefficiencies range from simple corruption to poor quality policies and inefficient government departments).

How much resource a Bureaucrat puts towards moving their DI depends on that agent’s experience: if becoming inefficient pays off, they’ll keep doing it. During the process, Government monitors the Bureaucrats, occasionally punishing inefficient ones, who may then improve their behaviour. In the model, a Bureaucrat’s chances of getting caught are linked to the quality of a government’s real-world monitoring of expenditure, and the extent to which they are punished is reflected in the strength of that country’s rule of law.
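To make that loop concrete, here is a deliberately stripped-down sketch. Every parameter name, number, and update rule below is our assumption for illustration; the actual Policy Priority Inference model is far richer, with learning agents, interlinkage networks between DIs, and calibration to real indicator data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Caricature of the Government/Bureaucrat loop described above.
n_indicators = 10          # one Bureaucrat per development indicator (DI)
monitoring_quality = 0.3   # chance an inefficient Bureaucrat is caught
rule_of_law = 0.5          # how hard punishment bites when caught
budget = 1.0               # transformative resources per period

# Share of its allocation each Bureaucrat actually spends on its DI.
efficiency = rng.uniform(0.2, 0.8, n_indicators)
indicators = np.zeros(n_indicators)

for period in range(200):
    allocation = np.full(n_indicators, budget / n_indicators)
    contribution = efficiency * allocation   # spent on moving the DI
    indicators += contribution               # rest is lost to inefficiency

    # Monitoring: the more inefficient an agent, the likelier it is
    # caught; punishment (scaled by rule of law) pushes it to reform.
    caught = rng.random(n_indicators) < monitoring_quality * (1 - efficiency)
    efficiency[caught] += rule_of_law * (1 - efficiency[caught])

    # Uncaught agents drift toward whatever paid off last period.
    efficiency[~caught] *= 0.99
    efficiency = efficiency.clip(0.05, 1.0)

print("final DI levels:", indicators.round(2))
print("final efficiency:", efficiency.round(2))
```

In this caricature, raising monitoring_quality or rule_of_law pushes agents toward efficiency, which shows up as faster-moving indicators.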

[Diagram of the Policy Priority Inference model: using data on a country or state’s development indicators and its governance, Policy Priority Inference techniques can model how a government and its policy-makers allocate “transformational resources” to reach their sustainable development goals.]

When the historical movements of a country’s DIs are reproduced through the internal workings of the model, the researchers have a powerful proxy for the real-world relationships between government activity, the movement of DIs, and the effects of the interlinkages between DIs, all of which are unique to that country. “Once we can match outcomes, we can discern something that’s going on in reality. But the fact that the method is matching the dynamics of real-world development indicators is just one of multiple ways that we validate our results,” notes Omar Guerrero, who leads the research. This proxy can then be used to project which policy areas should be prioritised in future to best achieve the government’s specified development goals, including predictions of likely timescales.

What’s more, in combination with techniques from evolutionary computation, the model can identify DIs that are linked to large positive spillover effects. These DIs are dubbed “accelerators”. Targeting government resources at such development accelerators fosters not only more rapid results, but also more generalised development…(More)”.

Policy Priority Inference
