Stefaan Verhulst
Paper by Liav Orgad and Wessel Reijers: “The COVID19 crisis has triggered a new wave of digitalization of the lives of citizens. To counter the devastating effects of the virus, states and corporations are experimenting with systems that trace citizens as an integral part of public life. In China, a comprehensive sociotechnical system of citizenship governance is already in force with the implementation of the Social Credit System—a technology-driven project that aims to assess, evaluate, and steer the behavior of Chinese citizens.
After presenting social credit systems in China’s public and private sectors (Part I), the article provides normative standards to distinguish the Chinese system from comparable Western systems (Part II). It then shows how civic virtue is instrumentalized in China, both in content (“what” it is) and in form (“how” to cultivate it) (Part III), and claims that social credit systems represent a new form of citizenship governance, “cybernetic citizenship,” which implements different conceptions of state power, civic virtue, and human rights (Part V). On the whole, the article demonstrates how the Chinese Social Credit System redefines the institution of citizenship and warns against similar patterns that are mushrooming in the West.
The article makes three contributions: empirically, it presents China’s Social Credit Systems and reveals their data sources, criteria used, rating methods, and attached sanctions and rewards. Comparatively, it shows that, paradoxically, China’s Social Credit System is not fundamentally different from credit systems in Western societies, yet indicates four points of divergence: scope, authority, regulation, and regime. Normatively, it claims that China’s Social Credit System creates a form of cybernetic citizenship governance, which redefines the essence of citizenship….(More)”
Blog by Eddie Copeland: “…how might we think about exploring the Amplify box in the diagram above? I’d suggest three approaches are likely to emerge:
Let’s discuss these in the context of data.
Specific Fixes — A number of urgent data requests have arisen during Covid where it’s been apparent that councils simply don’t have the data they need. One example is how local authorities have needed to distribute business support grants. Many have discovered that while they have good records of local companies on their business rates database, they lack email or bank details for the majority. That makes it incredibly difficult to get payments out promptly. We can and should fix specific issues like this and ensure councils have those details in future.
New Opportunities — A crisis also prompts us to think about how things could be done differently and better. Perhaps the single greatest new opportunity we could aim to realise on a data front would be shifting from static to dynamic (if not real-time) data on a greater range of issues. As public sector staff, from CEOs to front line workers, have sought to respond to the crisis, the limitations of relying on static weekly, monthly or annual figures have been laid bare. As factors such as transport usage, high street activity and use of public spaces become deeply important in understanding the nature of recovery, more dynamic data could make a real difference.
Generic Capabilities — While the first two categories of activity are worth pursuing, I’d argue the single most positive legacy that could come out of a crisis is that we put in place generic capabilities — core foundation stones — that make us better able to respond to whatever comes next. Some of those capabilities will be about what individual councils need to have in place to use data well. However, given that few crises respect local authority boundaries, arguably the most important set of capabilities concern how different organisations can collaborate with data.
Putting in place the foundation stones for data collaboration
For years there has been discussion about the factors that make data collaboration between different public sector bodies hard.
Five stand out.
- Technology — some technologies make it hard to get the data out (e.g. lack of APIs); worse, some suppliers charge councils to access their own data.
- Data standards — the use of different standards, formats and conventions for recording data, and the lack of common identifiers like Unique Property Reference Numbers (UPRNs) makes it hard to compare, link or match records.
- Information Governance (IG) — ensuring that London’s public sector organisations can use data in a way that’s legal, ethical and secure — in short, worthy of citizens’ trust and confidence — is key. Yet councils’ different approaches to IG can make the process take a long time — sometimes months.
- Ways of working — councils’ different processes require and produce different data.
- Lack of skills — when data skills are at a premium, councils understandably need staff with data competencies to work predominantly on internal projects, with little time available for collaboration.
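The value of common identifiers like UPRNs, noted in the list above, can be made concrete with a minimal record-linkage sketch. The records and field names below are hypothetical assumptions for illustration, not real council data:

```python
# Minimal sketch: linking two hypothetical council datasets on a shared UPRN.
# Without a common identifier, records like these would need fuzzy matching
# on names or addresses, which is error-prone and hard to automate.

business_rates = [
    {"uprn": "100023336956", "company": "Acme Bakery Ltd", "rateable_value": 21000},
    {"uprn": "100023336957", "company": "Borough Books", "rateable_value": 14500},
]

grant_contacts = [
    {"uprn": "100023336956", "email": "owner@acmebakery.example"},
]

# Index contact records by UPRN, then link the two datasets.
contacts_by_uprn = {rec["uprn"]: rec for rec in grant_contacts}

linked, unmatched = [], []
for rec in business_rates:
    contact = contacts_by_uprn.get(rec["uprn"])
    if contact:
        linked.append({**rec, **contact})
    else:
        unmatched.append(rec)  # no contact details on file: payment is delayed

print(f"linked: {len(linked)}, missing contact details: {len(unmatched)}")
```

The same join logic works across organisational boundaries only when both parties record the identifier consistently, which is why shared standards sit alongside technology and governance in the list of barriers.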
There’s a host of reasons why progress to resolve these barriers has been slow. But perhaps the greatest is the perception that the effort required to address them is greater than the reward of doing so…(More)”
See also Call For Action here
Paper by Oren Bar-Gill and Omri Ben-Shahar: “Policymakers and scholars – both lawyers and economists – have long been pondering the optimal design of default rules. From the classic works on “mimicking” defaults for contracts and corporations to the modern rush to set “sticky” default rules to promote policies as diverse as organ donations, retirement savings, consumer protection, and data privacy, the optimal design of default rules has featured as a central regulatory challenge. The key element driving the design is opt-out costs—how to minimize them, or alternatively how to raise them to make the default sticky. Much of the literature has focused on “mechanical” opt-out costs—the effort people incur to select a non-default alternative. This focus is too narrow. A more important factor affecting opt-out is information—the knowledge people must acquire to make informed opt-out decisions. But, unlike high mechanical costs, high information costs need not make defaults stickier; they may instead make the defaults “slippery.”
This counterintuitive claim is due to the phenomenon of uninformed opt-out, which we identify and characterize. Indeed, the importance of uninformed opt-out requires a reassessment of the conventional wisdom about Nudge and asymmetric or libertarian paternalism. We also show that different defaults provide different incentives to acquire the information necessary for informed opt-out. With the ballooning use of default rules as a policy tool, our information-costs theory provides valuable guidance to policymakers….(More)”.
Paper by Sina F. Ardabili et al: “Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved.
This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two models showed promising results (i.e., multi-layered perceptron, MLP, and adaptive network-based fuzzy inference system, ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and variation in its behavior from nation-to-nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized through integrating machine learning and SEIR models….(More)”.
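For context, the SIR/SEIR models that the paper benchmarks against are compartmental differential-equation models. A minimal SEIR sketch, with illustrative parameters that are assumptions for demonstration (not fitted to COVID-19 data), might look like:

```python
# Minimal SEIR compartmental model, integrated with a simple Euler step.
# All parameter values below are illustrative assumptions, not fitted estimates.

def seir(beta, sigma, gamma, days, n=1_000_000, i0=100, dt=0.1):
    """beta: transmission rate; sigma: 1/incubation period;
    gamma: 1/infectious period; returns (S, E, I, R) after `days`."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i / n
        ds = -new_exposed
        de = new_exposed - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s += ds * dt; e += de * dt; i += di * dt; r += dr * dt
    return s, e, i, r

# Basic reproduction number R0 = beta / gamma = 3.0 in this illustration.
s, e, i, r = seir(beta=0.3, sigma=1 / 5.2, gamma=1 / 10, days=120)
print(f"still susceptible after 120 days: {s / 1_000_000:.1%}")
```

The paper’s point is that such models depend on a handful of fixed parameters, which is precisely where long-term accuracy suffers under uncertainty; the proposed hybrid approach would let machine learning adjust those dynamics from data.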
Blogpost by Ania Calderon: “The rapid spread of this disease is exposing fault lines in our political and social balance — most visibly in the lack of protection for the poorest or investment in healthcare systems. It’s also forcing us to think about how we can work across jurisdictions and political contexts to foster better collaboration, build trust in institutions, and save lives.
As we said recently in a call for Open COVID-19 Data, governments need data from other countries to model and flatten the curve, but there is little consistency in how they gather it. Meanwhile, the consequences of different approaches show the balance required in effectively implementing open data policies. For example, Singapore has published detailed personal data about every coronavirus patient, including where they work and live and whether they had contact with others. This helped the city-state keep its infection and death rates extremely low in the early stages of the epidemic, but also led to proportionality concerns as people might be targeted and harmed.
Overall, few governments are publishing the information on which they are basing these huge decisions. This makes it hard to collaborate, scrutinise, and build trust. For example, the models can only be as good as the data that feed them, and we need to understand their limitations. Opening up the data and the source code behind them would give citizens confidence that officials were making decisions in the public interest rather than for political ones. It would also foster the international joined-up action needed to meet this challenge. And it would allow non-state actors into the process to plug gaps and deliver and scale effective solutions quickly.
At the same time, legitimate concerns have been raised about how this data is used, both now and in the future.
As we say in our strategy, openness needs to be balanced with both individual and collective data rights, and policies need to account for context.
People may be OK with giving up some of their privacy — like having their movements tracked by government smartphone apps — if that can help combat a global health crisis, but that would seem an unthinkable invasion of privacy to many in less exceptional times. We rightly worry how this data might be used later on, and by whom. Which shows that data systems need to be able to respond to changing times, while keeping fundamental human rights and civil liberties intact.
As with so many things, this crisis is forcing the world to question orthodoxies around individual and collective data rights and needs. It shines a light on policies and approaches which might help avoid future disasters and build a fairer, healthier, more collaborative society overall….(More)”.
Paper by Mario Schultz and Peter Seele: “Building on an illustrative case of a systemic environmental threat and its multi‐stakeholder response, this paper draws attention to the changing political impacts of corporations in the digital age. Political Corporate Social Responsibility (PCSR) theory suggests an expanded sense of politics and corporations, including impacts that may range from voluntary initiatives to overcome governance gaps, to avoiding state regulation via corporate political activity. Considering digitalization as a stimulus, we explore potential responsibilities of corporations toward public goods in contexts with functioning governments. We show that digitalization—in the form of transparency, surveillance, and data‐sharing—offers corporations’ scope for deliberative public participation.
The starry sky beetle infestation endangering public and private goods is thereby used to illustrate the possibility of expanding the political role of corporations in the digital sphere. We offer a contribution by conceptualizing data‐deliberation as a Habermasian variation of PCSR, defined as the (a) voluntary disclosure of corporate data and its transparent, open sharing with the public sector (b) along with the cooperation with governmental institutions on data analytics methods for examining large‐scale datasets (c) thereby complying with existing national and international regulations on data protection, in particular with respect to privacy and personal data….(More)”.
Karen Hao at MIT Technology Review: “The pandemic, in other words, has turned into a gateway for AI adoption in health care—bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm’s way.
“At a high level, artificial intelligence in health care is very exciting,” says Chris Longhurst, the chief information officer at UC San Diego Health. “But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences.”
Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.
But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious to grant approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. “Physicians, like everybody else—we’re all creatures of habit,” says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. “We don’t change unless we’re forced to change.”
As a result, AI has been slow to gain a foothold. “It feels like there’s something there; there are a lot of papers that show a lot of promise,” said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. But “it’s not yet as widely deployed as we wish.”…
In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. “Some doctors may want to change their way of life,” he says. “What’s coming, we don’t know.”…(More)”
Andy Haldane at the Financial Times: “Yet one source of capital, as in past pandemics, is bucking these trends: social capital. This typically refers to the network of relationships across communities that support and strengthen societies. From surveys, we know that people greatly value these networks, even though social capital itself is rarely assigned a monetary value.
The social distancing policies enacted across the world to curb the spread of Covid-19 might have been expected to weaken social networks and damage social capital. In fact, the opposite has happened. People have maintained physical distance while pursuing social togetherness. Existing networks have been strengthened and new ones created, often digitally. Even as other capital has crumbled, the stock of social capital has risen, acting as a countercyclical stabiliser across communities. We see this daily on our doorsteps through small acts of neighbourly kindness.
We see it in the activities of community groups, charities and philanthropic movements, whose work has risen in importance and prominence. And we see it too in the vastly increased numbers of people volunteering to help. Before the crisis struck, the global volunteer corps numbered a staggering 1bn people. Since then, more people than ever have signed up for civic service, including 750,000 volunteers who are supporting the UK National Health Service. They are the often-invisible army helping fight this invisible enemy.
This same pattern appeared during past periods of societal stress, from pandemics to wars. Then, as now, faith and community groups provided the glue bonding societies together. During the 19th century, the societal stresses arising from the Industrial Revolution — homelessness, family separation, loneliness — were the catalyst for the emergence of the charitable sector.
The economic and social progress that followed the Industrial Revolution came courtesy of a three-way partnership among the private, public and social sectors. The private sector provided the innovative spark; the state provided insurance to the incomes, jobs and health of citizens; and the social sector provided the support network to cope with disruption to lives and livelihoods. Back then, social capital (every bit as much as human, financial and physical capital) provided the foundations on which capitalism was built….(More)”.
James Temple at MIT Technology Review: “…A crucial point of the work—which Steinhardt and MIT’s Andrew Ilyas wrote up in a draft paper that hasn’t yet been published or peer-reviewed—is that communities need to get much better at tracking infections. “With the data we currently have, we actually just don’t know what the level of safe mobility is,” Steinhardt says. “We need much better mechanisms for tracking prevalence in order to do any of this safely.”
The analysis relies on other noisy and less-than-optimal measurements as well, including using hospitalization admissions and deaths to estimate disease prevalence before the lockdowns. They also had to make informed assumptions, which others might disagree with, about how much shelter-in-place rules have altered the spread of the disease. Much of the overall uncertainty is due to the spottiness of testing to date. If case counts are rising, but so is testing, it’s difficult to decipher whether infections are still increasing or a greater proportion of infected people are being evaluated.
This produces some confusing results in the study for any policymaker looking for clear direction. Notably, in Los Angeles, the estimated growth rate of the disease since the shelter-in-place order went into effect ranges from negative to positive. This suggests either that the city could start loosening restrictions or that it needs to tighten them further.
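The cases-versus-testing confound described above can be made concrete with a small sketch. The figures below are made up for illustration: when testing doubles in step with reported cases, raw counts look alarming while the test positivity rate stays flat, suggesting prevalence may not actually be rising.

```python
# Illustrative sketch with made-up figures: rising case counts can reflect
# expanded testing rather than rising infections. Comparing the test
# positivity rate helps disambiguate the two signals.

daily_tests = [1000, 2000, 4000, 8000]
daily_cases = [100, 200, 400, 800]

positivity = [c / t for c, t in zip(daily_cases, daily_tests)]

case_growth = daily_cases[-1] / daily_cases[0]      # 8x: looks alarming
positivity_change = positivity[-1] / positivity[0]  # 1.0x: flat

print(f"cases grew {case_growth:.0f}x, but positivity changed {positivity_change:.1f}x")
```

Positivity has its own biases (who gets tested changes over time), which is why the researchers push for the dedicated surveillance measures described below rather than relying on routine case counts alone.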
Ultimately, the researchers stress that communities need to build up disease surveillance measures to reduce this uncertainty, and strike an appropriate balance between reopening the economy and minimizing public health risks.
They propose several ways to do so, including conducting virological testing on a random sample of some 20,000 people per day in a given area; setting up wide-scale online surveys that ask people to report potential symptoms, similar to what Carnegie Mellon researchers are doing through efforts with both Facebook and Google; and potentially testing for the prevalence of viral material in wastewater, a technique that has “sounded the alarm” on polio outbreaks in the past.
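A back-of-the-envelope sketch shows why a random sample on the order of 20,000 people per day is useful: it pins down prevalence with a narrow confidence interval. The 1% observed prevalence below is an assumption for illustration, and the interval uses the standard normal approximation:

```python
import math

# Normal-approximation 95% confidence interval for prevalence estimated
# from a simple random sample. The 1% observed prevalence is an
# illustrative assumption, not a figure from the study.

def prevalence_ci(positives, sample_size, z=1.96):
    p_hat = positives / sample_size
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat - margin, p_hat + margin

low, high = prevalence_ci(positives=200, sample_size=20_000)  # 1% observed
print(f"estimated prevalence: 1.0%, 95% CI: [{low:.2%}, {high:.2%}]")
```

With 20,000 samples the interval spans roughly 0.86% to 1.14% — tight enough to detect meaningful week-to-week changes, which is the kind of uncertainty reduction the researchers argue routine case counts cannot provide.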
A team of researchers affiliated with MIT, Harvard, and startup Biobot Analytics recently analyzed water samples from a Massachusetts treatment facility, and detected levels of the coronavirus that were “significantly higher” than expected on the basis of confirmed cases in the state, according to a non-peer-reviewed paper released earlier this month….(More)”.
Marcus Fairs at DeZeen: “Hospitals “desperately need designers” to improve everything from the way they tackle coronavirus to the layout of operating theatres and the design of medical charts, according to a senior US doctor.
“We desperately need designers to help organize the environment and products to help keep the correct focus on a patient, and reduce distraction,” said Dr Sam Smith, a clinical physician at Massachusetts General Hospital in Boston.
“We need designers at every turn, but they are so infrequently consulted,” he added. “In the end, most physicians burn out early because, in part, we are lacking well designed cognitive and physical spaces to help process the information smoothly.”…
“Visual hierarchy is a huge problem in medicine,” Smith said, giving an example. “This is very evident in online medical charts. Very poor visual hierarchy exists because designers were not consulted in the platform or details of the patient information organization or presentation.”
“This inability to incorporate good visual hierarchy, for example organizing a complex medical history in a visual way to emphasize what really needs attention for the patient, has led to ineffective care, and even patient harm on occasions over the years,” he explained.
“I have seen it in my 20 years of practice time and time again. Doctors are humans too, and the demands on them processing huge amounts of information are high.”…(More)”.