Stefaan Verhulst
Paper by Henry Chesbrough: “Covid-19 has severely tested our public health systems. Recovering from Covid-19 will soon test our economic systems. Innovation will have an important role to play in recovering from the aftermath of the coronavirus. This article discusses how to manage innovation as part of that recovery, derives some lessons from how we have responded to the virus so far, and considers what those lessons imply for managing innovation during the recovery.
Covid-19’s assault has prompted a number of encouraging developments. One development has been the rapid mobilization of scientists, pharmaceutical companies and government officials to launch a variety of scientific initiatives to find an effective response to the virus. As of the time of this writing, there are tests underway of more than 50 different compounds as possible vaccines against the virus. Most of these will ultimately fail, but the severity of the crisis demands that we investigate every plausible candidate. We need rapid, parallel experimentation, and it must be the test data that select our vaccine, not internal political or bureaucratic processes.
A second development has been the release of copious amounts of information about the virus, its spread, and human responses to various public health measures. The Gates Foundation, the Chan-Zuckerberg Foundation and the White House Office of Science and Technology Policy have joined forces to publish all of the known medical literature on the coronavirus in machine-readable form, with the intent to accelerate the analysis of existing research and identify possible new avenues of attack against Covid-19. The coronavirus itself was sequenced early in the outbreak by scientists in China, providing the genetic sequence of the virus and showing where it differed from earlier viruses such as SARS and MERS. This data was immediately shared widely with scientists and researchers around the world. At the same time, GitHub and the Humanitarian Data Exchange each host an accumulating series of datasets on the geography of the spread of the disease (including positive test cases, hospitalizations, and deaths).
What these developments have in common is openness. In fighting a pandemic, speed is crucial, and the sooner we know more and are able to take action, the better for all of us. Opening up mobilizes knowledge from many different places, causing our learning to advance and our progress against the disease to accelerate. Openness unleashes a volunteer army of researchers, working in their own facilities, across different time zones, and different countries. Openness leverages the human capital available in the world to tackle the disease, and also accesses the physical capital (such as plant and equipment) already in place to launch rapid testing of possible solutions. This openness corresponds well to an academic body of work called open innovation (Chesbrough, 2003; Chesbrough, 2019).
Innovation is often analyzed in terms of costs, and the question of whether to “make or buy” often rests on which approach costs less. But in a pandemic, time is so valuable that the question of costs matters far less than the ability to get to a solution sooner. The number of Covid-19 infections appears to be doubling every 3–5 days, so a delay of just a few weeks in the search for a new vaccine (vaccines normally take 1–2 years or more to develop) might witness multiple doublings of the size of the infected population. It is for this reason that Bill Gates is providing funds to construct facilities in advance for producing the leading vaccine candidates. Though the facilities for the losing candidates will not be used, it will save precious time to make the winning vaccine in high volume once it is found.
Open innovation can help speed things up….(More)”.
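The cost of delay that Chesbrough describes can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the doubling arithmetic; the starting count and doubling time below are hypothetical, not figures from the paper:

```python
# Illustrative sketch: how a delay compounds under exponential doubling.
# Assumes a constant doubling time; real epidemic growth varies with
# interventions, so treat this purely as an order-of-magnitude exercise.

def infections_after(initial: int, days: int, doubling_time_days: float) -> int:
    """Project infections forward assuming constant exponential doubling."""
    return round(initial * 2 ** (days / doubling_time_days))

# A 20-day delay with a 4-day doubling time spans 5 doublings,
# i.e. a 32-fold increase from the starting count.
start = 1_000
print(infections_after(start, days=20, doubling_time_days=4))  # → 32000
```

Even under the slower 5-day doubling in the excerpt's 3–5-day range, the same 20-day delay yields a 16-fold increase, which is why parallel experimentation and advance manufacturing capacity trade money for time.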
Paper by Daniel Goldstein and Johannes Wiedemann: “To combat the novel coronavirus, there must be relatively uniform implementation of preventative measures, e.g., social distancing and stay-at-home orders, in order to minimize continued spread. We analyze cellphone mobility data to measure county-level compliance with these critical public health policies. Leveraging staggered roll-out, we estimate the causal effect of stay-at-home orders on mobility using a difference-in-differences strategy, which we find to have significantly curtailed movement.
However, examination of descriptive heterogeneous effects suggests the critical role that several sociopolitical attributes hold for producing asymmetrical compliance across society. We examine measures of partisanship, partisan identity being shared with government leaders, and trust in government (measured by the proxies of voter turnout and social capital). We find that Republican counties comply less, but comply relatively more when directives are given by co-partisan leaders, suggesting citizens are more trusting in the authority of co-partisans. Furthermore, our proxy measures suggest that trust in government increases overall compliance. However, when trust (as measured by social capital) is interacted with county-level partisanship, which we interpret as community-level trust, we find that trust amplifies compliance or noncompliance, depending upon the prevailing community sentiment.
We argue that these results align with a theory of public policy compliance in which individual behavior is informed by one’s level of trust in the experts who craft policy and one’s trust in those who implement it, i.e., politicians and bureaucrats. Moreover, this evaluation is amplified by local community sentiments. Our results are supportive of this theory and provide a measure of the real-world importance of trust in government to citizen welfare. Moreover, our results illustrate the role that political polarization plays in creating asymmetrical compliance with mitigation policies, an outcome that may prove severely detrimental to successful containment of the COVID-19 pandemic….(More)”.
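The difference-in-differences strategy the authors describe can be sketched in its simplest 2x2 form: compare the change in mobility in counties that received a stay-at-home order against the change in counties that did not, so that common trends (seasonal or behavioral drift) cancel out. The data below are synthetic; the paper itself uses county-level cellphone mobility panels with staggered roll-out and a regression specification:

```python
# Minimal sketch of the 2x2 difference-in-differences logic.
# DiD effect = (change in treated group) - (change in control group);
# subtracting the control group's change nets out the common trend.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Return the difference-in-differences estimate from four samples."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Synthetic average daily mobility indices, before/after the order date.
treated_pre  = [100, 98, 102, 101]   # counties that adopt an order
treated_post = [70, 68, 73, 69]      # mobility after the order
control_pre  = [99, 101, 100, 97]    # counties with no order
control_post = [90, 92, 91, 88]      # only the common downward drift

print(did_estimate(treated_pre, treated_post, control_pre, control_post))  # → -21.25
```

Here mobility falls everywhere, but the extra 21-point drop in order counties is attributed to the policy — the same logic, extended to staggered adoption dates, underlies the paper's estimate.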
David Matthews at THE: “In contrast to other countries, philosophers, historians, theologians and jurists have played a major role advising the state as it seeks to loosen restrictions…
In the struggle against the new coronavirus, humanities academics have entered the fray – in Germany at least.
Arguably to a greater extent than has happened in the UK, France or the US, the country has enlisted the advice of philosophers, historians of science, theologians and jurists as it navigates the delicate ethical balancing act of reopening society while safeguarding the health of the public.
When the German federal government announced a slight loosening of restrictions on 15 April – allowing small shops to open and some children to return to school in May – it had been eagerly awaiting a report written by a 26-strong expert group containing only a minority of natural scientists and barely a handful of virologists and medical specialists.
Instead, this working group from the Leopoldina – Germany’s independent National Academy of Sciences dating back to 1652 – included historians of industrialisation and early Christianity, a specialist on the philosophy of law and several pedagogical experts.
This paucity of virologists earned the group a swipe from Markus Söder, minister-president of badly hit Bavaria, who has led calls in Germany for a tough lockdown (although earlier in the pandemic the Leopoldina did release a report written by more medically focused specialists).
But “the crisis is a complex one, it’s a systemic crisis” and so it needs to be dissected from every angle, argued Jürgen Renn, director of the Max Planck Institute for the History of Science, and one of those who wrote the crucial recommendations.
And Professor Renn – who earlier this year published a book on rethinking science in the Anthropocene – made the argument for green post-virus reconstruction. Urbanisation and deforestation have squashed mankind and wildlife together, making other animal-to-human disease transmissions ever more likely, he argued. “It’s not the only virus waiting out there,” he said.
Germany’s Ethics Council – which traces its roots back to the stem cell debates of the early 2000s and is composed of theologians, jurists, philosophers and other ethical thinkers – also contributed to a report at the end of March, warning that it was up to elected politicians, not scientists, to make the “painful decisions” weighing up the lockdown’s effect on health and its other side-effects….(More)“.
“A rapid evidence review of the technical considerations and societal implications of using technology to transition from the COVID-19 crisis” by the Ada Lovelace Institute: “The review focuses on three technologies in particular: digital contact tracing, symptom tracking apps and immunity certification. It makes pragmatic recommendations to support well-informed policymaking in response to the crisis. It is informed by the input of more than twenty experts drawn from across a wide range of domains, including technology, policy, human rights and data protection, public health and clinical medicine, behavioural science and information systems, philosophy, sociology and anthropology.
The purpose of this review is to open up, rather than close down, an informed and public dialogue on the technical considerations and societal implications of the use of technology to transition from the crisis.
Key findings
There is an absence of evidence to support the immediate national deployment of symptom tracking applications, digital contact tracing applications and digital immunity certificates. While the Government is right to explore non-clinical measures for transition, for national policy to rely on these apps, they would need to be able to:
- Represent accurate information about infection or immunity
- Demonstrate technical capabilities to support required functions
- Address various practical issues for use, including meeting legal tests
- Mitigate social risks and protect against exacerbating inequalities and vulnerabilities
At present the evidence does not demonstrate that tools are able to address these four components adequately. We offer detailed evidence, and recommendations for each application in the report summary.
In particular, we recommend that:
- Effective deployment of technology to support the transition from the crisis will be contingent on public trust and confidence, which can be strengthened through the establishment of two accountability mechanisms:
- the Group of Advisors on Technology in Emergencies (GATE) to review evidence, advise on design and oversee implementation, similar to the expert group recently established by Canada’s Chief Science Adviser; and
- an independent oversight mechanism to conduct real-time scrutiny of policy formulation.
- Clear and comprehensive primary legislation should be advanced to regulate data processing in symptom tracking and digital contact tracing applications. Legislation should impose strict purpose, access and time limitations…(More)”.
Hunton Privacy Blog: “On April 21, 2020, the European Data Protection Board (“EDPB”) adopted Guidelines on the processing of health data for scientific purposes in the context of the COVID-19 pandemic. The aim of the Guidelines is to provide clarity on the most urgent matters relating to health data, such as legal basis for processing, the implementation of adequate safeguards and the exercise of data subject rights.
The Guidelines note that the General Data Protection Regulation (“GDPR”) provides a specific derogation to the prohibition on processing of sensitive data under Article 9, for scientific purposes. With respect to the legal basis for processing, the Guidelines state that consent may be relied on under both Article 6 and the derogation to the prohibition on processing under Article 9 in the context of COVID-19, as long as the requirements for explicit consent are met, and as long as there is no power imbalance that could pressure or disadvantage a reluctant data subject. Researchers should keep in mind that study participants must be able to withdraw their consent at any time. National legislation may also provide an appropriate legal basis for the processing of health data and a derogation to the Article 9 prohibition. Furthermore, national laws may restrict data subject rights, though these restrictions should apply only to the extent strictly necessary.
In the context of transfers to countries outside the European Economic Area that have not been deemed adequate by the European Commission, the Guidelines note that the “public interest” derogation to the general prohibition on such transfers may be relied on, as well as explicit consent. The Guidelines add, however, that these derogations should only be relied on as a temporary measure and not for repetitive transfers.
The Guidelines highlight the importance of complying with the GDPR’s data protection principles, particularly with respect to transparency. Ideally, notice of processing as part of a research project should be provided to the relevant data subject before the project commences, if data has not been collected directly from the individual, in order to allow the individual to exercise their rights under the GDPR. There may be instances where, considering the number of data subjects, the age of the data and the safeguards in place, it would be impossible or require disproportionate effort to provide notice, in which case researchers may be able to rely on the exemptions set out under Article 14 of the GDPR.
The Guidelines also highlight that processing for scientific purposes is generally not considered incompatible with the purposes for which data is originally collected, assuming that the principles of data minimization, integrity, confidentiality and data protection by design and by default are complied with (See Guidelines)”.
Paper by Christina Koningisor: “Few contest the importance of a robust transparency regime in a democratic system of government. In the United States, the “crown jewel” of this regime is the Freedom of Information Act (FOIA). Yet despite widespread agreement about the importance of transparency in government, few are satisfied with FOIA. Since its enactment, the statute has engendered criticism from transparency advocates and critics alike for insufficiently serving the needs of both the public and the government. Legal scholars have widely documented these flaws in the federal public records law.
In contrast, scholars have paid comparatively little attention to transparency laws at the state and local level. This is surprising. The role of state and local government in the everyday lives of citizens has increased in recent decades, and many critical government functions are fulfilled by state and local entities today. Moreover, crucial sectors of the public—namely, media and advocacy organizations—rely as heavily on state public records laws as they do on FOIA to hold the government to account. Yet these state laws and their effects remain largely overlooked, creating gaps in both local government law and transparency law scholarship.
This Article attempts to fill these gaps by surveying the state and local transparency regime, focusing on public records laws in particular. Drawing on hundreds of public records datasets, along with qualitative interviews, the Article demonstrates that in contrast with federal law, state transparency law introduces comparatively greater barriers to disclosure and comparatively higher burdens upon government. Further, the Article highlights the existence of “transparency deserts,” or localities in which a combination of poorly drafted transparency laws, hostile government actors, and weak local media and civil society impedes effective public oversight of government.
The Article serves as a corrective to the scholarship’s current, myopic focus on federal transparency law…(More)”.
Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….
Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, which was in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage.
The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:
- Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
- Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
- Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
- Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether those processes need revision in light of the particular nature of AI as a technology, or whether those involved need capacity-building so that they are able to receive complaints concerning AI.
- Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
- Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.
Article by Daniel Wu and Mike Loukides: “…Apple learned a critical lesson from this experience. User buy-in cannot end with compliance with rules. It requires ethics, constantly asking how to protect, fight for, and empower users, regardless of what the law says. These strategies contribute to perceptions of trust.
Trust has to be earned, is easily lost, and is difficult to regain….
In our more global, diverse, and rapidly changing world, ethics may be embodied by the “platinum rule”: Do unto others as they would want done to them. One established field of ethics—bioethics—offers four principles that are related to the platinum rule: nonmaleficence, justice, autonomy, and beneficence.
For organizations that want to be guided by ethics, regardless of what the law says, these principles serve as essential tools for a purpose-driven mission: protecting (nonmaleficence), fighting for (justice), and empowering users and employees (autonomy and beneficence).
An ethics leader protects users and workers in its operations by using governance best practices.
Before creating the product, it understands both the qualitative and quantitative contexts of key stakeholders, especially those who will be most impacted, identifying their needs and fears. When creating the product, it uses data protection by design, working with cross-functional roles like legal and privacy engineers to embed ethical principles into the lifecycle of the product and formalize data-sharing agreements. Before launching, it audits the product thoroughly and conducts scenario planning to understand potential ethical mishaps, such as perceived or real gender bias or human rights violations in its supply chain. After launching, its terms of service and collection methods are highly readable and enable even disaffected users to resolve issues delightfully.
Ethics leaders also fight for users and workers, who can be forgotten. These leaders may champion enforceable consumer protections in the first place, before a crisis erupts. With social movements, leaders fight powerful actors preying on vulnerable communities or the public at large—and critically examine and ameliorate their own participation in systemic violence. As a result, instead of making last-minute heroic efforts to change compromised operations, they have been iterating all along.
Finally, ethics leaders empower their users and workers. With diverse communities and employees, they co-create new products that help improve basic needs and enable more, including the vulnerable, to increase their autonomy and their economic mobility. These entrepreneurial efforts validate new revenue streams and relationships while incubating next-generation workers who self-govern and push the company’s mission forward. Employees voice their values and diversify their relationships. Alison Taylor, the Executive Director of Ethical Systems, argues that internal processes should “improve [workers’] reasoning and creativity, instead of short-circuiting them.” Enabling this is a culture of psychological safety and training to engage kindly with divergent ideas.
These purpose-led strategies boost employee performance and retention, drive deep customer loyalty, and carve legacies.
To be clear, Apple may be implementing at least some of these strategies already—but perhaps not uniformly or transparently. For instance, Apple has implemented some provisions of the European Union’s General Data Protection Regulation for all US residents—not just EU and CA residents—including the ability to access and edit data. This expensive move, which goes beyond strict legal requirements, was implemented even without public pressure.
But ethics strategies have major limitations leaders must address
As demonstrated by the waves of ethical “principles” released by Fortune 500 companies and commissions, ethics programs can be murky, dominated by a white, male, and Western interpretation.
Furthermore, focusing purely on ethics gives companies an easy way to “free ride” off social goodwill, but ultimately stay unaccountable, given the lack of external oversight over ethics programs. When companies substitute unaccountable data ethics principles for thoughtful engagement with the enforceable data regulation principles, users will be harmed.
Long-term, without the ability to wave a $100 million fine with clear-cut requirements and lawyers trained to advocate for them internally, ethics leaders may face barriers to buy-in. Unlike their sales, marketing, or compliance counterparts, ethics programs do not directly add revenue or reduce costs. In recessions, these “soft” programs may be the first on the chopping block.
As a result of these factors, we will likely see a surge in ethics-washing: well-intentioned companies that talk ethics, but don’t walk it. More will view these efforts as PR-driven ethics stunts, which don’t deeply engage with actual ethical issues. If harmful business models do not change, ethics leaders will be fighting a losing battle….(More)”.
The Economist: “Two decades ago Microsoft was a byword for a technological walled garden. One of its bosses called free open-source programs a “cancer”. That was then. On April 21st the world’s most valuable tech firm joined a fledgling movement to liberate the world’s data. Among other things, the company plans to launch 20 data-sharing groups by 2022 and give away some of its digital information, including data it has aggregated on covid-19.
Microsoft is not alone in its newfound fondness for sharing in the age of the coronavirus. “The world has faced pandemics before, but this time we have a new superpower: the ability to gather and share data for good,” Mark Zuckerberg, the boss of Facebook, a social-media conglomerate, wrote in the Washington Post on April 20th. Despite the EU’s strict privacy rules, some Eurocrats now argue that data-sharing could speed up efforts to fight the coronavirus.
But the argument for sharing data is much older than the virus. The OECD, a club mostly of rich countries, reckons that if data were more widely exchanged, many countries could enjoy gains worth between 1% and 2.5% of GDP. The estimate is based on heroic assumptions (such as putting a number on business opportunities created for startups). But economists agree that readier access to data is broadly beneficial, because data are “non-rivalrous”: unlike oil, say, they can be used and re-used without being depleted, for instance to power various artificial-intelligence algorithms at once.
Many governments have recognised the potential. Cities from Berlin to San Francisco have “open data” initiatives. Companies have been cagier, says Stefaan Verhulst, who heads the Governance Lab at New York University, which studies such things. Firms worry about losing intellectual property, imperilling users’ privacy and hitting technical obstacles. Standard data formats (eg, JPEG images) can be shared easily, but much that a Facebook collects with its software would be meaningless to a Microsoft, even after reformatting. Less than half of the 113 “data collaboratives” identified by the lab involve corporations. Those that do, including initiatives by BBVA, a Spanish bank, and GlaxoSmithKline, a British drugmaker, have been small or limited in scope.
Microsoft’s campaign is the most consequential by far. Besides encouraging more non-commercial sharing, the firm is developing software, licences and (with the Governance Lab and others) governance frameworks that permit firms to trade data or provide access to them without losing control. Optimists believe that the giant’s move could be to data what IBM’s embrace in the late 1990s of the Linux operating system was to open-source software. Linux went on to become a serious challenger to Microsoft’s own Windows and today underpins Google’s Android mobile software and much of cloud-computing…(More)”.
Stuart Mills at Behavioural Public Policy: “A criticism of behavioural nudges is that they lack precision, sometimes nudging people who – had their personal circumstances been known – would have benefitted from being nudged differently. This problem may be solved through a programme of personalized nudging. This paper proposes a two-component framework for personalization that suggests choice architects can personalize both the choices being nudged towards (choice personalization) and the method of nudging itself (delivery personalization). To do so, choice architects will require access to heterogeneous data.
This paper argues that such data need not take the form of big data, but agrees with previous authors that the opportunities to personalize nudges increase as data become more accessible. Finally, this paper considers two challenges that a personalized nudging programme must consider, namely the risk personalization poses to the universality of laws, regulation and social experiences, and the data access challenges policy-makers may encounter….(More)”.
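The paper's two-component framework can be sketched as a toy decision rule: a choice architect uses heterogeneous data to personalize both the option nudged towards (choice personalization) and the method of nudging (delivery personalization). The attributes, options and rules below are hypothetical illustrations, not taken from the paper:

```python
# Illustrative sketch of a two-component personalized nudge:
# the same profile data drives two separate decisions — what to nudge
# towards, and how to deliver the nudge. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Nudge:
    choice: str    # what the person is nudged towards (choice personalization)
    delivery: str  # how the nudge is delivered (delivery personalization)

def personalize(profile: dict) -> Nudge:
    # Choice personalization: pick the option suited to circumstances.
    if profile.get("income_volatile"):
        choice = "income-linked pension plan"
    else:
        choice = "fixed-rate pension plan"
    # Delivery personalization: pick the framing the person responds to.
    if profile.get("responds_to_peers"):
        delivery = "social-norm message"
    else:
        delivery = "default enrolment"
    return Nudge(choice, delivery)

print(personalize({"income_volatile": True, "responds_to_peers": False}))
```

The sketch also makes the paper's data point concrete: both branches depend on profile attributes, so the granularity of personalization is bounded by how much heterogeneous data the choice architect can access.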