World Bank Report: “Facing the COVID-19 pandemic requires an unprecedented degree of cooperation between governments and citizens and across all facets of society to implement spatial distancing and other policy measures. This paper proposes to think about handling the pandemic as a collective action problem that can be alleviated by policies that foster trust and social connection. Policy and institutional recommendations are presented according to a three-layered pandemic response generally corresponding to short-, medium-, and long-term needs. This paper focuses on building connection and cooperation as means to bring about better health and socioeconomic outcomes. Many factors outside the paper’s scope, such as health policy choices, will greatly affect the outcomes. As such, the paper explores the role of trust, communication, and collaboration conditional on sound health and economic policy choices…(More)”.
Nudging in Public Policy
Alice Moseley in the Oxford Research Encyclopedia of Politics: “Nudging” in public policy involves using behavioral, economic, and psychological insights to influence the behavior of policy targets in order to help achieve policy goals. This approach to public policy was advocated by Thaler and Sunstein in their book Nudge in 2008. Nudging is underpinned by a conception that individuals use mental shortcuts (heuristics) in day-to-day decision-making, shortcuts that do not always serve their long-term interests (for instance, in relation to eating and exercise patterns, road safety, or saving for the future). Nudging does not involve seeking to persuade individuals about the merits of pursuing particular courses of action that will better serve their long-term welfare. Rather, it involves altering the choice environment so that when people follow their instincts, using familiar mental shortcuts, the most prominent option available to the policy target will be one that is likely to promote their own welfare, and that of society more widely. Nudging has come to be considered a core part of the policy toolkit in many countries, but academic scholarship has also debated the ethical dimensions of nudging, and there is a flourishing research literature on the efficacy, public acceptability, merits, and limitations of this approach within public policy….(More)”.
Creating a digital commons
Report by the IPPR (UK): “There are, today, almost no parts of life that are untouched by the presence of data. Virtually every action we take produces some form of digital trail – our phones track our locations, our browsers track searches, our social network apps log our friends and family – even when we are only dimly aware of it.
It is the combination of this near-ubiquitous gathering of data with fast processing that has generated the economic and social transformation of the last few years – one that, if current developments in artificial intelligence (AI) continue, is only likely to accelerate. Combined with data-enabled technology, from the internet of things to 3D printing, these capabilities place us potentially on the cusp of a radically different economy and society.
As the world emerges from the first phase of the pandemic, the demands for a socially just and sustainable recovery have grown. The data economy can and should be an essential part of that reconstruction, from the efficient management of energy systems to providing greater flexibility in working time. However, without effective public policy, and democratic oversight and management, the danger is that the tendencies in the data economy that we have already seen towards monopoly and opacity – reinforced, so far, by the crisis – will continue to dominate. It is essential, then, that planning for a fairer, more sustainable economy in the future build in active public policy for data…
This report focusses closely on data as the fundamental building block of the emerging economy, and argues that its use, management, ownership, and control are critical to shaping the future…(More)”.
20’s the limit: How to encourage speed reductions
Report by The Wales Centre for Public Policy: “This report has been prepared to support the Welsh Government’s plan to introduce a 20mph national default speed limit in 2022. It aims to address two main questions: 1) What specific behavioural interventions might be implemented to promote driver compliance with 20mph speed limits in residential areas; and 2) Are there particular demographics, community characteristics or other features that should form the basis of a segmentation approach?
The reasons for speeding are complex, but many behaviour change techniques have been successfully applied to road safety, including some which use behavioural insights or “nudges”.
Drivers can be segmented into three types: defiers (a small minority), conformers (the majority) and champions (a minority). Conformers are law-abiding citizens who respect social norms – getting this group to comply can achieve a tipping point.
Other sectors have shown that providing information is only effective if part of a wider package of measures, and that people are most open to change at times of disruption or learning (e.g. learner drivers)….(More)”.
Project Patient Voice
Press Release: “The U.S. Food and Drug Administration today launched Project Patient Voice, an initiative of the FDA’s Oncology Center of Excellence (OCE). Through a new website, Project Patient Voice creates a consistent source of publicly available information describing patient-reported symptoms from cancer trials for marketed treatments. While this patient-reported data has historically been analyzed by the FDA during the drug approval process, it is rarely included in product labeling and, therefore, is largely inaccessible to the public.
“Project Patient Voice has been initiated by the Oncology Center of Excellence to give patients and health care professionals unique information on symptomatic side effects to better inform their treatment choices,” said FDA Principal Deputy Commissioner Amy Abernethy, M.D., Ph.D. “The Project Patient Voice pilot is a significant step in advancing a patient-centered approach to oncology drug development. Where patient-reported symptom information is collected rigorously, this information should be readily available to patients.”
Patient-reported outcome (PRO) data is collected using questionnaires that patients complete during clinical trials. These questionnaires are designed to capture important information about disease- or treatment-related symptoms, including how severe a symptom or side effect is and how often it occurs.
Patient-reported data can provide additional, complementary information for health care professionals to discuss with patients, specifically when discussing the potential side effects of a particular cancer treatment. In contrast to the clinician-reported safety data in product labeling, the data in Project Patient Voice is obtained directly from patients and can show symptoms before treatment starts and at multiple time points while receiving cancer treatment.
The Project Patient Voice website will include a list of cancer clinical trials that have available patient-reported symptom data. Each trial will include a table of the patient-reported symptoms collected. Each patient-reported symptom can be selected to display a series of bar and pie charts describing the patient-reported symptom at baseline (before treatment starts) and over the first 6 months of treatment. This information provides insights into side effects not currently available in standard FDA safety tables, including existing symptoms before the start of treatment, symptoms over time, and the subset of patients who did not have a particular symptom prior to starting treatment….(More)”.
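The charts described above are, at heart, simple summaries of per-patient questionnaire grades. As a rough illustration only, the Python sketch below (using a hypothetical data layout and column names; the FDA has not published its pipeline) computes a baseline severity distribution and the share of patients who developed a symptom they did not have before treatment:

```python
# Minimal sketch, NOT the FDA's actual pipeline: summarizing hypothetical
# patient-reported outcome (PRO) grades recorded at baseline and follow-up.
import pandas as pd

pro = pd.DataFrame({
    "patient_id":  [1, 1, 2, 2, 3, 3],
    "visit_month": [0, 3, 0, 3, 0, 3],   # 0 = baseline (before treatment)
    "symptom":     ["nausea"] * 6,
    "grade":       [0, 2, 1, 1, 0, 0],   # 0 = none ... 4 = very severe
})

baseline = pro[pro.visit_month == 0].set_index("patient_id")["grade"]
on_treatment = pro[pro.visit_month > 0].groupby("patient_id")["grade"].max()

# Severity distribution before treatment starts (the "baseline" chart).
print(baseline.value_counts().sort_index())

# Among patients symptom-free at baseline, the share with new-onset symptoms
# while on treatment (the subset highlighted on the website).
symptom_free = baseline[baseline == 0].index
print(f"New-onset nausea: {(on_treatment[symptom_free] > 0).mean():.0%}")
```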
Interventions to mitigate the racially discriminatory impacts of emerging tech including AI
Joint Civil Society Statement: “As widespread recent protests have highlighted, racial inequality remains an urgent and devastating issue around the world, and this is as true in the context of technology as it is everywhere else. In fact, it may be more so, as algorithmic technologies based on big data are deployed at previously unimaginable scale, reproducing the discriminatory systems that build and govern them.
The undersigned organizations welcome the publication of the report “Racial discrimination and emerging digital technologies: a human rights analysis,” by Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, and wish to underscore the importance and timeliness of a number of the recommendations made therein:
- Technologies that have had or will have significant racially discriminatory impacts should be banned outright.
While incremental regulatory approaches may be appropriate in some contexts, where a technology is demonstrably likely to cause racially discriminatory harm, it should not be deployed until that harm can be prevented. Moreover, certain technologies may always have disparate racial impacts, no matter how much their accuracy can be improved. In the present moment, racially discriminatory technologies include facial and affect recognition technology and so-called predictive analytics. We support Special Rapporteur Achiume’s call for mandatory human rights impact assessments as a prerequisite for the adoption of new technologies. We also believe that where such assessments reveal that a technology has a high likelihood of deleterious racially disparate impacts, states should prevent its use through a ban or moratorium. We join the Special Rapporteur in welcoming recent municipal bans, for example, on the use of facial recognition technology, and encourage national governments to adopt similar policies. Correspondingly, we reiterate our support for states’ imposition of an immediate moratorium on the trade and use of privately developed surveillance tools until such time as states enact appropriate safeguards, and congratulate Special Rapporteur Achiume on joining that call.
- Gender mainstreaming and representation along racial, national and other intersecting identities require radical improvement at all levels of the tech sector.…
- Technologists cannot solve political, social, and economic problems without the input of domain experts and those personally impacted.…
- Access to technology is as urgent an issue of racial discrimination as inequity in the design of technologies themselves.…
- Representative and disaggregated data is a necessary, if not sufficient, condition for racial equity in emerging digital technologies, but it must be collected and managed equitably as well.…
- States as well as corporations must provide remedies for racial discrimination, including reparations.… (More)”.
The Misinformation Edition
On-Line Exhibition by the Glass Room: “…In this exhibition – aimed at young people as well as adults – we explore how social media and the web have changed the way we read information and react to it. Learn why finding “fake news” is not as easy as it sounds, and how the term “fake news” is as much a problem as the news it describes. Dive into the world of deep fakes, which are now so realistic that they are virtually impossible to detect. And find out why social media platforms are designed to keep us hooked, and how they can be used to change our minds. You can also read our free Data Detox Kit, which reveals how to tell facts from fiction and why it benefits everyone around us when we take a little more care about what we share…(More)”.
The Atlas of Surveillance
Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.
That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.
We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.
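As a concrete (if simplified) illustration of the kind of local lookup the Atlas enables, the sketch below filters a crowdsourced dataset by state and tallies the technologies reported there. The file name and column names are assumptions for illustration, not the Atlas’s actual schema:

```python
# Minimal sketch, assuming a CSV export with hypothetical columns:
# agency, city, state, technology. The real Atlas schema may differ.
import csv
from collections import Counter

def technologies_in_state(path: str, state: str) -> Counter:
    """Tally surveillance technologies reported for agencies in one state."""
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["state"] == state:
                counts[row["technology"]] += 1
    return counts

# Usage (hypothetical file): technologies_in_state("atlas.csv", "NV")
# might return Counter({"body-worn cameras": 12, "drones": 7, ...})
```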
Four Principles for Integrating AI & Good Governance
Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, distributing resources and delivering governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks.
But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.
Frameworks for the procurement and implementation of AI in public service remain largely undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI.
As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?
We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:
- Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
- Informed Procurement: issues around the acquisition and development of AI in relation to due diligence, design and usability specifications and the assessment of risks and benefits.
- Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
- Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, monitoring and auditing…(More)”
Tackling the misinformation epidemic with “In Event of Moon Disaster”
MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”
This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….
Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Francesca Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.