Not fit for Purpose: A critical analysis of the ‘Five Safes’


Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as a policy instrument or as embodied in legislation, the ‘Five Safes’ framework aims to manage risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: from being disconnected from existing legal protections and appropriating notions of safety without providing any means to prefer strong technical measures, to viewing disclosure risk as static through time and not requiring repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of public interest….(More)”.

Putting Games to Work in the Battle Against COVID-19


Sara Frueh at the National Academies: “While video games often give us a way to explore other worlds, they can also help us learn more about our own — including how to navigate a pandemic. That was the premise underlying “Jamming the Curve,” a competition that enlisted over 400 independent video game developers around the world to develop concepts for games that reflect the real-world dynamics of COVID-19.

“Games can help connect our individual actions to larger-scale impact … and help translate data into engaging stories,” said Rick Thomas, associate program officer of LabX, a program of the National Academy of Sciences that supports creative approaches to public engagement.

Working with partners IndieCade and Georgia Tech, LabX brought Jamming the Curve to life over two weeks in September.

The “game jam” generated over 50 game concepts that drop players into a wide array of roles — from a subway rider trying to minimize the spread of infection among passengers, to a grocery store cashier trying to help customers while avoiding COVID-19, to a fox ninja tasked with dispensing masks to other forest creatures.

The five winning game concepts (see below) were announced at an award ceremony in late October, where each winning team was given a $1,000 prize and the chance to compete for a $20,000 grant to develop their game further.

The power of games

“Sometimes public health concepts can be a little dry,” said Carla Alvarado, a public health expert and program officer at the National Academies who served as a judge for the competition, during the awards ceremony. “Games package that information — it’s bite-sized, it’s digestible, and it’s palatable.”

And because games engage the senses and involve movement, they help people remember what they learn, she said. “That type of learning — experiential learning — helps retain a lot of the concepts.”

The idea of doing a game jam around COVID-19 began when Janet Murray of Georgia Tech reached out to Stephanie Barish and her colleagues at IndieCade about games’ potential to help express the complicated data around the disease. “Not everybody really knows how to look at all of that information, and games are so wonderful at reaching people in ways that people understand,” Barish said.

Rick Thomas and the LabX team heard about the idea for Jamming the Curve and saw how they could contribute. The program had experience organizing other game projects around role-playing and storytelling — along with access to a range of scientists and public health experts through the National Academies’ networks.

“Given the high stakes of the topic around COVID-19 and the amount of misinformation around the pandemic, we really needed to make sure that we were doing this right when it came to creating these games,” said Thomas. LabX helped to recruit public health professionals involved in the COVID-19 response, as well as experts in science communication and risk perception, to serve as mentors to the game developers.

Play the Winning Games!

Trailers and some playable prototypes for the five winning game concepts can be found online:

  • Everyday Hero, in which players work to stop the spread of COVID-19 through measures such as social distancing and mask use
  • PandeManager, which gives players the job of a town’s mayor who must slow the spread of disease among citizens
  • Lab Hero, in which users play a first responder who is working hard to find a vaccine while following proper health protocols
  • Cat Colony Crisis, in which a ship of space-faring cats must deal with a mysterious disease outbreak
  • Outbreak in Space, which challenges players to save friends and family from a spreading epidemic in an alien world

All of the games submitted to Jamming the Curve can be found at itch.io.

The games needed to be fun as well as scientifically accurate — and so IndieCade, Georgia Tech, and Seattle Indies recruited gaming experts who could advise participants on how to make their creations engaging and easy to understand….(More)”.

NIH Releases New Policy for Data Management and Sharing


NIH Blogpost by Carrie Wolinetz: “Today, nearly twenty years after the publication of the Final NIH Statement on Sharing Research Data in 2003, we have released a Final NIH Policy for Data Management and Sharing. This represents the agency’s continued commitment to share and make broadly available the results of publicly funded biomedical research. We hope it will be a critical step in moving towards a culture change, in which data management and sharing is seen as integral to the conduct of research. Responsible data management and sharing is good for science; it maximizes availability of data to the best and brightest minds, underlies reproducibility, honors the contributions of human participants by ensuring their data is both protected and fully utilized, and provides an element of transparency to ensure public trust and accountability.

This policy has been years in the making and has benefited enormously from feedback and input from stakeholders throughout the process. We are grateful to all those who took the time to comment on the Request for Information, the Draft policy, or to participate in workshops or Tribal consultations. That thoughtful feedback has helped shape the Final policy, which we believe strikes a balance between reasonable expectations for data sharing and flexibility to allow for a diversity of data types and circumstances. How we incorporated public comments and decision points that led to the Final policy are detailed in the Preamble to the DMS policy.

The Final Policy applies to all research funded or conducted by NIH that results in the generation of scientific data. It has two main requirements: (1) the submission of a Data Management and Sharing Plan (Plan); and (2) compliance with the approved Plan. We are asking for Plans at the time of submission of the application, because we believe planning and budgeting for data management and sharing need to occur hand in hand with planning the research itself. NIH recognizes that science evolves throughout the research process, which is why we have built in the ability to update DMS Plans, but at the end of the day, we are expecting investigators and institutions to be accountable to the Plans they have laid out for themselves….

Anticipating that variation in readiness, and in recognition of the cultural change we are trying to seed, there is a two-year implementation period. This time will be spent developing the information, support, and tools that the biomedical enterprise will need to comply with this new policy. NIH has already provided additional supplementary information – on (1) elements of a data management and sharing plan; (2) allowable costs; and (3) selecting a data repository – in concert with the policy release….(More)”
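To make the Plan requirement concrete, here is a minimal sketch of a DMS Plan represented as structured data. The keys paraphrase the recommended plan elements in NIH's supplementary information; the values are placeholders invented for illustration, not NIH-prescribed content.

```python
# Minimal sketch of a Data Management and Sharing Plan as structured data. The keys
# paraphrase the recommended plan elements in NIH's supplementary information; the
# values are placeholders invented for illustration, not NIH-prescribed content.
dms_plan = {
    "data_type": "De-identified survey responses and derived summary statistics",
    "related_tools_software_code": "Analysis scripts in R; no proprietary software required",
    "standards": "Variables documented in a public data dictionary",
    "preservation_access_timelines": "Deposited in a public repository no later than publication",
    "access_distribution_reuse_considerations": "Controlled access for record-level data",
    "oversight_of_data_management_and_sharing": "PI reviews compliance annually with the institution",
}

for element, description in dms_plan.items():
    print(f"{element}: {description}")
```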

The CARE Principles for Indigenous Data Governance


Paper by Stephanie Russo Carroll et al: “Concerns about secondary use of data and limited opportunities for benefit-sharing have focused attention on the tension that Indigenous communities feel between (1) protecting Indigenous rights and interests in Indigenous data (including traditional knowledges) and (2) supporting open data, machine learning, broad data sharing, and big data initiatives. The International Indigenous Data Sovereignty Interest Group (within the Research Data Alliance) is a network of nation-state-based Indigenous data sovereignty networks and individuals that developed the ‘CARE Principles for Indigenous Data Governance’ (Collective Benefit, Authority to Control, Responsibility, and Ethics) in consultation with Indigenous Peoples, scholars, non-profit organizations, and governments. The CARE Principles are people- and purpose-oriented, reflecting the crucial role of data in advancing innovation, governance, and self-determination among Indigenous Peoples. The Principles complement the existing data-centric approach represented in the ‘FAIR Guiding Principles for scientific data management and stewardship’ (Findable, Accessible, Interoperable, Reusable). The CARE Principles build upon earlier work by the Te Mana Raraunga Maori Data Sovereignty Network, US Indigenous Data Sovereignty Network, Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective, and numerous Indigenous Peoples, nations, and communities. The goal is that stewards and other users of Indigenous data will ‘Be FAIR and CARE.’ In this first formal publication of the CARE Principles, we articulate their rationale, describe their relation to the FAIR Principles, and present examples of their application….(More)” See also Selected Readings on Indigenous Data Sovereignty.

Your phone already tracks your location. Now that data could fight voter suppression


Article by Seth Rosenblatt: “Smartphone location data is a dream for marketers who want to know where you go and how long you spend there—and a privacy nightmare. But this kind of geolocation data could also be used to protect people’s voting rights on Election Day.

The newly founded nonprofit Center for New Data is now tracking voters at the polls using smartphone location data to help researchers understand how easy—or difficult—it is for people to vote in different places. Called the Observing Democracy project, the nonpartisan effort is making available, in a privacy-friendly way, data on how far people have to travel to vote and how long they have to wait in line, so it can be used to craft election policies that ensure voting is accessible for everyone.

Election data has already fueled changes in various municipalities and states. A 66-page lawsuit filed by Fair Fight Action against the state of Georgia in the wake of Stacey Abrams’s narrow loss to Brian Kemp in the 2018 gubernatorial race relies heavily on data to back its assertions of unconstitutionally delayed and deferred voter registration, unfair challenges to absentee and provisional ballots, and unjustified purges of voter rolls—all hallmarks of voter suppression.

The promise of Observing Democracy is to make this type of impactful data available much more rapidly than ever before. Barely a month old, Observing Democracy isn’t wasting any time: Its all-volunteer staffers will be receiving data potentially as soon as Nov. 4 on voter wait times at polling locations, travel times to polling stations, and how frequently ballot drop-off boxes are visited, courtesy of location-data mining companies X-Mode Social and Veraset, which was spun off from SafeGraph….(More)”.
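The article does not describe Observing Democracy's actual methodology, but the general technique it points to, estimating wait times from anonymized location pings near a polling place, can be sketched roughly as below. The Ping record, geofence radius, and coordinates are all hypothetical.

```python
# Rough sketch of estimating time spent at a polling place from anonymized location
# pings. Illustrative only: the article does not describe Observing Democracy's actual
# method, and the identifiers, geofence radius, and coordinates are hypothetical.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt
from typing import List


@dataclass
class Ping:
    device_id: str    # pseudonymous identifier
    timestamp: float  # Unix seconds
    lat: float
    lon: float


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius_m = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_m * asin(sqrt(a))


def dwell_minutes(pings: List[Ping], poll_lat: float, poll_lon: float, radius_m: float = 75.0) -> float:
    """Crude wait-time estimate: span between a device's first and last ping inside the geofence."""
    inside = sorted(
        p.timestamp for p in pings
        if haversine_m(p.lat, p.lon, poll_lat, poll_lon) <= radius_m
    )
    return (inside[-1] - inside[0]) / 60.0 if len(inside) >= 2 else 0.0


# One hypothetical device seen near a polling place across roughly 40 minutes.
example = [
    Ping("anon-1", 1_604_500_000, 33.7490, -84.3880),
    Ping("anon-1", 1_604_501_200, 33.7491, -84.3881),
    Ping("anon-1", 1_604_502_400, 33.7489, -84.3879),
]
print(round(dwell_minutes(example, 33.7490, -84.3880), 1), "minutes")
```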

To mitigate the costs of future pandemics, establish a common data space


Article by Stephanie Chin and Caitlin Chin: “To improve data sharing during global public health crises, it is time to explore the establishment of a common data space for highly infectious diseases. Common data spaces integrate multiple data sources, enabling a more comprehensive analysis of data based on greater volume, range, and access. At its essence, a common data space is like a public library system, which has collections of different types of resources, from books to video games; processes to integrate new resources and to borrow resources from other libraries; a catalog system to organize, sort, and search through resources; a library card system to manage users and authorization; and even curated collections or displays that highlight themes among resources.
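As a rough illustration of the library metaphor above, and nothing more, the toy sketch below wires together a catalog of datasets from multiple sources with a simple "library card" authorization check. The classes and fields are invented for this example and do not follow any existing data-space specification.

```python
# Toy sketch of the library metaphor: a catalog of datasets from multiple sources plus
# a simple "library card" authorization check. Everything here is invented for
# illustration and does not follow any existing data-space standard.
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class Dataset:
    source: str          # contributing organization (a "library branch")
    name: str
    tags: List[str]
    access_level: str    # "open" or "restricted"


@dataclass
class CommonDataSpace:
    catalog: List[Dataset] = field(default_factory=list)
    authorized_users: Set[str] = field(default_factory=set)  # "library cards"

    def register(self, dataset: Dataset) -> None:
        """Integrate a new resource into the shared catalog."""
        self.catalog.append(dataset)

    def search(self, tag: str, user: Optional[str] = None) -> List[Dataset]:
        """Return catalog entries matching a tag that the user is allowed to see."""
        return [
            d for d in self.catalog
            if tag in d.tags
            and (d.access_level == "open" or user in self.authorized_users)
        ]


space = CommonDataSpace(authorized_users={"who-analyst"})
space.register(Dataset("Ministry of Health", "case_counts", ["covid-19", "incidence"], "open"))
space.register(Dataset("Hospital network", "line_list", ["covid-19", "clinical"], "restricted"))
print([d.name for d in space.search("covid-19")])                      # open datasets only
print([d.name for d in space.search("covid-19", user="who-analyst")])  # restricted included
```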

Even before the COVID-19 pandemic, there was significant momentum to make critical data more widely accessible. In the United States, Title II of the Foundations for Evidence-Based Policymaking Act of 2018, or the OPEN Government Data Act, requires federal agencies to publish their information online as open data, using standardized, machine-readable data formats. This information is now available on the federal data.gov catalog and includes 50 state- or regional-level data hubs and 47 city- or county-level data hubs. In Europe, the European Commission released a data strategy in February 2020 that calls for common data spaces in nine sectors, including healthcare, shared by EU businesses and governments.
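The federal catalog mentioned above, catalog.data.gov, is built on CKAN, so its datasets can be searched programmatically with CKAN's standard package_search action. The sketch below assumes that endpoint and the usual CKAN response shape, which should be verified before relying on it.

```python
# Sketch of querying the federal catalog programmatically. catalog.data.gov is CKAN-based,
# so its standard package_search action should work as below; treat the exact endpoint
# and response fields as assumptions to verify before relying on them.
import requests


def search_datasets(query: str, rows: int = 5):
    """Return (title, publishing organization) pairs for datasets matching a query."""
    resp = requests.get(
        "https://catalog.data.gov/api/3/action/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json()["result"]["results"]
    return [
        (d.get("title", ""), (d.get("organization") or {}).get("title", ""))
        for d in results
    ]


if __name__ == "__main__":
    for title, org in search_datasets("covid-19 cases"):
        print(f"{org}: {title}")
```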

Going further, a common data space could help identify outbreaks and accelerate the development of new treatments by compiling line list incidence data, epidemiological information and models, genome and protein sequencing, testing protocols, results of clinical trials, passive environmental monitoring data, and more.
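A line list is simply one record per case; the sketch below shows a minimal, illustrative record layout and a trivial aggregation of the kind a common data space could expose. The field names are loosely modeled on common epidemiological line lists rather than any specific reporting standard.

```python
# Minimal, illustrative layout for line-list incidence data (one record per case),
# plus a trivial aggregation of the kind a common data space could expose. Field names
# are loosely modeled on common epidemiological line lists, not on a specific standard.
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class LineListRecord:
    case_id: str
    report_date: date
    onset_date: Optional[date]   # may be unknown at reporting time
    age_group: str               # e.g., "30-39"
    location: str                # reporting jurisdiction
    outcome: Optional[str]       # "recovered", "deceased", or None while pending


cases = [
    LineListRecord("C-001", date(2020, 11, 1), date(2020, 10, 27), "30-39", "Region A", None),
    LineListRecord("C-002", date(2020, 11, 2), None, "60-69", "Region B", "recovered"),
    LineListRecord("C-003", date(2020, 11, 2), date(2020, 10, 30), "20-29", "Region A", None),
]

# Daily incidence by report date, a shareable aggregate derived from the line list.
print(Counter(c.report_date for c in cases))
```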

Moreover, it could foster a common understanding and consensus around the facts—a prerequisite for reaching international buy-in on policies to address situations unique to COVID-19 or future pandemics, such as the distribution of medical equipment and PPE, disruption to the tourism industry and global supply chains, social distancing or quarantine, and mass closures of businesses….(More)”. See also Call for Action for a Data Infrastructure to tackle Pandemics and other Dynamic Threats.

The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerized, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage, was subjected to successful legal challenge, and after being declared unlawful, the Government paid back all of the payments that had been received, and eventually, after much prompting, issued an apology.
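The essay does not walk through the arithmetic, but a widely reported source of the false debts was income averaging: annual tax-office income was spread evenly across fortnights and compared with what recipients had declared fortnight by fortnight. The sketch below illustrates that failure mode with invented figures; it is not the actual Centrelink calculation.

```python
# Illustration of why averaging produces false debts: a recipient who worked only half
# the year and declared income honestly each fortnight still looks "under-declared" in
# the fortnights without work once the annual figure is averaged across all 26 fortnights.
# All numbers are invented; this is not the actual Centrelink calculation.

FORTNIGHTS_PER_YEAR = 26


def averaged_fortnightly_income(annual_income: float) -> float:
    """Spread an annual tax-office income figure evenly across fortnights."""
    return annual_income / FORTNIGHTS_PER_YEAR


def flag_discrepancies(declared_by_fortnight, annual_income: float, tolerance: float = 1.0):
    """Return the fortnights where declared income falls below the crude average."""
    average = averaged_fortnightly_income(annual_income)
    return [
        i for i, declared in enumerate(declared_by_fortnight)
        if declared + tolerance < average
    ]


# Worked half the year at $2,000 per fortnight, then nothing: annual income of $26,000.
declared = [2000.0] * 13 + [0.0] * 13
flags = flag_discrepancies(declared, annual_income=26000.0)
print(f"Imputed average per fortnight: ${averaged_fortnightly_income(26000.0):.2f}")
print(f"Fortnights wrongly flagged as under-declared: {len(flags)}")  # the 13 with no work
```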

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base and expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) over-ride any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence….(More)”.

How to Use the Bureaucracy to Govern Well


Good Governance Paper by Rebecca Ingber: “…Below I offer four concrete recommendations for deploying Intentional Bureaucratic Architecture within the executive branch. But first, I will establish three key background considerations that provide context for these recommendations. The focus of this piece is primarily executive branch legal decisionmaking, but many of these recommendations apply equally to other areas of policymaking.

First, make room for the views and expertise of career officials. As a political appointee entering a new office, ask those career officials: What are the big issues on the horizon on which we will need to take policy or legal views? What are the problems with the positions I am inheriting? What is and is not working? Where are the points of conflict with our allies abroad or with Congress? Career officials are the institutional memory of the government and often the only real experts in the specific work of their agency. They will know about the skeletons in the closet and where the bodies are buried and all the other metaphors for knowing things that other people do not. Turn to them early. Value them. They will have views informed by experience rather than partisan politics. But all bureaucratic actors, including civil servants, also bring to the table their own biases, and they may overvalue the priorities of their own office over others. Valuing their role does not mean handing the reins over to the civil service—good governance requires exercising judgment and balancing the benefits of experience and expertise with fresh eyes and leadership. A savvy bureaucratic actor might know how to “get around” the bureaucratic roadblocks, but the wise bureaucratic player also knows how much the career bureaucracy has to offer and exercises judgment based in clear values about when to defer and when to overrule.

Second, get ahead of decisions: choose vehicles for action carefully and early. The reality of government life is that much of the big decisionmaking happens in the face of a fire drill. As I’ve written elsewhere, the trigger or “interpretation catalyst” that compels the government to consider and assert a position—in other words, the cause of that fire drill—shapes the whole process of decisionmaking and the resulting decision. When an issue arises in defensive litigation, a litigation-driven process controls.  That means that career line attorneys shape the government’s legal posture, drawing from longstanding positions and often using language from old briefs. DOJ calls the shots in a context biased toward zealous defense of past action. That looks very different from a decisionmaking process that results from the president issuing an executive order or presidential memorandum, a White House official deciding to make a speech, the State Department filing a report with a treaty body, or DOD considering whether to engage in an operation involving force. Each of these interpretation catalysts triggers a different process for decisionmaking that will shape the resulting outcome.  But because of the stickiness of government decisions—and the urgent need to move on to the next fire drill—these positions become entrenched once taken. That means that the process and outcome are driven by the hazards of external events, unless officials find ways to take the reins and get ahead of them.

And finally, an incoming administration must put real effort into Intentional Bureaucratic Architecture by deliberately and deliberatively creating and managing the bureaucratic processes in which decisionmaking happens. Novel issues arise and fire drills will inevitably happen in even the best prepared administrations. The bureaucratic architecture will dictate how decisionmaking happens from the novel crises to the bread and butter of daily agency work. There are countless varieties of decisionmaking models inside the executive branch, which I have classified in other work. These include a unitary decider model, of which DOJ’s Office of Legal Counsel (OLC) is a prime example, an agency decider model, and a group lawyering model. All of these models will continue to co-exist. Most modern national security decisionmaking engages the interests and operations of multiple agencies. Therefore, in a functional government, most of these decisions will involve group lawyering in some format—from agency lawyers picking up the phone to coordinate with counterparts in other agencies to ad hoc meetings to formal regularized working groups with clear hierarchies all the way up to the cabinet. Often these processes evolve organically, as issues arise. Some are created from the top down by presidential administrations that want to impose order on the process. But all of these group lawyering dynamics often lack a well-defined process for determining the outcome in cases of conflict or deciding how to establish a clear output. This requires rule setting and organizing the process from the top down….(More)”.

Tracking COVID-19: U.S. Public Health Surveillance and Data


CRS Report: “Public health surveillance, or ongoing data collection, is an essential part of public health practice. Particularly during a pandemic, timely data are important to understanding the epidemiology of a disease in order to craft policy and guide response decision making. Many aspects of public health surveillance—such as which data are collected and how—are often governed by law and policy at the state and sub-federal level, though informed by programs and expertise at the Centers for Disease Control and Prevention (CDC). The Coronavirus Disease 2019 (COVID-19) pandemic has exposed limitations and challenges with U.S. public health surveillance, including those related to the timeliness, completeness, and accuracy of data.

This report provides an overview of U.S. public health surveillance, current COVID-19 surveillance and data collection, and selected policy issues that have been highlighted by the pandemic. Appendix B includes a compilation of selected COVID-19 data resources….(More)”.

AI’s Wide Open: A.I. Technology and Public Policy


Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.

AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.
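As a rough illustration of the first and third points (many signals, and output legible to the last person in the chain), the sketch below fuses redundant sensor readings and declines to act, with a human-readable explanation, when the sensors disagree; a single-sensor design has no such safeguard. The thresholds and readings are hypothetical and not drawn from the paper.

```python
# Illustrative only: fusing several redundant sensor readings and declining to act,
# with a human-readable explanation, when they disagree, rather than trusting a single
# input. Thresholds and readings are hypothetical and not drawn from the paper.
from statistics import median
from typing import List, Tuple


def fused_decision(readings: List[float], threshold: float, max_spread: float) -> Tuple[str, str]:
    """Return (action, explanation) based on the median of redundant readings."""
    spread = max(readings) - min(readings)
    if spread > max_spread:
        return ("defer to human", f"sensors disagree by {spread:.1f}; allowed spread is {max_spread}")
    value = median(readings)
    action = "intervene" if value > threshold else "no action"
    return (action, f"median of {len(readings)} sensors = {value:.1f} vs threshold {threshold}")


print(fused_decision([4.8, 5.1, 5.0], threshold=10.0, max_spread=2.0))   # healthy agreement
print(fused_decision([4.9, 5.2, 21.0], threshold=10.0, max_spread=2.0))  # one faulty sensor -> defer
```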