Artificial Intelligence in the COVID-19 Response


Report by Sean Mann, Carl Berdahl, Lawrence Baker, and Federico Girosi: “We conducted a scoping review to identify AI applications used in the clinical and public health response to COVID-19. Interviews with stakeholders early in the research process helped inform our research questions and guide our study design. We conducted a systematic search, screening, and full-text review of both academic and gray literature…

  • AI is still an emerging technology in health care, with growing but modest rates of adoption in real-world clinical and public health practice. The COVID-19 pandemic showcased the wide range of clinical and public health functions performed by AI as well as the limited evidence available on most AI products that have entered use.
  • We identified 66 AI applications (full list in Appendix A) used to perform a wide range of diagnostic, prognostic, and treatment functions in the clinical response to COVID-19. This included applications used to analyze lung images, evaluate user-reported symptoms, monitor vital signs, predict infections, and aid in breathing tube placement. Some applications were used by health systems to help allocate scarce resources to patients.
  • Many clinical applications were deployed early in the pandemic, and most were used in the United States, other high-income countries, or China. A few applications were used to care for hundreds of thousands or even millions of patients, although most were used to an unknown or limited extent.
  • We identified 54 AI-based public health applications used in the pandemic response. These included AI-enabled cameras used to monitor health-related behavior and health messaging chatbots used to answer questions about COVID-19. Other applications were used to curate public health information, produce epidemiologic forecasts, or help prioritize communities for vaccine allocation and outreach efforts.
  • We found studies supporting the use of 39 clinical applications and 8 public health applications, although few of these were independent evaluations, and we found no clinical trials evaluating any application’s impact on patient health. We found little evidence available on entire classes of applications, including some used to inform care decisions such as patient deterioration monitors.
  • Further research is needed, particularly independent evaluations of application performance and health impacts in real-world care settings. New guidance may be needed to overcome the unique challenges of evaluating the impacts of AI applications on patient- and population-level health outcomes…(More)” – See also: The #Data4Covid19 Review

How We Ruined The Internet


Paper by Micah Beck and Terry Moore: “At the end of the 19th century the logician C.S. Peirce coined the term “fallibilism” for “…the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy”. In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those that deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions…(More)”.

The History of Rules


Interview with Lorraine Daston: “The rules book began with an everyday observation of the dazzling variety and ubiquity of rules. Every culture has rules, but they’re all different.

I eventually settled on three major meanings of rules: rules as laws, rules as algorithms, and finally, rules as models. The last of these meanings was predominant in the Western tradition until the end of the 18th century, and I set out to trace what happened to rules as models, but also the rise of algorithmic rules. It’s hard to imagine now, but the word algorithm didn’t even have an entry in the most comprehensive mathematical encyclopedias of the late 19th century.

To get at these changes over time, I cast my nets very wide. I looked at cookbooks, I looked at the rules of warfare. I looked at rules of games. I looked at rules of monastic orders and traffic regulations, sumptuary regulations, spelling rules, and of course algorithms for how to calculate. And if there’s one take-home message from the book, it is a distinction between thick and thin rules.

Thick rules are rules that come upholstered with all manner of qualifications, examples, caveats, and exceptions. They are rules that are braced to confront a world in which recalcitrant particulars refuse to conform to universals—as opposed to thin rules, of which algorithms are perhaps the best prototype: thin rules are formulated without attention to circumstances. Thin rules give no quarter; they offer no sense of a variable world. Many bureaucratic rules, especially bureaucratic rules in their Kafkaesque exaggeration, also fit this description.

The arc of the book is not to describe how thick rules became thin rules (because we still have thick and thin rules around us all the time), but rather to determine the point at which thick rules become necessary—when you must anticipate high variability and therefore must tweak your rule to fit circumstances—as opposed to the stable, predictable settings in which we turn to thin rules.

In some historically exceptional cases, thin rules can actually get a job done because the context can be standardized and stabilized…(More)”.

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step with the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with actual needs for various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses…(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—through learning patterns from pre-existing data.
It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

A Global Digital Compact — an Open, Free and Secure Digital Future for All


UN Secretary General: “…The present brief proposes the development of a Global Digital Compact that would set out principles, objectives and actions for advancing an open, free, secure and human-centred digital future, one that is anchored in universal human rights and that enables the attainment of the Sustainable Development Goals. It outlines areas in which the need for multi-stakeholder digital cooperation is urgent and sets out how a Global Digital Compact can help to realize the commitment in the declaration on the commemoration of the seventy-fifth anniversary of the United Nations (General Assembly resolution 75/1) to “shaping a shared vision on digital cooperation” by providing an inclusive global framework. Such a framework is essential for the multi-stakeholder action required to overcome digital, data and innovation divides and to achieve the governance required for a sustainable digital future.
Our digital world is one of divides. In 2002, when governments first recognized the challenge of the digital divide, 1 billion people had access to the Internet. Today, 5.3 billion people are digitally connected, yet the divide persists across regions, gender, income, language, and age groups. Some 89 per cent of people in Europe are online, but only 21 per cent of women in low-income countries use the Internet. While digitally deliverable services now account for almost two thirds of global services trade, access is unaffordable in some parts of the world. The cost of a smartphone in South Asia and sub-Saharan Africa is more than 40 per cent of the average monthly income, and African users pay more than three times the global average for mobile data. Fewer than half of the world’s countries track digital skills, and the data that exist highlight the depth of digital learning gaps. Two decades after the World Summit on the Information Society, the digital divide is still a gulf.

Data divides are also growing. As data are collected and used in digital applications, they generate huge commercial and social value. While monthly global data traffic is forecast to grow by more than 400 per cent by 2026, activity is concentrated among a few global players. Many developing countries are at risk of becoming mere providers of raw data while having to pay for the services that their data help to produce…(More)”.

Global Data Stewardship


Online Course by Stefaan G. Verhulst: “Creating a systematic and sustainable data access program is critical for data stewardship. What you do with your data, how you reuse it, and how you make it available to the general public can help others reimagine what’s possible for data sharing and cross-sector data collaboration. In this course, instructor Stefaan Verhulst shows you how to develop and manage data reuse initiatives as a competent and responsible global data steward.

Following the insights of current research and practical, real-world examples, learn about the growing importance of data stewardship, data supply, and data demand to understand the value proposition and societal case for data reuse. Get tips on designing and implementing data collaboration models, governance frameworks, and infrastructure, as well as best practices for measuring, sunsetting, and supporting data reuse initiatives. Upon completing this course, you’ll be ready to start applying your new skill set and continue your data stewardship learning journey…(More)”

Rethinking democracy for the age of AI


Keynote speech by Bruce Schneier: “There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion…(More)”

The 2023 State of UserCentriCities


Report by UserCentriCities: “Did you know that Rotterdam employs 25 service designers and a user-interface lab? That the property tax payment in Bratislava is reviewed and improved every year? That Ghent automatically offers school benefits to families in need, using data held by different levels of administration? That Madrid processed 70% of registrations in digital form in 2022, up from 23% in 2019? That Kyiv, despite the challenges of war, has continuously updated its city app, adding new services daily for citizens in need, such as a map of bomb shelters and heating points? Based on data gathered from the UserCentriCities Dashboard, UserCentriCities launches The 2023 State of UserCentriCities: How Cities and Regions are Delivering Effective Services by Putting Citizens’ Needs at the Centre, an analysis of the performance of European cities and regions against 41 indicators inspired by The 2017 Tallinn Declaration on eGovernment…(More)”.