NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.
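
To make the weighting idea concrete, here is a minimal, hypothetical sketch in Python. The factor names, scores, and risk thresholds are illustrative placeholders (not the nine factors NIST proposes); the point is simply that identical factor scores can yield different trust decisions once task risk sets the bar.

```python
# Hypothetical sketch: trust as a weighted combination of factor scores, judged
# against a risk-dependent bar. Factors, weights, and thresholds are placeholders,
# not the nine factors proposed in NISTIR 8332.

def weighted_trust(scores: dict, weights: dict) -> float:
    """Combine per-factor scores (each in [0, 1]) into a single trust score."""
    total = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total

# Two AI systems earn the same scores on every factor ...
scores = {"accuracy": 0.85, "reliability": 0.80, "explainability": 0.60}
weights = {"accuracy": 1.0, "reliability": 1.0, "explainability": 1.0}
trust = weighted_trust(scores, weights)  # 0.75

# ... but the trust a user requires rises with the risk of the task.
required = {"music selection": 0.50, "cancer diagnosis": 0.95}
for task, bar in required.items():
    verdict = "trusted" if trust >= bar else "not trusted"
    print(f"{task}: trust {trust:.2f} vs. required {bar:.2f} -> {verdict}")
```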

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, referred to below as “malattributed comments,” are comments falsely attributed to persons who did not, in fact, submit them. Computer-generated comments are generated not by humans, but rather by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit comments to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there are a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).
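
As one illustration of the processing challenge, an agency might first screen a docket for batches of identical or nearly identical submissions before human review. The sketch below is a minimal, hypothetical approach using pairwise sequence similarity; it is not drawn from the report, and a production system would need more scalable techniques (e.g., text shingling with locality-sensitive hashing).

```python
# Hypothetical sketch: flag pairs of identical or nearly identical comments.
# The O(n^2) pairwise comparison is shown for clarity only; real dockets
# need scalable near-duplicate detection.
from difflib import SequenceMatcher

def near_duplicates(comments, threshold=0.9):
    """Return index pairs of comments whose similarity ratio exceeds the threshold."""
    pairs = []
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            ratio = SequenceMatcher(None, comments[i].lower(), comments[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs

docket = [
    "I oppose the proposed rule because it raises costs for small businesses.",
    "I oppose the proposed rule because it raises costs for small business owners.",
    "The rule's benefits clearly outweigh its costs and it should be finalized.",
]
print(near_duplicates(docket))  # [(0, 1)] -- a candidate mass comment campaign batch
```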

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice and comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

Sandwich Strategy


Article by the Accountability Research Center: “The “sandwich strategy” describes an interactive process in which reformers in government encourage citizen action from below, driving virtuous circles of mutual empowerment between pro-accountability actors in both state and society.

The sandwich strategy relies on mutually reinforcing interaction between pro-reform actors in both state and society, not just initiatives from one or the other arena. The hypothesis is that when reformers in government tangibly reduce the risks/costs of collective action, that process can bolster state-society pro-reform coalitions that collaborate for change. While this process makes intuitive sense, it can follow diverse pathways and encounter many roadblocks. The dynamics, strengths, and limitations of sandwich strategies have not been documented and analyzed systematically. The figure below shows a possible pathway of convergence and conflict between actors for and against change in both state and society….(More)”.

[Figure: sandwich strategy]

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent Van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Crisis Innovation Policy from World War II to COVID-19


Paper by Daniel P. Gross & Bhaven N. Sampat: “Innovation policy can be a crucial component of governments’ responses to crises. Because speed is a paramount objective, crisis innovation may also require different policy tools than innovation policy in non-crisis times, raising distinct questions and tradeoffs. In this paper, we survey the U.S. policy response to two crises where innovation was crucial to a resolution: World War II and the COVID-19 pandemic. After providing an overview of the main elements of each of these efforts, we discuss how they compare, and to what degree their differences reflect the nature of the central innovation policy problems and the maturity of the U.S. innovation system. We then explore four key tradeoffs for crisis innovation policy—top-down vs. bottom-up priority setting, concentrated vs. distributed funding, patent policy, and managing disruptions to the innovation system—and provide a logic for policy choices. Finally, we describe the longer-run impacts of the World War II effort and use these lessons to speculate on the potential long-run effects of the COVID-19 crisis on innovation policy and the innovation system….(More)”.

Bridging the global digital divide: A platform to advance digital development in low- and middle-income countries


Paper by George Ingram: “The world is in the midst of a fast-moving, Fourth Industrial Revolution (also known as 4IR or Industry 4.0), driven by digital innovation in the use of data, information, and technology. This revolution is affecting everything from how we communicate, to where and how we work, to education and health, to politics and governance. COVID-19 has accelerated this transformation as individuals, companies, communities, and governments move to virtual engagement. We are still discovering the advantages and disadvantages of a digital world.

This paper outlines an initiative that would allow the United States, along with a range of public and private partners, to seize the opportunity to reduce the digital divide between nations and people in a way that benefits inclusive economic advancement in low- and middle-income countries, while also advancing the economic and strategic interests of the United States and its partner countries.

As life increasingly revolves around digital technologies and innovation, countries are in a race to digitalize at a speed that threatens to leave behind the less advantaged—countries and underserved groups. Data in this paper documents the scope of the digital divide. With the Sustainable Development Goals (SDGs), the world committed to reduce poverty and advance all aspects of the livelihood of nations and people. Countries that fail to progress along the path to 5G broadband cellular networks will be unable to unlock the benefits of the digital revolution and be left behind. Donors are recognizing this and offering solutions, but in a one-off, disconnected fashion. Absent a comprehensive partnership approach that takes advantage of the comparative advantage of each, these well-intended efforts will not aggregate to the scale and speed required by the challenge….(More)”.

Introducing the AI Localism Repository


The GovLab: “Artificial intelligence is here to stay. As this technology advances—both in its complexity and ubiquity across our societies—decision-makers must address the growing nuances of AI regulation and oversight. Early last year, The GovLab’s Stefaan Verhulst and Mona Sloane coined the term “AI localism” to describe how local governments have stepped up to regulate AI policies, design governance frameworks, and monitor AI use in the public sector. 

While top-level regulation remains scant, many municipalities have taken to addressing AI use in their communities. Today, The GovLab is proud to announce the soft launch of the AI Localism Repository. This living platform is a curated collection of AI localism initiatives across the globe, categorized by geographic regions, types of technological and governmental innovation in AI regulation, mechanisms of governance, and sector focus. 

We invite visitors to explore this repository and learn more about the inventive measures cities are taking to control how, when, and why AI is being used by public authorities. We also welcome additional case study submissions, which can be sent to us via Google Form….(More)”
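
As a rough illustration of how entries in such a repository might be organized, here is a hypothetical sketch keyed to the categories named above. The field names and the example values are inferred for illustration only; they are not The GovLab’s actual schema.

```python
# Hypothetical sketch of one repository entry; field names are inferred from the
# categories described above, not taken from the AI Localism Repository itself.
from dataclasses import dataclass, field

@dataclass
class AILocalismEntry:
    city: str
    region: str                  # geographic region
    mechanism: str               # mechanism of governance (e.g., registry, ban, audit)
    sector: str                  # sector focus (e.g., policing, procurement)
    innovation_types: list = field(default_factory=list)

entry = AILocalismEntry(
    city="Amsterdam",
    region="Europe",
    mechanism="algorithm register",
    sector="public services",
    innovation_types=["transparency", "public-sector oversight"],
)
print(entry)
```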

Privacy Tech’s Third Generation


“A Review of the Emerging Privacy Tech Sector” by Privacy Tech Alliance and Future of Privacy Forum: “As we enter the third phase of development of the privacy tech market, purchasers are demanding more integrated solutions, product offerings are more comprehensive, and startup valuations are higher than ever, according to a new report from the Future of Privacy Forum and Privacy Tech Alliance. These factors are leading to companies providing a wider range of services, acting as risk management platforms, and focusing on support of business outcomes.

According to the report, “Privacy Tech’s Third Generation: A Review of the Emerging Privacy Tech Sector,” regulations are often the biggest driver for buyers’ initial privacy tech purchases. Organizations also are deploying tools to mitigate potential harms from the use of data. However, buyers serving global markets increasingly need privacy tech that offers data availability and control and supports its utility, in addition to regulatory compliance. 

The report finds the COVID-19 pandemic has accelerated global marketplace adoption of privacy tech as dependence on digital technologies grows. Privacy is becoming a competitive differentiator in some sectors, and TechCrunch reports that 200+ privacy startups have together raised more than $3.5 billion over hundreds of individual rounds of funding….(More)”.

Privacy and Data Protection in Academia


Report by IAPP: “Today, demand for qualified privacy professionals is surging. Soon, societal, business and government needs for practitioners with expertise in the legal, technical and business underpinnings of data protection could far outstrip supply. To fill this gap, universities around the world are adding privacy curricula in their law, business and computer science schools. The IAPP’s Westin Research Center has catalogued these programs with the aim of promoting, catalyzing and supporting academia’s growing efforts to build an on-ramp to the privacy profession.

The information presented in our inaugural issue of “Privacy and Data Protection in Academia, A Global Guide to Curricula” represents the results of our publicly available survey. The programs included are those that voluntarily completed the survey. The IAPP then organized the information provided, and the designated contact at each institution verified the accuracy of the information presented.

This is not a comprehensive list of colleges and universities offering privacy and data protection related curricula. We encourage higher education institutions interested in being included to complete the survey, as the IAPP will periodically publish updates….(More)”.

Confronting Bias: BSA’s Framework to Build Trust in AI


BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:
– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.
– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.
– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”
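
To make the audit-trail idea concrete, here is one hypothetical way such lifecycle documentation might be recorded so it can be replayed after an incident. The record fields and the example entry are illustrative assumptions, not a format prescribed by the BSA Framework.

```python
# Hypothetical sketch of lifecycle documentation usable as an audit trail.
# The fields are illustrative; the BSA Framework does not prescribe a record format.
import json
from datetime import datetime, timezone

def log_decision(trail: list, stage: str, decision: str, owner: str, rationale: str) -> None:
    """Append a timestamped, attributable record of one risk-management decision."""
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,          # e.g., design, training data, validation, deployment
        "decision": decision,
        "owner": owner,          # who is accountable for this step
        "rationale": rationale,  # why, so reviewers can retrace the reasoning later
    })

trail = []
log_decision(trail, "training data", "excluded ZIP code as an input feature", "data team",
             "flagged in bias review as a likely proxy for protected characteristics")
print(json.dumps(trail, indent=2))  # a replayable record for incident diagnosis
```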