The NIST Trustworthy and Responsible Artificial Intelligence Resource Center


About: “The NIST Trustworthy and Responsible Artificial Intelligence Resource Center (AIRC) is a platform to support people and organizations in government, industry, and academia—both in the U.S. and internationally—driving technical and scientific innovation in AI. It serves as a one-stop-shop for foundational content, technical documents, and AI toolkits such as a repository hub for standards, measurement methods and metrics, and data sets. It also provides a common forum for all AI actors to engage and collaborate in the development and deployment of trustworthy and responsible AI technologies that benefit all people in a fair and equitable manner.

The NIST AIRC was developed to support and operationalize the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying playbook. To match the complexity of AI technology, the AIRC will grow over time to provide an engaging interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials, and technical guidance related to AI risk management.

The initial release of the AIRC (airc.nist.gov) provides access to the foundational content, including the AI RMF 1.0, the playbook, and a trustworthy and responsible AI glossary. It is anticipated that in the coming months enhancements to the AIRC will include structured access to relevant technical and policy documents; access to a standards hub that connects various standards promoted around the globe; a metrics hub to assist in test, evaluation, verification, and validation of AI; as well as software tools, resources and guidance that promote trustworthy and responsible AI development and use. Visitors to the AIRC will be able to tailor the above content they see based on their requirements (organizational role, area of expertise, etc.).

Over time the Trustworthy and Responsible AI Resource Center will enable distribution of stakeholder-produced content, case studies, and educational materials…(More)”.

Outsourcing Virtue


Essay by L. M. Sacasas: “To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:

Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.

This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it…(More)”.

Valuing the U.S. Data Economy Using Machine Learning and Online Job Postings


Paper by J Bayoán Santiago Calderón and Dylan Rassier: “With the recent proliferation of data collection and uses in the digital economy, the understanding and statistical treatment of data stocks and flows is of interest among compilers and users of national economic accounts. In this paper, we measure the value of own-account data stocks and flows for the U.S. business sector by summing the production costs of data-related activities implicit in occupations. Our method augments the traditional sum-of-costs methodology for measuring other own-account intellectual property products in national economic accounts by proxying occupation-level time-use factors using a machine learning model and the text of online job advertisements (Blackburn 2021). In our experimental estimates, we find that annual current-dollar investment in own-account data assets for the U.S. business sector grew from $84 billion in 2002 to $186 billion in 2021, with an average annual growth rate of 4.2 percent. Cumulative current-dollar investment for the period 2002–2021 was $2.6 trillion. In addition to the annual current-dollar investment, we present historical-cost net stocks, real growth rates, and effects on value-added by the industrial sector…(More)”.
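As a quick, illustrative sanity check (not part of the paper’s methodology), the compound annual growth rate implied by the investment figures quoted in the abstract can be computed directly:

```python
# Back-of-envelope check of the growth implied by the abstract's figures:
# own-account data investment of $84 billion in 2002 rising to $186 billion in 2021.
start, end = 84.0, 186.0   # billions of current dollars
periods = 2021 - 2002      # 19 annual growth periods

cagr = (end / start) ** (1 / periods) - 1
print(f"Implied nominal CAGR: {cagr:.1%}")  # roughly 4.3% in current dollars
```

The implied nominal rate is roughly 4.3 percent, close to the 4.2 percent average annual growth the paper reports; any small gap likely reflects how the authors average year-over-year rates (or adjust for inflation) rather than an inconsistency in either figure.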

Data Cooperatives as Catalysts for Collaboration, Data Sharing, and the (Trans)Formation of the Digital Commons


Paper by Michael Max Bühler et al.: “Network effects, economies of scale, and lock-in effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship (SDG9), new skills, and jobs (SDG8), especially in small communities (SDG11) and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being (SDG3), and protect digital rights, we propose data cooperatives [1,2] as a vehicle for secure, trusted, and sovereign data exchange [3,4]. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized [5]. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted Application Programming Interfaces (APIs) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This policy paper presents and discusses several transformative use cases for cooperative data governance.
The use cases demonstrate how platform/data-cooperatives, and their novel value creation can be leveraged to take digital commons and value chains to a new level of collaboration while addressing the most pressing community issues. The proposed framework for a digital federated and sovereign reference architecture will create a blueprint for sustainable development both in the Global South and North…(More)”

Knowledge monopolies and the innovation divide: A governance perspective


Paper by Hani Safadi and Richard Thomas Watson: “The rise of digital platforms creates knowledge monopolies that threaten innovation. Their power derives from the imposition of data obligations and persistent coupling on platform participation and their usurpation of the rights to data created by other participants to facilitate information asymmetries. Knowledge monopolies can use machine learning to develop competitive insights unavailable to every other platform participant. This information asymmetry stifles innovation, stokes the growth of the monopoly, and reinforces its ascendency. National or regional governance structures, such as laws and regulatory authorities, constrain economic monopolies deemed not in the public interest. We argue the need for legislation and an associated regulatory mechanism to curtail coercive data obligations and control, eliminate the exploitation of data rights, and prevent mergers and acquisitions that could create or extend knowledge monopolies…(More)”.

National Experimental Wellbeing Statistics (NEWS)


US Census: “The National Experimental Wellbeing Statistics (NEWS) project is a new experimental project to develop improved estimates of income, poverty, and other measures of economic wellbeing.  Using all available survey, administrative, and commercial data, we strive to provide the best possible estimates of our nation and economy.

In this first release, we estimate improved income and poverty statistics for 2018 by addressing several possible sources of bias documented in prior research.  We address biases from (1) unit nonresponse through improved weights, (2) missing income information in both survey and administrative data through improved imputation, and (3) misreporting by combining or replacing survey responses with administrative information.  Reducing survey error using these techniques substantially affects key measures of well-being.  With this initial set of experimental estimates, we estimate median household income is 6.3 percent higher than in survey estimates, and poverty is 1.1 percentage points lower. These changes are driven by subpopulations for which survey error is particularly relevant. For householders aged 65 and over, median household income is 27.3 percent higher, and poverty is 3.3 percentage points lower than in survey estimates. We do not find a significant impact on median household income for householders under 65 or on child poverty. 
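To make the third correction above concrete, here is a toy sketch, with made-up household figures rather than Census data or methodology, of replacing misreported survey income with linked administrative records before recomputing the median:

```python
import statistics

# Hypothetical illustration of one bias correction described above:
# where a linked administrative record exists, it replaces the survey response.
survey_income = {"hh1": 41_000, "hh2": 58_000, "hh3": 23_000, "hh4": 75_000, "hh5": 39_000}
admin_income = {"hh2": 64_000, "hh3": 47_000}  # administrative records for a subset

corrected = {hh: admin_income.get(hh, income) for hh, income in survey_income.items()}

print(statistics.median(survey_income.values()))  # survey-only median: 41000
print(statistics.median(corrected.values()))      # median after replacement: 47000
```

In this contrived example the corrected median is higher than the survey-only median, mirroring the direction of the NEWS finding, though the actual project combines this step with reweighting for nonresponse and improved imputation.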

We will continue research (1) to estimate income at smaller geographies, through increased use of American Community Survey data, (2) addressing other potential sources of bias, (3) releasing additional years of statistics, particularly more timely estimates, and (4) extending the income concepts measured.  As we advance the methods in future releases, we expect to revise these estimates…(More)”.

Data Reboot: 10 Reasons why we need to change how we approach data in today’s society


Article by Stefaan Verhulst and Julia Stamm: “…In the below, we consider 10 reasons why we need to reboot the data conversations and change our approach to data governance…

1. Data is not the new oil: This phrase, sometimes attributed to Clive Humby in 2006, has become a staple of media and other commentaries. In fact, the analogy is flawed in many ways. As Mathias Risse, from the Carr Center for Human Rights Policy at Harvard, points out, oil is scarce, fungible, and rivalrous (can be used and owned by a single entity). Data, by contrast, possesses none of these properties. In particular, as we explain further below, data is shareable (i.e., non-rivalrous); its societal and economic value also greatly increases through sharing. The data-as-oil analogy should thus be discarded, both because it is inaccurate and because it artificially inhibits the potential of data.

2. Not all data is equal: Assessing the value of data can be challenging, leading many organizations to treat (e.g., collect and store) all data equally. The value of data varies widely, however, depending on context, use case, and the underlying properties of the data (the information it contains, its quality, etc.). Establishing metrics or processes to accurately value data is therefore essential. This is particularly true as the amount of data continues to explode, potentially exceeding stakeholders’ ability to store or process all generated data.

3. Weighing risks and benefits of data use: Following a string of high-profile privacy violations in recent years, public and regulatory attention has largely focused on the risks associated with data, and steps required to minimize those risks. Such concerns are, of course, valid and important. At the same time, a sole focus on preventing harms has led to artificial limits on maximizing the potential benefits of data — or, put another way, on the risks of not using data. It is time to apply a more balanced approach, one that weighs risks against benefits. By freeing up large amounts of currently siloed and unused data, such a responsible data framework could unleash huge amounts of social innovation and public benefit….

7. From individual consent to a social license: Social license refers to the informal demands or expectations set by society on how data may be used, reused, and shared. The notion, which originates in the field of environmental resource management, recognizes that social license may not overlap perfectly with legal or regulatory license. In some cases, it may exceed formal approvals for how data can be used, and in others, it may be more limited. Either way, public trust is as essential as legal compliance — a thriving data ecology can only exist if data holders and other stakeholders operate within the boundaries of community norms and expectations.

8. From data ownership to data stewardship: Many of the above propositions add up to an implicit recognition that we need to move beyond notions of ownership when it comes to data. As a non-rivalrous public good, data offers massive potential for the public good and social transformation. That potential varies by context and use case; sharing and collaboration are essential to ensuring that the right data is brought to bear on the most relevant social problems. A notion of stewardship — which recognizes that data is held in public trust, available to be shared in a responsible manner — is thus more helpful (and socially beneficial) than outdated notions of ownership. A number of tools and mechanisms exist to encourage stewardship and sharing. As we have elsewhere written, data collaboratives are among the most promising.

9. Data asymmetries: Data, it was often proclaimed, would be a harbinger of greater societal prosperity and well-being. The era of big data was to usher in a new tide of innovation and economic growth that would lift all boats. The reality has been somewhat different. The era of big data has rather been characterized by persistent, and in many ways worsening, asymmetries. These manifest in inequalities in access to data itself, and, more problematically, inequalities in the way the social and economic fruits of data are being distributed. We thus need to reconceptualize our approach to data, ensuring that its benefits are more equitably spread, and that it does not in fact end up exacerbating the widespread and systematic inequalities that characterize our times.

10. Reconceptualizing self-determination…(More)” (First published as Data Reboot: 10 Gründe, warum wir unseren Umgang mit Daten ändern müssen at 1E9).

The Case for Including Data Stewardship in ESG


Article by Stefaan Verhulst: “Amid all the attention to environmental, social, and governance factors in investing, better known as ESG, there has been relatively little emphasis on governance, and even less on data governance. This is a significant oversight that needs to be addressed, as data governance has a crucial role to play in achieving environmental and social goals. 

Data stewardship in particular should be considered an important ESG practice. Making data accessible for reuse in the public interest can promote social and environmental goals while boosting a company’s efficiency and profitability. And investing in companies with data-stewardship capabilities makes good sense. But first, we need to move beyond current debates on data and ESG.

Several initiatives have begun to focus on data as it relates to ESG. For example, a recent McKinsey report on ESG governance within the banking sector argues that banks “will need to adjust their data architecture, define a data collection strategy, and reorganize their data governance model to successfully manage and report ESG data.” Deloitte recognizes the need for “a robust ESG data strategy.” PepsiCo likewise highlights its ESG Data Governance Program, and Maersk emphasizes data ethics as a key component in its ESG priorities.

These efforts are meaningful, but they are largely geared toward using data to measure compliance with environmental and social commitments. They don’t do much to help us understand how companies are leveraging data as an asset to achieve environmental and social goals. In particular, as I’ve written elsewhere, data stewardship, by which privately held data is reused for public-interest purposes, is an important new component of corporate social responsibility, as well as a key tool in data governance. Too many data-governance efforts are focused simply on using data to measure compliance or impact. We need to move beyond that mindset. Instead, we should adopt a data stewardship approach, where data is made accessible for the public good. There are promising signs of change in this direction…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives for peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.