Cultivating an Inclusive Culture Through Personal Networks


Essay by Rob Cross, Kevin Oakes, and Connor Cross: “Many organizations have ramped up their investments in diversity, equity, and inclusion — largely in the form of anti-bias training, employee resource groups, mentoring programs, and added DEI functions and roles. But gauging the effectiveness of these measures has been a challenge….

We’re finding that organizations can get a clearer picture of employee experience by analyzing people’s network connections. They can begin to see whether DEI programs are producing the collaboration and interactions needed to help people from various demographic groups gain their footing quickly and become truly integrated.

In particular, network analysis reveals when and why people seek out individuals for information, ideas, career advice, personal support, or mentorship. In the Connected Commons, a research consortium, we have mapped organizational networks for over 20 years and have frequently been able to overlay gender data on network diagrams to identify drivers of inclusion. Extensive quantitative and qualitative research on this front has helped us understand behaviors that promote more rapid and effective integration of women after they are hired. For example, research reveals the importance of fostering collaboration across functional and geographic divides (while avoiding collaborative burnout) and cultivating energy through network connections….(More)”
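
To make the network-overlay idea concrete, here is a minimal sketch, not the Connected Commons methodology, that builds a toy collaboration network with the networkx library, overlays a hypothetical gender attribute, and computes the share of each person's ties that cross demographic lines, one simple proxy for how integrated a newcomer's connections are. All names, edges, and attributes are invented for illustration.

    # Toy sketch: overlaying demographic data on an organizational network.
    # Not the Connected Commons methodology; it only illustrates the kind of
    # question network analysis can answer about integration after hiring.
    import networkx as nx

    # Hypothetical collaboration network: an edge means two people regularly
    # exchange information, advice, or support.
    G = nx.Graph()
    G.add_edges_from([
        ("Ana", "Ben"), ("Ana", "Chris"), ("Ana", "Dee"),
        ("Ben", "Chris"), ("Dee", "Eli"), ("Eli", "Fay"), ("Fay", "Ben"),
    ])

    # Hypothetical demographic attribute overlaid on the network.
    gender = {"Ana": "F", "Ben": "M", "Chris": "M", "Dee": "F", "Eli": "M", "Fay": "F"}

    def cross_group_share(graph, attrs, person):
        """Share of a person's ties that cross the demographic attribute."""
        neighbors = list(graph.neighbors(person))
        if not neighbors:
            return 0.0
        crossing = sum(1 for n in neighbors if attrs[n] != attrs[person])
        return crossing / len(neighbors)

    for person in sorted(G.nodes):
        share = cross_group_share(G, gender, person)
        print(f"{person}: {share:.0%} of ties cross gender lines")

A measure like this, tracked over a new hire's first months, is one simple way to see whether inclusion programs are actually producing cross-group collaboration rather than isolated clusters.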

Is It Time for a U.S. Department of Science?



Essay by Anthony Mills: “The Biden administration made history earlier this year by elevating the director of the Office of Science and Technology Policy to a cabinet-level post. There have long been science advisory bodies within the White House, and there are a number of executive agencies that deal with science, some of them cabinet-level. But this will be the first time in U.S. history that the president’s science advisor will be part of his cabinet.

It is a welcome effort to restore the integrity of science, at a moment when science has been thrust onto the center stage of public life — as something indispensable to political decision-making as well as a source of controversy and distrust. Some have urged the administration to go even further, calling for the creation of a new federal department of science. Such calls to centralize science have a long history, and have grown louder during the coronavirus pandemic, spurred by our government’s haphazard response.

But more centralization is not the way to restore the integrity of science. Centralization has its place, especially during national emergencies. Too much of it, however, is bad for science. As a rule, science flourishes in a decentralized research environment, which balances the need for public support, effective organization, and political accountability with scientific independence and institutional diversity. The Biden administration’s move is welcome. But there is risk in what it could lead to next: an American Ministry of Science. And there is an opportunity to create a needed alternative….(More)”.

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST
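
As a rough illustration of the weighting idea in the caption, and not a formula from the NIST draft, the toy calculation below gives two AI systems identical ratings on a few placeholder factors, then applies different user weights that reflect the risk of each task. The factor names, ratings, and weights are all invented for the example.

    # Toy illustration (not NIST's method): identical factor ratings can yield
    # different overall trust once the weights reflect the riskiness of the task.

    # Hypothetical ratings (0 to 1) on a few illustrative factors; the NIST draft
    # discusses nine such factors, and these labels are placeholders.
    ratings = {"accuracy": 0.9, "explainability": 0.5, "reliability": 0.8}

    # Hypothetical user weights: a low-stakes task tolerates weak explainability,
    # a high-stakes task does not. Each set of weights sums to 1.
    weights_music = {"accuracy": 0.5, "explainability": 0.1, "reliability": 0.4}
    weights_diagnosis = {"accuracy": 0.4, "explainability": 0.4, "reliability": 0.2}

    def weighted_trust(ratings, weights):
        """Weighted average of factor ratings."""
        return sum(ratings[f] * weights[f] for f in ratings)

    print(f"Music selection:  {weighted_trust(ratings, weights_music):.2f}")      # 0.82
    print(f"Cancer diagnosis: {weighted_trust(ratings, weights_diagnosis):.2f}")  # 0.72

Same ratings, lower trust for the riskier task: the pattern the caption describes.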

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, referred to below as “malattributed comments,” are comments falsely attributed to persons who did not, in fact, submit them. Computer-generated comments are generated not by humans but by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit them to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there is a suite of risks of harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues in both criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” be granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).
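
As one illustration of that processing challenge, and a sketch rather than a method from the report, the short script below uses Python's standard difflib module to flag comment submissions that are nearly identical, the signature of a mass comment campaign or of template-generated text. The sample comments are invented.

    # Minimal sketch: flag nearly identical comments in a docket.
    # Not drawn from the report; illustrative only.
    from difflib import SequenceMatcher

    comments = [
        "I oppose this rule because it raises costs for small businesses.",
        "I oppose this rule because it raises costs for small business.",
        "I strongly oppose this rule, which raises costs for small businesses!",
        "The proposed emissions thresholds should be phased in over five years.",
    ]

    def near_duplicates(texts, threshold=0.9):
        """Return index pairs of comments whose similarity exceeds the threshold."""
        pairs = []
        for i in range(len(texts)):
            for j in range(i + 1, len(texts)):
                ratio = SequenceMatcher(None, texts[i].lower(), texts[j].lower()).ratio()
                if ratio >= threshold:
                    pairs.append((i, j, round(ratio, 2)))
        return pairs

    for i, j, score in near_duplicates(comments):
        print(f"Comments {i} and {j} appear to share a template (similarity {score})")

difflib is used here only because it ships with Python; at real docket scale an agency would more likely hash, cluster, or otherwise index comments, but the basic move, measuring textual similarity in bulk, is the same.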

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice-and-comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

How volunteer observers can help protect biodiversity


The Economist: “Ecology lends itself to being helped along by the keen layperson perhaps more than any other science. For decades, birdwatchers have recorded their sightings and sent them to organisations like Britain’s Royal Society for the Protection of Birds, or the Audubon Society in America, contributing precious data about population size, trends, behaviour and migration. These days, any smartphone connected to the internet can be pointed at a plant to identify a species and add a record to a regional data set.

Social-media platforms have further transformed things, adding big data to weekend ecology. In 2002, the Cornell Lab of Ornithology in New York created eBird, a free app available in more than 30 languages that lets twitchers upload and share pictures and recordings of birds, labelled by time, location and other criteria. More than 100m sightings are now uploaded annually, and the number is growing by 20% each year. In May the group marked its billionth observation. The Cornell group also runs an audio library with 1m bird calls, and the Merlin app, which uses eBird data to identify species from pictures and descriptions….(More)”.

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


Report by Pew Research Center: “Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk.

They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people’s newsfeeds and video choices.

They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent van Gogh and create music that sounds quite like the Beatles and Bach.

Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer.

As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030….(More)”

Crisis Innovation Policy from World War II to COVID-19


Paper by Daniel P. Gross & Bhaven N. Sampat: “Innovation policy can be a crucial component of governments’ responses to crises. Because speed is a paramount objective, crisis innovation may also require different policy tools than innovation policy in non-crisis times, raising distinct questions and tradeoffs. In this paper, we survey the U.S. policy response to two crises where innovation was crucial to a resolution: World War II and the COVID-19 pandemic. After providing an overview of the main elements of each of these efforts, we discuss how they compare, and to what degree their differences reflect the nature of the central innovation policy problems and the maturity of the U.S. innovation system. We then explore four key tradeoffs for crisis innovation policy—top-down vs. bottom-up priority setting, concentrated vs. distributed funding, patent policy, and managing disruptions to the innovation system—and provide a logic for policy choices. Finally, we describe the longer-run impacts of the World War II effort and use these lessons to speculate on the potential long-run effects of the COVID-19 crisis on innovation policy and the innovation system….(More)”.

Bridging the global digital divide: A platform to advance digital development in low- and middle-income countries


Paper by George Ingram: “The world is in the midst of a fast-moving, Fourth Industrial Revolution (also known as 4IR or Industry 4.0), driven by digital innovation in the use of data, information, and technology. This revolution is affecting everything from how we communicate, to where and how we work, to education and health, to politics and governance. COVID-19 has accelerated this transformation as individuals, companies, communities, and governments move to virtual engagement. We are still discovering the advantages and disadvantages of a digital world.

This paper outlines an initiative that would allow the United States, along with a range of public and private partners, to seize the opportunity to reduce the digital divide between nations and people in a way that benefits inclusive economic advancement in low- and middle-income countries, while also advancing the economic and strategic interests of the United States and its partner countries.

As life increasingly revolves around digital technologies and innovation, countries are in a race to digitalize at a speed that threatens to leave behind the less advantaged—countries and underserved groups. Data in this paper documents the scope of the digital divide. With the Sustainable Development Goals (SDGs), the world committed to reduce poverty and advance all aspects of the livelihood of nations and people. Countries that fail to progress along the path to 5G broadband cellular networks will be unable to unlock the benefits of the digital revolution and be left behind. Donors are recognizing this and offering solutions, but in a one-off, disconnected fashion. Absent a comprehensive partnership approach that takes advantage of the comparative advantage of each, these well-intended efforts will not aggregate to the scale and speed required by the challenge….(More)”.

What Data About You Can the Government Get From Big Tech?


Jack Nicas at the New York Times: “The Justice Department, starting in the early days of the Trump administration, secretly sought data from some of the biggest tech companies about journalists, Democratic lawmakers and White House officials as part of wide-ranging investigations into leaks and other matters, The New York Times reported last week.

The revelations, which put the companies in the middle of a clash over the Trump administration’s efforts to find the sources of news coverage, raised questions about what sorts of data tech companies collect on their users, and how much of it is accessible to law enforcement authorities.

Here’s a rundown:

What kinds of data do the companies have? All sorts. Beyond basic data like users’ names, addresses and contact information, tech companies like Google, Apple, Microsoft and Facebook also often have access to the contents of their users’ emails, text messages, call logs, photos, videos, documents, contact lists and calendars.

Is all of that data available to law enforcement? Most of it is. But which data law enforcement can get depends on the sort of request they make.

Perhaps the most common and basic request is a subpoena. U.S. government agencies and prosecutors can often issue subpoenas without approval from a judge, and lawyers can issue them as part of open court cases. Subpoenas are often used to cast a wide net for basic information that can help build a case and provide evidence needed to issue more powerful requests….(More)”.

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper that explores social and environmental risks and expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.