Realtime Climate


Climate Central: “…launched this tool to help meteorologists and journalists cover connections between weather, news, and climate in real time, and to alert public and private organizations and individuals about particular local conditions related to climate change, its impacts, or its solutions.

Realtime Climate monitors local weather and events across the U.S. and generates alerts when certain conditions are met or expected. These alerts provide links to science-based analyses and visualizations—including locality-specific, high-quality graphics—that can help explain events in the context of climate change….

Alerts are sent when particular conditions occur or are forecast to occur in the next few days. Examples include:

  • Unusual heat (single-day and multi-day)
  • Heat index
  • Unusual rainfall
  • Coastal flooding
  • Air quality
  • Allergies
  • Seasonal shifts (spring leaf-out, etc.)
  • Ice/snow cover (Great Lakes)
  • Cicadas
  • High local or regional production of solar or wind energy

More conditions will be added soon, including:

  • Drought
  • Wildfire
  • and many more…(More)”.

Who’s Afraid of Big Numbers?


Aiyana Green and Steven Strogatz at the New York Times: “Billions” and “trillions” seem to be an inescapable part of our conversations these days, whether the subject is Jeff Bezos’s net worth or President Biden’s proposed budget. Yet nearly everyone has trouble making sense of such big numbers. Is there any way to get a feel for them? As it turns out, there is. If we can relate big numbers to something familiar, they start to feel much more tangible, almost palpable.

For example, consider Senator Bernie Sanders’s signature reference to “millionaires and billionaires.” Politics aside, are these levels of wealth really comparable? Intellectually, we all know that billionaires have a lot more money than millionaires do, but intuitively it’s hard to feel the difference, because most of us haven’t experienced what it’s like to have that much money.

In contrast, everyone knows what the passage of time feels like. So consider how long it would take for a million seconds to tick by. Do the math, and you’ll find that a million seconds is about 12 days. And a billion seconds? That’s about 32 years. Suddenly the vastness of the gulf between a million and a billion becomes obvious. A million seconds is a brief vacation; a billion seconds is a major fraction of a lifetime.
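The conversion is easy to check. A minimal sketch in Python (using a 365.25-day year):

```python
# How long do a million and a billion seconds actually last?
SECONDS_PER_DAY = 60 * 60 * 24             # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY        # about 11.6 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR  # about 31.7 years

print(f"A million seconds is about {million_in_days:.1f} days")
print(f"A billion seconds is about {billion_in_years:.1f} years")
```

The same trick extends to the “trillions” in the headlines: a trillion seconds comes to roughly 31,700 years.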

Comparisons to ordinary distances provide another way to make sense of big numbers. Here in Ithaca, we have a scale model of the solar system known as the Sagan Walk, in which all the planets and the gaps between them are reduced by a factor of five billion. At that scale, the sun becomes the size of a serving plate, Earth is a small pea and Jupiter is a brussels sprout. To walk from Earth to the sun takes just a few dozen footsteps, whereas Pluto is a 15-minute hike across town. Strolling through the solar system, you gain a visceral understanding of astronomical distances that you don’t get from looking at a book or visiting a planetarium. Your body grasps it even if your mind cannot….(More)”.
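The Sagan Walk’s arithmetic can be reproduced directly. In this sketch, the diameters and orbital distances are rounded reference values (not taken from the essay), each divided by the same factor of five billion:

```python
# Scale the solar system down by five billion, as in Ithaca's Sagan Walk.
SCALE = 5_000_000_000

# name: (real diameter in meters, real distance from the sun in meters)
bodies = {
    "Sun":     (1.39e9, 0.0),
    "Earth":   (1.27e7, 1.50e11),
    "Jupiter": (1.43e8, 7.78e11),
    "Pluto":   (2.38e6, 5.91e12),
}

# Model size in centimeters, model walking distance in meters
model = {
    name: (diameter / SCALE * 100, distance / SCALE)
    for name, (diameter, distance) in bodies.items()
}

for name, (size_cm, walk_m) in model.items():
    print(f"{name:8s} about {size_cm:5.1f} cm across, {walk_m:7.1f} m from the model sun")
```

At that scale the sun is about 28 cm across (a serving plate), Earth is a 2.5 mm dot roughly 30 m away, and Pluto sits about 1.2 km out, consistent with the 15-minute walk the essay describes.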

Why Business Schools Need to Teach Experimentation


Elizabeth R. Tenney, Elaine Costa, and Ruchi M. Watson at Harvard Business Review: “…The value of experiments in nonscientific organizations is quite high. Instead of calling in managers to solve every puzzle or dispute large and small (Should we make the background yellow or blue? Should we improve basic functionality or add new features? Are staff properly supported and incentivized to provide rapid responses?), teams can run experiments and measure outcomes of interest and, armed with new data, decide for themselves, or at least put forward a proposal grounded in relevant information. The data also provide tangible deliverables to show to stakeholders to demonstrate progress and accountability.

Experiments spur innovation. They can provide proof of concept and a degree of confidence in new ideas before taking bigger risks and scaling up. When done well, with data collected and interpreted objectively, experiments can also provide a corrective for faulty intuition, inaccurate assumptions, or overconfidence. The scientific method (which powers experiments) is the gold standard of tools to combat bias and answer questions objectively.

But as more and more companies are embracing a culture of experimentation, they face a major challenge: talent. Experiments are difficult to do well. Some challenges include special statistical knowledge, clear problem definition, and interpretation of the results. And it’s not enough to have the skillset. Experiments should ideally be done iteratively, building on prior knowledge and working toward deeper understanding of the question at hand. There are also the issues of managers’ preparedness to override their intuition when data disagree with it, and their ability to navigate hierarchy and bureaucracy to implement changes based on the experiments’ outcomes.

Some companies seem to be hiring small armies of PhDs to meet these competency challenges. (Amazon, for example, employs more than 100 PhD economists.) This isn’t surprising, given that PhDs receive years of training — and that the shrinking tenure-track market in academia has created a glut of PhDs. Other companies are developing employees in-house, training them in narrow, industry-specific methodologies. For example, General Mills recently hired for their innovator incubator group, called g-works, advertising for employees who are “using entrepreneurial skills and an experimental mindset” in what they called a “test and learn environment, with rapid experimentation to validate or invalidate assumptions.” Other companies — including Fidelity, LinkedIn, and Aetna — have hired consultants to conduct experiments, among them Irrational Labs, cofounded by Duke University’s Dan Ariely and the behavioral economist Kristen Berman….(More)”.

Moving up: Promoting workers’ upward mobility using network analysis


Report by Marcela Escobari, Ian Seyal and Carlos Daboin Contreras: “The U.S. economy faces a mobility crisis. After decades of rising inequality, stagnating wages, and a shrinking middle class, many American workers find it harder and harder to get ahead. COVID-19 accentuated a stark divide, battering a two-tiered labor force with millions of low-wage workers lacking job security and benefits—as the long-term trends of globalization, digitalization, and automation continue to displace jobs and disrupt career paths.

To address this crisis and create an economy that works for everyone, policymakers and business leaders must act boldly and urgently. But the challenge of low mobility is complex and driven by many factors, with significant heterogeneity across regions, sectors, and demographic groups. When diagnostics fail to disentangle the complexity, our standard policy responses—centered on education, reskilling, and other reemployment services to help workers adapt—fall short.

This report offers a new approach to better understand the contours of mobility: Who is falling behind, where, and by how much. Using data on hundreds of thousands of real workers’ occupational transitions, we use network analysis to create a multidimensional map of the labor market, revealing a landscape riddled with mobility gaps and barriers. Workers in low-wage occupations face particular hurdles, and persistent racial and gender disparities hold some workers back more than others.
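As a minimal illustration of the method (the occupations, wages, and transitions below are invented, not the report’s data), observed job-to-job moves can be encoded as a weighted directed graph, from which simple mobility measures follow:

```python
# Toy occupational-transition network built from observed job changes.
from collections import defaultdict

# (from_occupation, to_occupation), one pair per observed worker move
transitions = [
    ("cashier", "retail supervisor"),
    ("cashier", "food prep"),
    ("food prep", "cashier"),
    ("retail supervisor", "store manager"),
    ("cashier", "retail supervisor"),
]

median_wage = {  # hypothetical hourly wages
    "cashier": 13, "food prep": 13, "retail supervisor": 18, "store manager": 25,
}

# Edge weight = number of observed moves along that edge
graph = defaultdict(lambda: defaultdict(int))
for src, dst in transitions:
    graph[src][dst] += 1

def upward_share(occupation):
    """Fraction of moves out of `occupation` that lead to higher-paying work."""
    moves = graph[occupation]
    total = sum(moves.values())
    up = sum(n for dst, n in moves.items()
             if median_wage[dst] > median_wage[occupation])
    return up / total if total else 0.0

print(f"cashier: {upward_share('cashier'):.2f} of observed moves are upward")
```

On real transition data, measures like this, computed per occupation, region, or demographic group, are what reveal the mobility gaps and barriers the report maps.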

Even so, many workers travel on pathways to economic mobility. By showing where existing pathways can be expanded and where new ones are needed, this report helps policymakers, community organizations, higher education institutions, and business leaders better understand the challenge of mobility and see where and how to intervene, in order to help more workers move up faster….(More)”.

Serving the Citizens—Not the Bureaucracy


Report by Sascha Haselmayer: “In a volatile and changing world, one government function is in a position to address challenges ranging from climate change to equity to local development: procurement. Too long confined to a mission of cost savings and compliance, procurement—particularly at the local level, where decisions have a real and immediate impact on citizens—has the potential to become a significant catalyst of change.

In 2021 alone, cities around the globe will spend an estimated $6.4 trillion, or 8 percent of GDP, on procurement. Despite this vast buying power, city procurement faces several challenges, including resistance to the idea that procurement can be creative, strategic, economically formidable—and even an affirming experience for professional staff, citizens, civil society organizations, and other stakeholders.

Unfortunately, city procurement is far from ready to overcome these hurdles. Interviews with city leaders and procurement experts point to a common failing: city procurement today is structured to serve bureaucracies—not citizens.

City procurement is in a state of creative tension. Leaders want it to be a creative engine for change, but they underfund procurement teams and foster a compliance culture that leaves no room for much-needed creative and critical thinking. In short: procurement needs a mission.

In this report, we propose cities reimagine procurement as a public service, which can unlock a world of ideas for change and improvement. The vision presented in this report is based on six strategic measures that can help cities get started. The path forward involves not only taking concrete actions, such as reducing barriers to participation of diverse suppliers, but also adopting a new mindset about the purpose and potential of procurement. By doing so, cities can reduce costs and develop creative, engaging solutions to citywide problems. We also offer detailed insights, ideas, and best practices for how practitioners can realize this new vision.

Better city procurement offers the promise of a vast return on investment. Cost savings stand to exceed 15 percent across the board, and local development may benefit by multiplying the participation of small and disadvantaged businesses. Clarity of mission and the required professional skills can lead to new, pioneering innovations. Technology and the right data can lead to sustained performance and better outcomes. A healthy supplier ecosystem can deliver new supplier talent that is aligned with the goals of the city to reduce carbon emissions, serve complex needs, and diversify the supply chain.

All of this not in service of the bureaucracy but of the citizen….(More)”.

Cultivating an Inclusive Culture Through Personal Networks


Essay by Rob Cross, Kevin Oakes, and Connor Cross: “Many organizations have ramped up their investments in diversity, equity, and inclusion — largely in the form of anti-bias training, employee resource groups, mentoring programs, and added DEI functions and roles. But gauging the effectiveness of these measures has been a challenge….

We’re finding that organizations can get a clearer picture of employee experience by analyzing people’s network connections. They can begin to see whether DEI programs are producing the collaboration and interactions needed to help people from various demographic groups gain their footing quickly and become truly integrated.

In particular, network analysis reveals when and why people seek out individuals for information, ideas, career advice, personal support, or mentorship. In the Connected Commons, a research consortium, we have mapped organizational networks for over 20 years and have frequently been able to overlay gender data on network diagrams to identify drivers of inclusion. Extensive quantitative and qualitative research on this front has helped us understand behaviors that promote more rapid and effective integration of women after they are hired. For example, research reveals the importance of fostering collaboration across functional and geographic divides (while avoiding collaborative burnout) and cultivating energy through network connections….(More)”.
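A stripped-down version of that overlay (names, ties, and group labels invented for illustration): attach a demographic attribute to each node of the collaboration network and measure how often each person’s ties cross group lines.

```python
# Cross-group tie share: a simple inclusion signal from network data.
from collections import defaultdict

edges = [("ana", "ben"), ("ana", "cho"), ("ben", "cho"),
         ("dee", "eli"), ("dee", "ana"), ("eli", "ben")]
group = {"ana": "W", "cho": "W", "dee": "W", "ben": "M", "eli": "M"}

neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

def cross_group_share(person):
    """Fraction of a person's ties that connect to a different group."""
    ties = neighbors[person]
    return sum(group[t] != group[person] for t in ties) / len(ties)

for person in sorted(neighbors):
    print(f"{person}: {cross_group_share(person):.2f}")
```

Persistently low cross-group shares for newly hired members of one group, relative to their peers, are the kind of pattern that flags slow integration before it shows up in attrition numbers.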

Is It Time for a U.S. Department of Science?



Essay by Anthony Mills: “The Biden administration made history earlier this year by elevating the director of the Office of Science and Technology Policy to a cabinet-level post. There have long been science advisory bodies within the White House, and there are a number of executive agencies that deal with science, some of them cabinet-level. But this will be the first time in U.S. history that the president’s science advisor will be part of his cabinet.

It is a welcome effort to restore the integrity of science, at a moment when science has been thrust onto the center stage of public life — as something indispensable to political decision-making as well as a source of controversy and distrust. Some have urged the administration to go even further, calling for the creation of a new federal department of science. Such calls to centralize science have a long history, and have grown louder during the coronavirus pandemic, spurred by our government’s haphazard response.

But more centralization is not the way to restore the integrity of science. Centralization has its place, especially during national emergencies. Too much of it, however, is bad for science. As a rule, science flourishes in a decentralized research environment, which balances the need for public support, effective organization, and political accountability with scientific independence and institutional diversity. The Biden administration’s move is welcome. But there is risk in what it could lead to next: an American Ministry of Science. And there is an opportunity to create a needed alternative….(More)”.

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. Credit: N. Hanacek/NIST

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.
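The publication’s central point, that identical factor scores can produce different trust judgments once task-specific weights enter, reduces to a small weighted sum. The factor names, scores, and weights below are invented for illustration and are not NIST’s actual nine factors:

```python
# Same factor scores, different task weights, different trust judgments.
factors = {"accuracy": 0.9, "explainability": 0.6, "reliability": 0.8}

# Hypothetical per-task weights (each set sums to 1); the high-stakes task
# puts more weight on explainability, which this system scores lowest on.
weights_music  = {"accuracy": 0.3, "explainability": 0.1, "reliability": 0.6}
weights_cancer = {"accuracy": 0.3, "explainability": 0.6, "reliability": 0.1}

def weighted_trust(scores, weights):
    """Weighted sum of factor scores under a task-specific weighting."""
    return sum(scores[f] * weights[f] for f in scores)

print(f"music selection:  {weighted_trust(factors, weights_music):.2f}")
print(f"cancer diagnosis: {weighted_trust(factors, weights_cancer):.2f}")
```

Under these invented numbers the music task scores higher than the diagnostic task despite identical factor scores, which is exactly the asymmetry the NIST illustration describes.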

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, which we refer to as “malattributed comments” as discussed below, refer to comments falsely attributed to persons by whom they were not, in fact, submitted. Computer-generated comments are generated not by humans, but rather by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit comments to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there are a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).
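One concrete piece of the processing problem described above, flagging identical and near-identical comments in a large docket, can be sketched with the standard library alone (the sample comments and the 0.9 similarity threshold are invented):

```python
# Flag exact and near duplicates among submitted comments.
import hashlib
from difflib import SequenceMatcher

comments = [
    "I oppose this rule because it harms small businesses.",
    "I oppose this rule because it harms small businesses.",  # exact duplicate
    "I oppose this rule, because it harms small business!",   # near duplicate
    "The proposed emissions threshold should be lowered.",
]

# Exact duplicates: normalized text hashes to the same digest.
digests = [hashlib.sha256(c.lower().strip().encode()).hexdigest() for c in comments]
exact_dupes = len(digests) - len(set(digests))

def is_near_duplicate(a, b, threshold=0.9):
    """True when two comments are textually similar above the threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

near = is_near_duplicate(comments[0], comments[2])
print(f"exact duplicates: {exact_dupes}, near duplicate found: {near}")
```

At agency scale the pairwise comparison would give way to locality-sensitive hashing or embedding-based clustering, but the batching logic is the same.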

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice-and-comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

How volunteer observers can help protect biodiversity


The Economist: “Ecology lends itself to being helped along by the keen layperson perhaps more than any other science. For decades, birdwatchers have recorded their sightings and sent them to organisations like Britain’s Royal Society for the Protection of Birds, or the Audubon Society in America, contributing precious data about population size, trends, behaviour and migration. These days, any smartphone connected to the internet can be pointed at a plant to identify a species and add a record to a regional data set.

Social-media platforms have further transformed things, adding big data to weekend ecology. In 2002, the Cornell Lab of Ornithology in New York created eBird, a free app available in more than 30 languages that lets twitchers upload and share pictures and recordings of birds, labelled by time, location and other criteria. More than 100m sightings are now uploaded annually, and the number is growing by 20% each year. In May the group marked its billionth observation. The Cornell group also runs an audio library with 1m bird calls, and the Merlin app, which uses eBird data to identify species from pictures and descriptions….(More)”.