Who’s Afraid of Big Numbers?


Aiyana Green and Steven Strogatz at the New York Times: “Billions” and “trillions” seem to be an inescapable part of our conversations these days, whether the subject is Jeff Bezos’s net worth or President Biden’s proposed budget. Yet nearly everyone has trouble making sense of such big numbers. Is there any way to get a feel for them? As it turns out, there is. If we can relate big numbers to something familiar, they start to feel much more tangible, almost palpable.

For example, consider Senator Bernie Sanders’s signature reference to “millionaires and billionaires.” Politics aside, are these levels of wealth really comparable? Intellectually, we all know that billionaires have a lot more money than millionaires do, but intuitively it’s hard to feel the difference, because most of us haven’t experienced what it’s like to have that much money.

In contrast, everyone knows what the passage of time feels like. So consider how long it would take for a million seconds to tick by. Do the math, and you’ll find that a million seconds is about 12 days. And a billion seconds? That’s about 32 years. Suddenly the vastness of the gulf between a million and a billion becomes obvious. A million seconds is a brief vacation; a billion seconds is a major fraction of a lifetime.
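For readers who want to check the arithmetic, a quick sketch (using a 365.25-day year as our rounding choice):

```python
# Back-of-the-envelope check of the million-vs-billion comparison above.
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY        # close to 12 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR  # close to 32 years

print(f"A million seconds is about {million_in_days:.1f} days")
print(f"A billion seconds is about {billion_in_years:.1f} years")
```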

Comparisons to ordinary distances provide another way to make sense of big numbers. Here in Ithaca, we have a scale model of the solar system known as the Sagan Walk, in which all the planets and the gaps between them are reduced by a factor of five billion. At that scale, the sun becomes the size of a serving plate, Earth is a small pea and Jupiter is a brussels sprout. To walk from Earth to the sun takes just a few dozen footsteps, whereas Pluto is a 15-minute hike across town. Strolling through the solar system, you gain a visceral understanding of astronomical distances that you don’t get from looking at a book or visiting a planetarium. Your body grasps it even if your mind cannot….(More)”.
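The Sagan Walk arithmetic is easy to verify; a sketch using rounded astronomical figures (diameters and orbital distances in meters, our approximations):

```python
# Rough check of the Sagan Walk's 1-to-5-billion scale.
SCALE = 5e9

sun_diameter = 1.39e9 / SCALE    # about 0.28 m: roughly a serving plate
earth_diameter = 1.27e7 / SCALE  # about 2.5 mm: a small pea
earth_to_sun = 1.50e11 / SCALE   # about 30 m: a few dozen footsteps
sun_to_pluto = 5.91e12 / SCALE   # about 1.2 km: a 15-minute walk

print(f"Sun: {sun_diameter * 100:.0f} cm across")
print(f"Earth: {earth_diameter * 1000:.1f} mm across")
print(f"Earth to Sun: {earth_to_sun:.0f} m")
print(f"Sun to Pluto: {sun_to_pluto / 1000:.2f} km")
```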

Why Business Schools Need to Teach Experimentation


Elizabeth R. Tenney, Elaine Costa, and Ruchi M. Watson at Harvard Business Review: “…The value of experiments in nonscientific organizations is quite high. Instead of calling in managers to solve every puzzle or dispute large and small (Should we make the background yellow or blue? Should we improve basic functionality or add new features? Are staff properly supported and incentivized to provide rapid responses?), teams can run experiments and measure outcomes of interest and, armed with new data, decide for themselves, or at least put forward a proposal grounded in relevant information. The data also provide tangible deliverables to show to stakeholders to demonstrate progress and accountability.
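As an aside, the kind of lightweight experiment the authors describe (the yellow-vs-blue question) is often analyzed with a standard two-proportion z-test; the numbers below are made up purely for illustration:

```python
import math

# Hypothetical A/B experiment: did a blue background change the
# click-through rate relative to yellow?
def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for the difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Made-up numbers: 120/1000 clicks on yellow vs. 150/1000 on blue.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented counts the difference sits right at the conventional significance threshold, which is exactly the sort of result that should prompt another, larger iteration rather than a final verdict.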

Experiments spur innovation. They can provide proof of concept and a degree of confidence in new ideas before taking bigger risks and scaling up. When done well, with data collected and interpreted objectively, experiments can also provide a corrective for faulty intuition, inaccurate assumptions, or overconfidence. The scientific method (which powers experiments) is the gold standard of tools to combat bias and answer questions objectively.

But as more and more companies embrace a culture of experimentation, they face a major challenge: talent. Experiments are difficult to do well. They demand specialized statistical knowledge, clear problem definition, and careful interpretation of results. And the skill set alone is not enough. Experiments should ideally be done iteratively, building on prior knowledge and working toward a deeper understanding of the question at hand. There are also the issues of managers’ readiness to override their intuition when the data disagree with it, and their ability to navigate hierarchy and bureaucracy to implement changes based on the experiments’ outcomes.

Some companies seem to be hiring small armies of PhDs to meet these competency challenges. (Amazon, for example, employs more than 100 PhD economists.) This isn’t surprising, given that PhDs receive years of training — and that the shrinking tenure-track market in academia has created a glut of PhDs. Other companies are developing employees in-house, training them in narrow, industry-specific methodologies. For example, General Mills recently recruited for its innovation incubator group, called g-works, advertising for employees who are “using entrepreneurial skills and an experimental mindset” in what it called a “test and learn environment, with rapid experimentation to validate or invalidate assumptions.” Other companies — including Fidelity, LinkedIn, and Aetna — have hired consultants to conduct experiments, among them Irrational Labs, cofounded by Duke University’s Dan Ariely and the behavioral economist Kristen Berman….(More)”.

Scientific publishing’s new weapon for the next crisis: the rapid correction


Gideon Meyerowitz-Katz and James Heathers at STATNews: “If evidence of errors does emerge, the process for correcting or withdrawing a paper tends to be alarmingly long. Late last year, for example, David Cox, the IBM director of the MIT-IBM Watson AI Lab, discovered that his name was included as an author on two papers he had never written. After he wrote to the journals involved, it took almost three months for them to remove his name and the papers themselves. In cases of large-scale research fraud, correction times can be measured in years.

Imagine now that the issue with a manuscript is not a simple matter of retracting a fraudulent paper, but a more complex methodological or statistical problem that undercuts the study’s conclusions. In this context, requests for clarification — or retraction — can languish for years. The process can outlast the tenure of the responsible editor, resetting the clock on the entire ordeal, or the journal itself can cease publication, leaving an erroneous article in the public domain without oversight, forever….

This situation must change, and change quickly. Any crisis that requires scientific information in a hurry will produce hurried science, and hurried science often includes miscalculated analyses, poor experimental design, inappropriate statistical models, impossible numbers, or even fraud. Having the agility to produce and publicize work like this without having the ability to correct it just as quickly is a curiously persistent oversight in the global scientific enterprise. If corrections occur only long after the research has already been used to treat people across the world, what use are they at all?

There are some small steps in the right direction. The open-source website PubPeer aggregates formal scientific criticism, and when shoddy research makes it into the literature, hordes of critics may leave comments and questions on the site within hours. Twitter, likewise, is often abuzz with spectacular scientific critiques almost as soon as studies go up online.

But these volunteer efforts are not enough. Even when errors are glaring and obvious, the median response from academic journals is to deal with them grudgingly or not at all. Academia in general takes a faintly disapproving tone toward crowd-sourced error correction, ignoring the fact that it is often the only mechanism that exists to do this vital work.

Scientific publishing needs to stop treating error-checking as a slightly inconvenient side note and make it a core part of academic research. In a perfect world, entire departmental sections would be dedicated to making sure that published research is correct and reliable. But even a few positions would be a fine start. Young researchers could be given kudos not just for every citation in their Google Scholar profile but also for every post-publication review they undertake….(More)”

When Graphs Are a Matter of Life and Death


Essay by Hannah Fry at The New Yorker: “John Carter has only an hour to decide. The most important auto race of the season is looming; it will be broadcast live on national television and could bring major prize money. If his team wins, it will get a sponsorship deal and a chance to start making some real profits for a change.

There’s just one problem. In seven of the past twenty-four races, the engine in the Carter Racing car has blown out. An engine failure live on TV will jeopardize sponsorships—and the driver’s life. But withdrawing has consequences, too. The wasted entry fee means finishing the season in debt, and the team won’t be happy about the missed opportunity for glory. As Burns’s First Law of Racing says, “Nobody ever won a race sitting in the pits.”

One of the engine mechanics has a hunch about what’s causing the blowouts. He thinks that the engine’s head gasket might be breaking in cooler weather. To help Carter decide what to do, a graph is devised that shows the conditions during each of the blowouts: the outdoor temperature at the time of the race plotted against the number of breaks in the head gasket. The dots are scattered into a sort of crooked smile across a range of temperatures from about fifty-five degrees to seventy-five degrees.


The upcoming race is forecast to be especially cold, just forty degrees, well below anything the cars have experienced before. So: race or withdraw?

This case study, based on real data, and devised by a pair of clever business professors, has been shown to students around the world for more than three decades. Most groups presented with the Carter Racing story look at the scattered dots on the graph and decide that the relationship between temperature and engine failure is inconclusive. Almost everyone chooses to race. Almost no one looks at that chart and asks to see the seventeen missing data points—the data from those races which did not end in engine failure.


As soon as those points are added, however, the terrible risk of a cold race becomes clear. Every race in which the engine behaved properly was conducted when the temperature was higher than sixty-five degrees; every single attempt that occurred in temperatures at or below sixty-five degrees resulted in engine failure. Tomorrow’s race would almost certainly end in catastrophe.
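The lesson is easy to reproduce. A small sketch with hypothetical (temperature °F, outcome) records, loosely modeled on the case as described above, shows how the failures alone mislead while the full data set does not:

```python
# Hypothetical records loosely modeled on the Carter Racing case.
failures = [55, 57, 58, 63, 66, 70, 75]                 # the 7 blowouts
successes = [66, 67, 67, 68, 69, 70, 70, 72, 73, 75,
             76, 76, 78, 79, 80, 81, 81]                # the 17 "missing" races

# Looking at failures alone, temperatures seem scattered and inconclusive.
print("Failure temps span", min(failures), "to", max(failures))

# Adding the full data set back in reveals the pattern: every race at or
# below 65 degrees ended in engine failure.
cold_races = [t for t in failures + successes if t <= 65]
cold_failures = [t for t in failures if t <= 65]
print(f"Races at or below 65F: {len(cold_races)}, "
      f"all of which failed: {cold_failures == cold_races}")
```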

One more twist: the points on the graph are real but have nothing to do with auto racing. The first graph contains data compiled the evening before the disastrous launch of the space shuttle Challenger, in 1986….(More)”.

Cultivating an Inclusive Culture Through Personal Networks


Essay by Rob Cross, Kevin Oakes, and Connor Cross: “Many organizations have ramped up their investments in diversity, equity, and inclusion — largely in the form of anti-bias training, employee resource groups, mentoring programs, and added DEI functions and roles. But gauging the effectiveness of these measures has been a challenge….

We’re finding that organizations can get a clearer picture of employee experience by analyzing people’s network connections. They can begin to see whether DEI programs are producing the collaboration and interactions needed to help people from various demographic groups gain their footing quickly and become truly integrated.

In particular, network analysis reveals when and why people seek out individuals for information, ideas, career advice, personal support, or mentorship. In the Connected Commons, a research consortium, we have mapped organizational networks for over 20 years and have frequently been able to overlay gender data on network diagrams to identify drivers of inclusion. Extensive quantitative and qualitative research on this front has helped us understand behaviors that promote more rapid and effective integration of women after they are hired. For example, research reveals the importance of fostering collaboration across functional and geographic divides (while avoiding collaborative burnout) and cultivating energy through network connections….(More)”
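To make the idea concrete, here is a toy sketch (not the Connected Commons methodology; the names, groups, and ties are invented) of overlaying a demographic attribute on a network and asking what share of each person’s ties cross group lines:

```python
from collections import defaultdict

# Hypothetical people, group labels, and collaboration ties.
group = {"Ana": "A", "Bo": "B", "Cy": "A", "Di": "B", "Ed": "A"}
edges = [("Ana", "Bo"), ("Ana", "Cy"), ("Bo", "Di"),
         ("Cy", "Ed"), ("Di", "Ed"), ("Ana", "Ed")]

# Build an undirected adjacency map.
neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

# For each person, count ties that cross group lines: a crude
# inclusion indicator of the kind a real network analysis refines.
for person in sorted(group):
    ties = neighbors[person]
    cross = sum(1 for n in ties if group[n] != group[person])
    print(f"{person}: {cross}/{len(ties)} ties cross group lines")
```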

Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, something in between? 

It also raises other questions, like does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue? 

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.

Is It Time for a U.S. Department of Science?



Essay by Anthony Mills: “The Biden administration made history earlier this year by elevating the director of the Office of Science and Technology Policy to a cabinet-level post. There have long been science advisory bodies within the White House, and there are a number of executive agencies that deal with science, some of them cabinet-level. But this will be the first time in U.S. history that the president’s science advisor will be part of his cabinet.

It is a welcome effort to restore the integrity of science, at a moment when science has been thrust onto center stage in public life — as something indispensable to political decision-making as well as a source of controversy and distrust. Some have urged the administration to go even further, calling for the creation of a new federal department of science. Such calls to centralize science have a long history, and have grown louder during the coronavirus pandemic, spurred by our government’s haphazard response.

But more centralization is not the way to restore the integrity of science. Centralization has its place, especially during national emergencies. Too much of it, however, is bad for science. As a rule, science flourishes in a decentralized research environment, which balances the need for public support, effective organization, and political accountability with scientific independence and institutional diversity. The Biden administration’s move is welcome. But there is risk in what it could lead to next: an American Ministry of Science. And there is an opportunity to create a needed alternative….(More)”.

How volunteer observers can help protect biodiversity


The Economist: “Ecology lends itself to being helped along by the keen layperson perhaps more than any other science. For decades, birdwatchers have recorded their sightings and sent them to organisations like Britain’s Royal Society for the Protection of Birds, or the Audubon Society in America, contributing precious data about population size, trends, behaviour and migration. These days, any smartphone connected to the internet can be pointed at a plant to identify a species and add a record to a regional data set.

Social-media platforms have further transformed things, adding big data to weekend ecology. In 2002, the Cornell Lab of Ornithology in New York created eBird, a free app available in more than 30 languages that lets twitchers upload and share pictures and recordings of birds, labelled by time, location and other criteria. More than 100m sightings are now uploaded annually, and the number is growing by 20% each year. In May the group marked its billionth observation. The Cornell group also runs an audio library with 1m bird calls, and the Merlin app, which uses eBird data to identify species from pictures and descriptions….(More)”.

What Data About You Can the Government Get From Big Tech?


Jack Nicas at the New York Times: “The Justice Department, starting in the early days of the Trump administration, secretly sought data from some of the biggest tech companies about journalists, Democratic lawmakers and White House officials as part of wide-ranging investigations into leaks and other matters, The New York Times reported last week.

The revelations, which put the companies in the middle of a clash over the Trump administration’s efforts to find the sources of news coverage, raised questions about what sorts of data tech companies collect on their users, and how much of it is accessible to law enforcement authorities.

Here’s a rundown:

What sorts of data do the companies have? All sorts. Beyond basic data like users’ names, addresses and contact information, tech companies like Google, Apple, Microsoft and Facebook also often have access to the contents of their users’ emails, text messages, call logs, photos, videos, documents, contact lists and calendars.

Is that data available to law enforcement? Most of it is. But which data law enforcement can get depends on the sort of request it makes.

Perhaps the most common and basic request is a subpoena. U.S. government agencies and prosecutors can often issue subpoenas without approval from a judge, and lawyers can issue them as part of open court cases. Subpoenas are often used to cast a wide net for basic information that can help build a case and provide evidence needed to issue more powerful requests….(More)”.

Digitalization as a common good. Contribution to an inclusive recovery


Essay by Julia Pomares, Andrés Ortega & María Belén Abdala: “…The pandemic has accelerated the urgency of a new social contract for this era at national, regional, and global levels, and such a pact clearly requires a digital dimension. The Spanish government, for example, proposes that by 2025, 100 megabits per second should be achieved for 100% of the population. A company like Telefónica, for its part, proposes a “Digital Deal to build back better our societies and economies” to achieve a “fair and inclusive digital transition,” both for Spain and Latin America.

The pandemic, and the ways of coping with and overcoming it, has also highlighted and aggravated digital and connectivity gaps and divides of many kinds: between countries and regions of the world, between rural and urban areas, between social groups (including income- and gender-related gaps), and between large and small companies. These gaps need to be addressed and bridged in the new social digital contracts, because the combination of digital divides and the pandemic amplifies social disparities and inequalities in various spheres of life. Digitalization can contribute to widening those divides, but also to overcoming them.

Common good

In 2016, the UN, through its Human Rights Council and General Assembly, qualified access to the internet as a basic fundamental human right, from which all human rights can also be defended. In 2021, the Italian Presidency of the G20 has set universal access to the internet as a goal of the group.

We use the concept of common good in a non-legal but economic sense, following Nobel laureate Elinor Ostrom, who refers to the nature of use rather than of ownership. In line with Ostrom, digitalization and connectivity as a common good exhibit three characteristics:

  • It is non-rivalrous: Its consumption by anyone does not reduce the amount available to others. (In digitalization and connectivity this holds only to an extent, since the service relies on huge but finite storage and processing centers, and on network capacity in both the access and backbone networks; a distinction must also be made between the content transmitted and the medium used.)
  • It is non-excludable: It is almost impossible to prevent anyone from consuming it.
  • It is available, more or less, all over the world….(More)”.