☄️ Why We Misjudge the Biggest Threats to Humanity
New research reveals how people systematically misallocate resources when faced with catastrophic risks
Some decisions are so important that getting them wrong can jeopardize the future of our entire species. Think asteroid collisions, deadly pandemics, nuclear war, and runaway AI. These kinds of existential risks vary in their probability, predictability, and controllability, but all of them are capable of producing catastrophic consequences.
We like to hope that we can assess these kinds of risks rationally and take the best course of action to anticipate or prevent them. After all, the human species has already made it this far, right?
However, it’s worth remembering that humans have not been around very long in the grand scheme of life on Earth, and when it comes to rare but disastrous events, we don’t really have any room for error. One of our biggest problems, according to new research, is that our intuitions about how to prepare for disaster scenarios may be dangerously miscalibrated.
Scientists from Princeton have revealed that people use the wrong mental models when deciding which disaster interventions should be prioritized. Instead of treating life-or-death trade-offs as high-stakes choices with existential consequences, we approach them more like ordinary, everyday decisions.
💰 How we should allocate resources to save humanity
Unlike many of our everyday decisions in which gains and risks are additive, existential risk is governed by multiplicative reasoning. When you're facing multiple independent existential risks, the probability that humanity survives all of them is not the sum of the individual survival probabilities—it’s the product.
Why? Because each of these risks must be avoided in order for humanity to survive. Just one failure means game over.
An example of a less existential problem space would be keeping our home in good shape: we might attend to a leaky roof, consider a new water heater, work on unclogging the drains, and so on. Each of these contributes to minimizing the total potential harm to our household, and the benefits of each intervention simply stack on top of each other.
On the other hand, if we’re talking about surviving possible extinction events, we’re less interested in lowering total harm and more interested in minimizing the chance that any individual existential disaster happens. If any one event happens, the other events no longer matter since we’re already dead. Therefore, we need to worry about the product of the probabilities of surviving each disaster event. So if we have a 90% chance of surviving each of three disaster events (e.g. rogue AI, a pandemic, and an asteroid impact), the chances that we survive all of them would be 72.9% (0.9*0.9*0.9), and we want to raise that final number as high as possible.
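To make that arithmetic concrete, here is a minimal Python sketch (the 90% figures are the illustrative numbers from the example above, not data from the study):

```python
from math import prod

# Hypothetical survival probabilities for three independent existential risks
# (e.g., rogue AI, a pandemic, an asteroid impact).
survival = [0.9, 0.9, 0.9]

# Humanity must survive *every* risk, so the overall chance is the product,
# not the sum or the average of the individual probabilities.
print(f"Chance of surviving all three: {prod(survival):.1%}")  # 72.9%
```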
In a recent study, researchers used this principle to build a model of how to optimally divide a fixed intervention budget across different risk types, each defined by:
A baseline survival probability (how likely we are to survive a particular disaster event without investment)
An addressability score (how responsive the risk is to our investments)
Their mathematical modeling showed that risks with low survival and high addressability deserve disproportionately more attention. It’s better to lift a very low survival probability by a few points than to shave a little more off a risk that is already small.
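The paper’s model isn’t reproduced here, but the underlying idea can be sketched with a toy example. In the snippet below, each risk’s survival probability is assumed to rise from its baseline toward certainty at a rate set by its addressability (an illustrative response curve and made-up numbers, not the study’s actual model), and a simple grid search finds the budget split that maximizes the product of the survival probabilities:

```python
from math import exp, prod

# Toy risk profiles: (name, baseline survival, addressability).
# Both the numbers and the response curve below are illustrative assumptions.
risks = [
    ("rogue AI", 0.60, 0.030),   # low survival, very responsive to funding
    ("pandemic", 0.85, 0.030),   # safer, equally responsive
    ("asteroid", 0.95, 0.005),   # already safe, hard to move
]

BUDGET = 100  # e.g., $100M, allocated in $1M steps

def survival(baseline, addressability, spend):
    """Assumed response curve: spending closes the gap to certain survival
    with diminishing returns."""
    return 1 - (1 - baseline) * exp(-addressability * spend)

# Exhaustive search over integer splits of the budget across the three risks,
# maximizing the *product* of the survival probabilities.
best_prob, best_split = -1.0, None
for a in range(BUDGET + 1):
    for b in range(BUDGET + 1 - a):
        split = (a, b, BUDGET - a - b)
        p = prod(survival(base, addr, x)
                 for (_, base, addr), x in zip(risks, split))
        if p > best_prob:
            best_prob, best_split = p, split

for (name, _, _), x in zip(risks, best_split):
    print(f"{name:>8}: ${x}M")
print(f"Overall survival: {best_prob:.1%}")
```

Under these made-up numbers, the low-survival, highly addressable risk soaks up most of the budget while the already-safe, hard-to-move risk gets almost nothing: exactly the kind of lopsided split that naive “fairness” misses.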
But do people intuitively follow this strategy?
To find out, the researchers recruited 783 participants and gave them a clear task: imagine you’re the head of an organization trying to prevent human extinction. You have $100 million to distribute across three existential threats. How would you allocate your funds?
Each threat came with its own baseline survival probability and addressability score. The goal was to minimize the chance that any of the three disasters would occur.
The results were clear:
People spread their money too evenly: Instead of focusing on the most dangerous and addressable risks, participants favored naive “fairness,” dividing funds more equally across disaster events than was optimal.
People appropriately prioritized addressability but not survival: Participants generally invested more in risks where their money would go further (a good instinct), but they didn’t sufficiently weight the baseline survival probabilities.
In a second experiment, the researchers wanted to know whether more detailed task instructions would help people reach more optimal decisions. They asked over 1,200 new participants to solve the same allocation problems, but with modified instructions that clarified survival probabilities and how investments directly impacted each scenario. Despite these clearer framings, people still didn’t allocate resources optimally.
Instead, participants relied on simpler, informal heuristics such as “invest more when a risk is more fixable.” This highlights a dangerous blind spot in how we naturally evaluate risks, even when we have all the right information in front of us.
We struggle to intuitively reason about multiplicative threats in which one failure means the end of humanity. The researchers suggest the problem arises when we look at the odds of each possible disaster individually and then try to combine them in our heads. As they put it in their paper: “while people are relatively good at choosing individual items for their grocery carts given each item’s price per ounce (a linear allocation problem), people are less good at the nonlinear allocation problem of choosing individual existential risk interventions based on each intervention’s risk reduction per dollar spent”.
Presenting decision-makers with overall risk profiles and the total effects of multiple investments may be one way around this. Instead of narrowing attention on individual problems, we can talk about how different decisions impact the general landscape of compounding vulnerabilities for issues like economic stability, emergency planning, vaccine deployment, AI preparedness, nuclear safety, and so on.
If we want to improve public decision-making, we need to design tools, interfaces, and education systems that compensate for our cognitive blind spots, especially when the stakes are at their highest.
⭐️ Takeaway tips
#1. Don’t assume equal means fair in the world of existential risk
Spreading resources equally across threats may feel like a balanced investment strategy, but when survival depends on multiplying different probabilities, this strategy can backfire. The problems that deserve the most attention are those that are both the most addressable and the most threatening.
#2. Avoid over-focusing on what's easy to fix
We tend to over-focus on the most addressable issues, probably because we love the feeling of making smooth progress. But real improvement often lies in facing the challenges that feel tough or uncertain. This applies as much to our daily lives as it does to disaster preparedness, so try allocating some energy this week to areas of your life you usually avoid. If you’ve been neglecting any priorities such as social connection, physical health, self-care, or financial security, consider whether you can invest more time in that area over the next few days.
#3. Consider whole strategies, not just individual choices
We all have some sociopolitical issues we care about more than others because they lie closer to our hearts in some way. It could be public health, environmental issues, or threats to global security. But sometimes, our narrow focus can draw our attention away from the bigger and more meaningful picture. Instead of evaluating one risk at a time, we may be better off considering the combined impact of various risk intervention strategies. This shift in framing can help align our decisions with long-term safety and security.
“Dread of disaster makes everybody act in the very way that increases the disaster.”
~ Bertrand Russell
Nice work here, Erman,
Humans are just plainly not very good with numbers. When it comes to judging the probability of events, our perception is extremely skewed.
We also fall victim to a number of fallacies, as I noted:
Deaths caused by horrific accidents, homicide, or wars, for instance, are erroneously seen as more probable than deaths by diabetes or poor diet because media coverage favors the former. Horrific accidents draw more eyeballs, triggering more reporting, and thus are more “available” than they otherwise should be, so our brains assume they are more common than they truly are.
This fact also shapes how we see the future:
When our brain compares the past with the present, it sets a negative portrayal of the present in sharp relief against a rose-colored version of the past. Our minds convince us that the present is terrible and the past was better. This creates the illusion of a downward trajectory and leads us to anxiously fear the future.