Uncertainty isn’t an excuse not to work on existential risk
The future can be hard to predict – that doesn’t mean we should give up on trying to improve it.
Longtermism is, roughly speaking, the view that we should prioritize actions that will improve the future over more contemporary concerns. (You can find a more detailed explanation of the concept here).
One critique of Longtermist thinking that I hear pretty frequently goes something like this:
“The future is so uncertain that the whole business of trying to systematically improve it is misguided. We just can’t know anything about the ultimate consequences of our actions – so we should keep our focus on helping people in the here and now.”
There’s something to be said for this line of reasoning, and in general, I think it’s worth maintaining a healthy level of skepticism of proposals for Longtermist projects that seem overly vague, or where all of the impact is concentrated in supposed second or third order effects. But overall, while epistemic humility is a valuable trait, it definitely doesn’t lead me to think we should give up on working on things like existential risk reduction.
Some things seem pretty clear – especially when it comes to pandemic prevention
Say you’re someone who only takes an action when you are highly confident that the action will result in a lot of good being produced. Looking out over the Longtermist landscape, you might be feeling a little concerned. I’m pretty unfamiliar with how Artificial Intelligence research is conducted, so I’m a little out of my depth saying this, but my outsider’s perception is that for a lot of AI safety work, it’s super unclear whether it’s worth anything.1 People also sometimes talk about trying to reduce “Great Power Conflict,” such as a war between China and the United States. I think it’s even less clear exactly what that would entail, or whether interventions on that front would be likely to help.
But I don’t think that the existence of uncertainty means we should just give up on helping the future. It seems to me that there are pretty clear areas in which we can act.
The best example of such an area, in my view, is pandemic preparedness. The thesis here is clear:
- Pandemics have occasionally wreaked havoc on human society (Covid, of course, but also the Black Death, which killed around two in every five Europeans)
- We don’t spend much money trying to prevent them
- If we spent a lot of money to prevent them along the lines of what biosecurity experts are proposing, we would probably be better prepared
This seems extremely reasonable to me! There are no hard-to-justify assumptions here, or logical leaps.
Further, what the experts are proposing on biosecurity seems really quite good to me. The Apollo Program for Biodefense (may it rest in peace) was a very well thought out proposal, with a ton of concrete, actionable items. I think we should feel confident that if enacted, it would reduce the odds of a devastating pandemic. By how much? I don’t know! But I feel strongly that the effect wouldn’t be zero.
The existence of existential risk from pandemics, and the fact that these risks seem clearly mitigable, ought to take the wind out of the sails of the uncertainty argument against Longtermist action.
Against Uncertainty As a Reason for Inaction
More broadly, though – and I feel strongly about this – it’s worth saying that the mere existence of uncertainty is not a reason for inaction. We’re uncertain about everything – there’s never any way to know for certain whether an intervention will do good or do harm before it happens. Obviously, the more certainty you can have, the better, all else equal, but some uncertainty will always be unavoidable.
And using uncertainty as a justification for inaction can have devastating consequences. We’ve seen this with the actions of the FDA and CDC over the last two years. There wasn’t “enough evidence” that prioritizing first doses of the vaccine would save lives, so we didn’t do it. There wasn’t “enough evidence” that rapid tests detected infectious people, so the FDA kept them illegal.
If we want to be better at decision making than the FDA and CDC are – and we should, because frankly that is a really low bar – we need to avoid leaning on uncertainty as an excuse not to act. Pointing to concern about “uncertainty” allows us to avoid confronting the reality that our moral obligations may point us in very different directions than our current careers. While this is very convenient, convenience isn’t the same as truth.
Personally, when I first got into Longtermism, I was initially skeptical of working on or devoting too many resources to Artificial Intelligence safety, in large part because the uncertainties seemed so high. But while I still think uncertainty in AI alignment work is high, I’ve come around to the view that this isn’t all that relevant. There just is no binary when it comes to “is this too uncertain to work on?” – I don’t think that even makes sense as an analytical framework.
What you need to do is take the actions with the highest expected value (EV). Part of your expected value calculation should – obviously – account for the things you’re uncertain about, as best you can. If this leads you to think you should take an action that you’re highly uncertain will do anything productive at all, but might yield huge benefits, then so be it – that’s what you ought to do. You should always stay careful about unintended consequences, and be aware of the cognitive biases that lead you to certain conclusions, but unknowns are part of every decision, and some of the highest EV actions available to you will likely also be some of the most uncertain. Don’t let that deter you from taking them.
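To make the arithmetic concrete, here’s a toy sketch in Python. The probabilities and payoffs are entirely made up for illustration – the point is only that a low-probability, high-payoff action can come out ahead of a near-certain, modest one once you multiply things through:

```python
# Toy expected-value comparison. All numbers are invented for illustration.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities should sum to 1."""
    return sum(p * v for p, v in outcomes)

# A "safe" action: almost certain to produce a modest benefit.
safe = expected_value([(0.95, 10), (0.05, 0)])      # 9.5

# A "risky" action: probably accomplishes nothing, but might yield huge benefits.
risky = expected_value([(0.98, 0), (0.02, 1000)])   # ~20

# Despite the far greater uncertainty, the risky action has the higher EV.
print(f"safe EV = {safe}, risky EV = {risky}")
```

Of course, real-world estimates are nowhere near this clean – the probabilities themselves are uncertain – but the structure of the reasoning is the same: uncertainty goes into the calculation rather than vetoing it.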
And if you’re not convinced of this, you can always go work on pandemic preparedness.
1. And Artificial Intelligence capabilities research has real, massive downside risks, which makes passing the “only take an action if you’re sure it’ll do a lot of good” test even harder.
Agreed about preparedness. It seems that one of the best things we can do for the future is reduce uncertainty about how to help people generally by learning how best to help people in the here and now. There’s lots of stuff we can be trying now rather than speculating about the number of people in the future and what might help them.
I would say that your argument about uncertainty is at its strongest when applied to (a) specific mitigation strategies that are common to multiple risks (e.g., getting off Earth), (b) strategies that increase future resources for dealing with whatever risk comes along (e.g., reducing regulation of business to increase productivity growth over the next 100 years), and (c) mitigation strategies that attack the problems that turn other risks from disasters into existential risks (e.g., North Korea attacks South Korea vs. North Korea nukes South Korea, causing worldwide nuclear war).