Moral Uncertainty and Why We Should Care About Existential Risk
A post on a niche topic that I think is extremely important
Longtermism is the philosophy that we should care a great deal about the future of humanity, and prioritize addressing risks that could destroy us, like pandemics and climate change, over other concerns (you can read a quick explanation of Longtermism here). If you weren’t persuaded by the emotional case for caring about existential risks and the long-term future, there’s another line of argument that I want to present. This is the case for Longtermism as a response to “moral uncertainty.” It’s somewhat esoteric, but it’s the argument that I personally find most persuasive.
What is Moral Uncertainty?
When we make decisions under uncertain conditions, it’s usually best to think in terms of expected value: the value of each outcome an action could produce, good or bad, weighted by how likely that outcome is to occur. We do expected value calculations of this sort all the time in our lives; they’re just usually not explicit. The small probability we’ll choke on that hotdog gets outweighed by the large probability we’ll enjoy it; we check the weather and decide the 70% chance of rain means it’s not worth going for a hike after all. We know what we value in our own lives, and while figuring out how to maximize what we value can be empirically difficult, conceptually it’s pretty easy.
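To make one of those implicit calculations explicit just once, here’s a minimal sketch of the hike decision as an expected value calculation; the utility numbers are made up purely for illustration.

```python
# A minimal sketch of an everyday expected value calculation.
# The utility numbers below are invented purely for illustration.

def expected_value(outcomes):
    """Sum of probability * value over every possible outcome."""
    return sum(p * v for p, v in outcomes)

# Say a rained-out hike is worth -10 to you, a clear-sky hike is worth +5,
# and staying home is a flat 0. With a 70% chance of rain:
go_hiking = expected_value([(0.7, -10), (0.3, 5)])  # -5.5
stay_home = expected_value([(1.0, 0)])              #  0.0

print(go_hiking, stay_home)
# Staying home comes out ahead, which is why the 70% chance of rain
# means the hike isn't worth it after all.
```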
But what do we do if we’re uncertain about what to value? Philosophers have spent thousands of years trying to figure out what “The Good” is, and they haven’t exactly reached a consensus. Some people don’t even believe you can define an abstract “good” at all – moral antirealists reject the concept of objective morality entirely.
So what do we do?
One answer about what to do here might be what the philosopher Will MacAskill calls “fanaticism”: you pick the moral theory that you have the highest credence in, and then you follow that theory’s guidance in all cases. But this causes some pretty obvious problems. What if:
You have nine moral theories you find plausible
You have the highest credence in theory 1, which you think has a 20% chance of being the correct moral philosophy
You think each of the other eight theories has a 10% chance of being correct
Faced with a choice between doing Action A and Action B, theory 1 says to do Action A, and all of the other eight theories say to do Action B
According to your own estimates, there’s an 80% chance Action B is the better choice. But fanaticism leads you to pick Action A. This seems clearly wrong.
You could modify this fanaticism so that, instead of picking one theory and using it for all your decisions, you only do Action A if you think it’s more likely than not to be better than Action B. That solves the problem above. But what about weights? If the theory you have 20% credence in says doing Action B would be catastrophically bad, whereas the other theories say Action B does only the slightest bit of good, it would seem absurd to pick Action B.
So maybe a better answer is to apply the same logic of expected value that we used above. Here’s MacAskill, explaining what that would look like:
Supposing we really do want to take moral uncertainty under account, how should we do that? In particular, it seems like given the obvious analogy with decision making under empirical uncertainty, we should do something like expected value reasoning where we look at a probability that we assign to all sorts of different moral views, and then we look at how good or bad would this action be under all of those different moral views. Then, we take the best compromise among them, which seem to be given by the expected value under those different moral views.
This seems pretty good! There are a ton of things left to work out, but applying the logic of expected value to moral uncertainty makes a lot of sense at first blush.1
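To see how that compromise might work in the toy example above, here’s a minimal sketch (my own illustration, not MacAskill’s formalism). The credences are the ones from the nine-theory example; the moral “values” each theory assigns to the actions are invented just to make the arithmetic concrete.

```python
# Expected value reasoning applied to the nine-theory example above.
# Credences (20%, then 10% each) come from the example; the values each
# theory assigns to the actions are invented purely for illustration.

def expected_moral_value(credences, values):
    """Weight each theory's verdict on an action by your credence in that theory."""
    return sum(c * v for c, v in zip(credences, values))

credences = [0.20] + [0.10] * 8  # theory 1, then the eight other theories

# Case 1: theory 1 mildly favors Action A; the other eight mildly favor Action B.
action_a = [1] + [0] * 8
action_b = [0] + [1] * 8
print(expected_moral_value(credences, action_a))  # 0.2
print(expected_moral_value(credences, action_b))  # 0.8 -- B wins, unlike under fanaticism

# Case 2: theory 1 says Action B would be catastrophically bad (-1000), while the
# other theories say it does only the slightest bit of good (+1).
action_b_catastrophic = [-1000] + [1] * 8
print(expected_moral_value(credences, action_b_catastrophic))  # -199.2 -- avoid B
```

In the first case the expected value tracks the 80% majority; in the second, the sheer size of the potential harm under theory 1 dominates, which is exactly why it would seem absurd to pick Action B despite eight theories mildly favoring it.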
A Consensus Good Thing
If you’re trying to do the most good, but you’re not sure what “The Good” is, I think it makes the most sense, intuitively, to pick an objective that a wide range of the most plausible moral theories endorse as “good,” and that at least some of those theories think of as “really, really good.”
Reducing the odds of human extinction in the next 100 years seems to me like the best possible candidate for such an objective. In fact, it seems like “we should try to prevent the end of life on earth” is maybe the closest we can get to a universally acceptable moral goal.
If you’re a utilitarian who only cares about the near term: it’s pretty clear that we are underprepared for threats like pandemics, and it would be good to have more people working on biosecurity and nuclear deproliferation.
If you’re a utilitarian who cares about the near term and long term equally: most of the potential utility lies in the future, and we should want to make sure all that future happiness actually happens.
If you’re an old-fashioned virtue ethicist, who wants to develop their character traits: it certainly seems to me that reducing existential risk is an honorable and noble goal. Take a look at Aristotle’s list of virtues – I think it’s pretty obvious that working on existential risk reduction checks most of those virtuous boxes.
If you’re a deontologist, who never wants to break a moral rule: good news! There’s plenty of Longtermist work you can do that will never involve telling even the smallest lie, or breaking any of the other traditional deontological rules. And when it comes to “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law” – Kant’s Categorical Imperative – well, it definitely seems to me that it would be good if people were universally more concerned about existential risk. Why would a deontologist have anything but praise for someone who picks a career that tries to address the tail risks of climate change by promoting innovation in renewable energy?
Those are the three most prominent ethical theories in Western moral philosophy, but you can keep going down the line here.
Maybe you’re a fan of Tim Scanlon’s “Contractualism,” described in his 1998 book “What We Owe to Each Other.” I think one thing we pretty clearly owe each other is a collective responsibility to promote human flourishing – that can’t happen if an asteroid destroys the planet.
Maybe you’re a pure egoist, who just wants to maximize their own hedonic well-being. In the long term, a big part of leading a satisfied life is feeling like you’re doing meaningful work, and that you’re a part of something bigger than yourself. In my personal experience, working to make sure humanity doesn’t go extinct has absolutely given me a sense of purpose, and made my life more satisfying.
Maybe you’re an intuitionist, who just goes with their gut. I’d be really surprised if your intuition didn’t tell you that human extinction is bad – the vast majority of people definitely have a gut instinct that preserving humanity is a good and important thing.
This is more facetious, but maybe you’re a member of a religious group which believes the world needs to be in a certain specific condition for God to return, or the Messiah to come. Obviously, that can’t happen if nuclear winter kills us all – so you should be concerned about existential risk as well.2
I’m not going to claim that “working to prevent threats like pandemics from wiping us all out” is seen as “good” by literally every single moral theory. There are all sorts of eccentric moral philosophies. Some people, including anti-natalists like David Benatar, think that existence almost always contains more bad than good. They see bringing people into the world as bad, even if those people seem like they’re living happy lives.
But overall, it seems like the vast majority of plausible moral philosophies think that preventing the extinction of conscious life on earth is a very, very good thing to work on. If you want to make sure you’re doing something really good and important with your life, choosing a career that helps reduce existential risks is the best place to start.
1. Questions of infinite ethics really mess up this meta-ethical expected value calculation, as do questions of incomparability. MacAskill goes into all of that in his interview, and I’ll do another post about it soon. But for now, we’ll leave it aside.
2. This example is inspired by Mike Huckabee, whose support for the Israeli government is due to his belief that the End of Days cannot occur without the gathering of the Jews in Israel.