Since I’m going to spend a lot of time talking about Longtermism on this blog, I want to explain what I mean by the term. This isn’t super original – other people have written about Longtermism before. But I don’t love the explainers that are currently out there, so I’m going to take my own crack at it. Feel free to skip this if you’re already familiar, as I’m sure a lot of you are.
Roughly speaking, Longtermism centers on the following three ideas:
Consequentialist ethics. Consequentialism is the moral philosophy that says the morality of an action is determined by its consequences, rather than by whether that act breaks some moral “rule.” You don’t have to be a consequentialist to care about the longterm future, but in practice, the vast majority of Longtermists are consequentialists.
Temporal impartiality. Longtermists believe that just as our moral obligation to help another person shouldn’t depend on their race, their gender, or where they live on the globe, it also shouldn’t depend on when they exist in time. If, for example, you are given a choice between causing some amount of suffering to a person alive today and causing that same amount of suffering to a person who will exist in ten years, Longtermists think you should be indifferent between the two options.
The hinge of history. Longtermists believe that humanity has enormous potential. Economic growth could bring us to a future without scarcity. Advances in technology could, in the coming centuries, make humanity a species that lives on more than one planet and thrives for thousands of years to come. But the next century presents enormous challenges. If we aren’t careful, our collective human potential may never be realized. Nuclear weapons, climate change, pandemics, and even the growth of artificial intelligence could all cause societal collapse in the coming decades, and might even result in human extinction. Thus, the next few decades represent a “hinge” – a time when the future is in flux. Utopia, total disaster, and every outcome in between are all in play. This magnifies the significance of the choices we make now.
These premises have significant policy implications. In particular, they suggest that society should devote far more resources to reducing so-called “existential risks” – threats that could cut off humanity’s incredible potential forever. Further, they imply that individuals have a strong moral obligation to use the resources at their disposal, and their careers, to protect civilization from those same threats.
Prominent Longtermists include the philosophers Toby Ord and Will MacAskill, as well as Sam Bankman-Fried, the CEO of FTX. Longtermists generally identify as “Effective Altruists” – members of a closely related intellectual movement that overlaps heavily with Longtermism and has, over time, come to be more and more associated with it. (I’ll write more about effective altruism (EA) in future posts.)
If you’re interested in reading more about Longtermism, I strongly recommend Max Roser’s recent piece on the subject for Our World in Data. For a comprehensive introduction to the topic, I recommend The Precipice: Existential Risk and the Future of Humanity, by Toby Ord. It’s probably the most important book I’ve ever read – if you want to read it, send me an email at outoftheordinary@gmail.com, and I’ll make sure you get a copy.
Over the next few days and weeks, I’m going to try to make the case for Longtermism from a few vantage points. First, I’ll argue for the appeal of Longtermism from the perspective of pure intuition. Second, I’ll make the case that it is empirically compelling for those who hold consequentialist ethics. And finally, I’ll argue that it is the most attractive philosophy under conditions of moral uncertainty – that is, when we aren’t sure which moral theory is correct.
For now, I’ll leave you with a (tweet-length) summary:
“Longtermism prioritizes ensuring, above all else, that humanity’s longterm future is prosperous. Longtermists try to identify the most effective ways to prevent pandemics and nuclear war from causing civilization-wide catastrophe, and often choose careers fighting those threats.”
Seems pretty good to me…