14 Comments
Jul 11, 2022 · Liked by Simon Bazelon

Did you read The Dawn of Everything? The thesis seems to be that we can rethink how we approach all of these notions, including political and economic systems, and that this would ideally lead to a global justice movement. Strongly recommend it if you have not.

Jul 11, 2022 · Liked by Simon Bazelon

“ Second, I think that the massive problems with infinite ethics perhaps point to this kind of moral philosophy being sort of farcical. Who are we to think we can figure out the Good with just logic, reason, and a bit of math? Maybe what we need is to think less about these issues with our head, and more about them with our guts, or with our hearts. ”

After going through 4 years of a philosophy degree and spending a lot of time reading/lurking in EA and rationalist spheres online, this is where I’ve landed. I think taking a hyper-logical, highly quantitative approach to ethical philosophy is misguided. Ethics is more like an art than a science, and so may never be boiled down to elegant, clear, general principles, or rid of tensions and fuzziness (certainly not any time soon). Quantitative reasoning can be instrumentally useful in some cases of applied ethics, such as when assessing the consequences of a policy, but that’s as far as it goes.

I've been thinking about these issues for a long time, and it's really cool to see someone else address them and point out some things I hadn't read about.

I do think these conversations would benefit from a clearer axiomatization, so we know what "infinity" really means. As it stands, statements like "(.00001) x (negative infinity)" don't really make sense; pinning down the axioms would let us be clearer about what such statements actually mean.
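For a quick illustration of why the axioms matter, here's a sketch using IEEE-754 floats as one possible stand-in for the extended real line (just an illustrative convention on my part, not the only choice):

```python
# Treat IEEE-754 floats as one candidate axiomatization of the
# extended reals (an illustrative assumption, not the only choice).
pos_inf = float("inf")
neg_inf = float("-inf")

print(0.00001 * neg_inf)   # -inf: scaling by a tiny probability changes nothing
print(pos_inf + neg_inf)   # nan: mixed infinite outcomes have no well-defined sum
print(pos_inf > pos_inf)   # False: two infinite expected values cannot be ranked
```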

Also, I'd be really curious to know what counts as "conscious life" to you!

PS: There's a short essay in Marina Keegan's book "The Opposite of Loneliness" called "Putting the “Fun” Back in Eschatology" that is relevant to the emotional appeal of longtermism, I think.

Jul 11, 2022 · Liked by Simon Bazelon

I'm not sure that infinite ethics is necessarily a problem for total utilitarianism, since the amount of potential value in the universe that we can affect is necessarily finite. The laws of physics ensure there's a maximum number of minds with a minimum/maximum level of wellbeing at any one time, and existence is only sustainable for a finite time. This doesn't guarantee that weird considerations involving extreme amounts of value can't dominate the calculations, but it does mean it makes no sense to put literal infinities in them.

Very interesting post, thanks for writing this.

Jul 11, 2022 · Liked by Simon Bazelon

These kinds of problems used to arise with some regularity in probability theory. Normally the issue boils down to asking about probabilities that don’t really make sense.

It’s intuitive that we should be able to assign a probability to any given outcome, but this is essentially always false unless you’re dealing with very few (think finitely many) possible outcomes.

As a baby case, suppose we have some fixed event that could happen, and we want to find out ‘how bad it would be’. Let’s say, for the sake of argument, that each moral framework M gives us a corresponding value v(M). If we’re only considering a few such M, we assign probabilities p(M) and compute a score Σ p(M)·v(M). If there are infinitely many such M, and there’s no reason there shouldn’t be here, then you can’t assign a non-zero p(M) to each one. In fact you end up computing what is essentially an integral, and there you may very well have arbitrarily large (even unbounded) v(M) and still get a finite expectation.

In practice this seems like a reasonable thing to occur. You could plausibly expect there to be some framework that assigns any given non-zero value to the outcome. The point is that you’re ultimately interested in the relative density of such frameworks: the more extreme frameworks should (a) be rare, and (b) be limits of less extreme ones.
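To make the integral picture concrete, here's a minimal numeric sketch (the exponential density and the choice of v are illustrative assumptions on my part, not anything from the comment above):

```python
import numpy as np

# Hypothetical setup: index frameworks M by a parameter x >= 0, with
# "badness" v(x) = x (unbounded) and density p(x) = exp(-x) over frameworks.
# No single framework gets positive probability, yet the expectation
# (the integral of v(x) * p(x)) comes out finite.
x = np.linspace(0.0, 50.0, 500_000)    # truncate the tail for the numerics
dx = x[1] - x[0]
p = np.exp(-x)                         # density over frameworks
v = x                                  # value each framework assigns

expected_badness = np.sum(v * p) * dx  # Riemann sum for the integral of x*exp(-x)
print(expected_badness)                # ~1.0: finite despite unbounded v
```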

Maybe infinite wrongness is impossible because infinite harm is impossible? If you define wrongness in terms of some notion of damage, it seems like you dodge infinite wrongness for free.

It sounds to me as if theories that ascribe infinite values are like Fundamentalist denominations of <your least favourite religion> in that they deny the possibility of coexistence with anyone who disagrees with them on these values. Therefore, you are forbidden to think that such a theory has any probability of being right except 0 or 1.

Yes, I know perfectly well this is a rationalisation. (Besides, by whom are you forbidden? Old Immanuel wossname?)

On a purely theoretical level, what about the following solution:

1) Define a continuous sequence of ethical beliefs that converges to a given fanatical belief (for every fanatical belief);

2) Define a prior with suitable properties (basically, tails going to zero fast enough);

3) Instead of the unweighted expected value of the ethical beliefs, calculate the expected value weighted by the chosen prior.

This re-introduces subjectivity, but it's much better than picking one particular set of beliefs, so I'd argue it's some progress. What is nice about this solution is that while it does eliminate purely absolutist views, it still gives positive weight to views that are 'very' fringe but not infinitely so.
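As a rough sketch of step 3, here's what the weighting could look like under a hypothetical parameterization of how 'fanatical' a belief is (the parameter, the quadratic values, and the exponential prior are all assumptions made purely for illustration):

```python
import numpy as np

# Hypothetical setup: parameterize beliefs by f >= 0, where larger f means
# more fanatical. v(f) is the value a belief assigns to some act (growing
# without bound), and the prior's tails go to zero fast enough.
f = np.linspace(0.0, 100.0, 1_000_000)   # 0 = moderate, larger = more extreme
df = f[1] - f[0]
v = f ** 2                               # assigned value grows without bound
prior = np.exp(-f)                       # fast-decaying tails
prior /= np.sum(prior) * df              # normalize to a proper density

weighted_value = np.sum(v * prior) * df  # prior-weighted expected value
print(weighted_value)                    # ~2.0: fringe views count, but never dominate
```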

Now, how to apply this idea practically is a different matter.

It sounds like you’re just recapitulating the problem of the Utility Monster, which is OK, but I think that framing helps us get a more useful perspective on the problem of infinite ethics (IE).

The UM theory’s proponents seem quite satisfied that they’ve proven their side right, but it seems to me that it’s just too tidy a solution to be the final statement on the matter.

For instance, I don’t think there are any real-world cases of UMs that can’t equally be solved by the mere application of some simple common sense. A baby or child can be one hell of a utility monster from their own frame of reference, but we still force bedtimes on them because their frame of reference is objectively inaccurate. Likewise, no UM has ever convinced a welfare state to give them a million dollars on the argument that the UM would “enjoy it more”; instead, we have standard benefits and limits because we recognize that despite inequalities in individual circumstances, no one policy can make up for all of them, and therefore we pick a universal benefit level that maximizes political utility and move on with our lives.

I think this points to a strong argument for going ahead and banning IEs from the discussion of longtermism: it maximizes longterm utility for us to muzzle these ethical Utility Monsters. In fact, like any bad dog, we would be perfectly permitted to *punish* IEs for misbehaving: simply ignore their outrageous claims.

And likewise, this still holds for IEs that try to evade the ban by postulating Large-But-Not-Infinite ethics. They know what they’re trying to get away with - or maybe they don’t! - but either way, we can ignore them and resort to a real-world frame of reference, because the presence of the outrageous claim is evidence that its own frame is broken for the purposes of utility maximization.
