newnumber6 ([personal profile] newnumber6) wrote 2008-02-29 03:34 pm

A Selfish, Atheistic Argument against Suicide (long)

Just a little thought experiment.

Premise 1: Multiple universes exist, one for every possibility. This is like you see in SF shows all the time - at every juncture (from personal choices right down to particle interactions), the universe branches into two (or more) possibilities, creating a new timeline for every possible outcome.

Premise 2: Identity (for people) is a combination of two one-way relations (was and will be), which hold continuously from any moment X to any moment Y in the future (for will be) or past (for was), where the state of the person at the later moment is directly dependent on the earlier one. That is, I was me at 5 if the state of me now is the result of a continuous set of changes from me at 5 until now, in my universe. I will be me at +5 years if my state 5 years from now in any one universe is the result of a continuous set of changes over those 5 years in that particular universe. This allows multiple mes to 'will be' at +5 years, but they cannot claim identity with each other: although they share a was connection, they don't share a will be.
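
If it helps to make the premise concrete, here's a minimal sketch in Python (the class and every name in it are mine, purely illustrative, not part of the premise) of 'was' as ancestry in a branching tree of moments; 'will be' is just the same relation read in the other direction:

```python
# Minimal sketch of Premise 2. Each "moment" of a person records which
# moment it continuously evolved from; 'was' is the ancestor relation.

class Moment:
    def __init__(self, state, parent=None):
        self.state = state    # the person's full physical/mental state
        self.parent = parent  # the moment this one continuously evolved from

    def was(self, earlier):
        """True if `earlier` lies on this moment's unbroken past chain."""
        m = self.parent
        while m is not None:
            if m is earlier:
                return True
            m = m.parent
        return False

# Two branches split off from the same past:
gil_at_5  = Moment("age 5")
gil_now_a = Moment("now, branch A", parent=gil_at_5)
gil_now_b = Moment("now, branch B", parent=gil_at_5)

print(gil_now_a.was(gil_at_5))   # True  - both branches share a 'was'
print(gil_now_b.was(gil_at_5))   # True
print(gil_now_a.was(gil_now_b))  # False - no identity between the branches
```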

Premise 3: There is no afterlife of a supernatural nature.

Under premise 2, it is theoretically possible that there is a non-supernatural afterlife (after death, some advanced entity is able to look back in time to the moment of death, register the mental state, and recreate it), but we will not consider it at first, to simplify matters.

Premise 4: Life is awareness
That is, you have to have some level of present or future awareness of your condition to be considered alive. If you've been braindead and in a coma with no awareness for 30 years and then die, then technically, for the purposes of the argument, you've been dead for 30 years. If you're in a coma with no awareness for 30 years, wake up, and die 5 minutes later, you were alive the whole time. What counts as a sufficient level of awareness we can leave somewhat vague. It may well include reduced mental states, dreamlike comas, or extreme brain damage that leaves you functional, or it may exclude those past some point. It doesn't strictly matter which you choose, so long as you're consistent about it. (Where necessary, we will use 'technical survival' or 'technical life' for moments where the body is alive but there is no awareness.)

Argument: Take a suicidal person. Let's call him Gil, after the hard-luck character from The Simpsons. Like all people, Gil is constantly splitting off into parallel universes. However, even though he may believe in parallel universes, Gil is extremely self-centered; really, he's only concerned with himself. Himself is his past, his present, and his possible futures. Once a possible future crosses the line into being an alternate present, it is of absolutely no value to him, because he will never experience that alternate world-line. He can take no joy in the fact that there are universes where he is happy, because he can't feel them.

When Gil decides to kill himself, he continues splitting off throughout the decision. Some of these Gils will eventually decide not to go through with it, but they're not important here. At some point, some of these Gils will reach something of a Point of No Return: they've committed an act, whether it's jumping off a building, pulling the trigger of a gun, or taking pills, which will most likely end in their death. From then on, every split will go one of two ways: either the branch continues towards death (very likely), or something extremely unlikely happens that saves Gil.

At this point I'd like to break off and discuss what I consider the "No Longshots" rule. While not technically a premise, it's an important rule of thumb, IMHO: if something is extremely unlikely to happen to Gil's benefit, it should be discounted from his deliberations. Yes, if he jumps off a building, it's remotely possible that some inconceivably rare event will not only save him but also make him rich from the ability to sell his story. Not only is it remotely possible, it's almost certain to happen to at least one of him. But Gil should discount this possibility when he's considering crossing the Point of No Return, because the chance of it happening is small compared to the chance that his life will improve in some random, highly improbable way without the suicide attempt, and much smaller than his chance of surviving the attempt in some unpleasant way (say, complete paralysis). You can't bet on a longshot; you have to assume it's not going to happen when you're making your decisions. The only time you may consider a longshot is when it is pretty well the only option left (where, more often than not, it screws you, as I'll argue).
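
Here's a rough sketch in Python of how I mean the rule to work. The function and every probability in it are invented for illustration; only the structure of the deliberation matters:

```python
# A rough sketch of the "No Longshots" rule of thumb. Numbers are made up.

def no_longshots(outcomes, cutoff=1e-6):
    """Drop outcomes too improbable to plan around; unless nothing
    else is left, in which case longshots are all you have."""
    plausible = {o: p for o, p in outcomes.items() if p >= cutoff}
    return plausible if plausible else dict(outcomes)

jumping = {
    "prolonged death from injuries":  0.9,
    "survives, severely injured":     0.099999999,
    "miracle save plus book deal":    1e-9,  # happens to *some* Gil,
}                                            # but never one to bet on

print(no_longshots(jumping))
# -> the miracle branch is discounted; only the grim outcomes remain
```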

To a similar degree, we should assume that we ourselves are not a spectacular longshot. For example, it could theoretically be true that any given human being has a 99% random chance of losing a limb every hour, and all of us have simply been on an incredible lucky streak, but we shouldn't live our lives assuming we'll lose a limb in the next hour, because nothing in our experience supports the idea.

In any event, at some point he will reach moments where there are two alternatives: Gil dies, completely, or Gil continues living a little longer. The probabilities involved do not matter, because Gil cannot experience a world in which he dies. Others can, but Gil cannot, since death is by definition the cessation of aware experience. So, in any of these 'death-or-no-death' situations, from Gil's perspective in that moment, he will not die.

This is not a good thing for Gil, because he'll survive into the next moment, where another such decision point will exist... with the same result. Once he reaches a death-or-not point, Gil will continue to survive for as long as there is any probability of survival, no matter how tiny that probability is, no matter how damaged and in pain his body or brain is. He will be aware on some level, because we have defined awareness as part of life.
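
If you want to see the conditioning at work, here's a little Monte Carlo sketch in Python. The per-moment survival odds are a number I made up; the conclusion holds for any value above zero:

```python
import random

# Sketch of the core move: condition on awareness. Past the Point of
# No Return, survival at each moment may be wildly unlikely, but a
# branch that died isn't experiencing anything, so every moment Gil
# actually experiences is, by construction, a surviving one.

P_SURVIVE = 0.5   # invented per-moment survival odds
STEPS = 10        # death-or-no-death moments to simulate

def sample_branch():
    """One branch of the multiverse: Gil either survives each moment
    or the branch ends in death."""
    path = []
    for _ in range(STEPS):
        alive = random.random() < P_SURVIVE
        path.append(alive)
        if not alive:
            break
    return path

branches = [sample_branch() for _ in range(100_000)]
aware = [b for b in branches if len(b) == STEPS and b[-1]]

print(len(aware) / len(branches))   # ~P_SURVIVE**STEPS: rare, from outside
assert all(all(b) for b in aware)   # but every aware branch is an unbroken
                                    # run of survivals; from the inside, Gil
                                    # only ever experiences "I survived again"
```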

So, are there moments where there is absolutely 0 probability of technical survival? Possibly. However, let's say that Gil is at this last moment before death. If he is having conscious experiences at this moment, yay, good for him: his mission is accomplished, he gets to die. There is no splitting into an alternate universe where he survives (he has alternate brothers who survive, but since they split off earlier, he doesn't care). He's probably in pain, but it's over.

But if he's not having conscious experiences at that moment, he will not experience it; he's already dead by our definition. The Gil at the last moment of conscious experience is the only one with any stake in what happens next. So now we have to ask: are there any moments where Gil is both conscious on some level, and there is absolutely 0 probability of survival in any of the moments that could split off from it? This is a harder question. Possibly. But when you accept multiple-universe theory, you have to accept that incredibly unlikely things happen in some universe. Every molecule in a brick wall could spontaneously quantum-tunnel through Gil's head and wind up harmlessly on the other side. Entropy could reverse itself. (No Longshots does not apply, because now _all_ aware Gils are headed into longshots... it's the only place left.) This does not benefit Gil very much, because even if entropy reverses itself, it will only do so until it is more likely for Gil to survive (even in unbearable pain, with limited ability to do anything) than for the inconceivable to continue happening (some insignificantly tiny portion of Gils would still get the full reversal, but under No Longshots they should now be discounted from our deliberations). The longer the gap between last awareness and technical death, the more likely there are outs.

So, either Gil is aware at the moment of no possibility of life, or, chances are, he will survive to another aware moment, where he faces the same problem. The key question now becomes: is it possible to have awareness at a moment from which there is absolutely 0 possibility of survival in any moment descended from it, and how likely are such moments to occur? I don't have firm reasoning behind this, but my feeling is that they're very rare.

In any event, Gil, upon reaching the Point of No Return, has doomed most of his future selves to the most prolonged death possible by the method he chose, because each of them will continue to survive as long as is possible in their own timelines. For any future Gil, a quick death is a longshot to be discounted. The ones that survive are likely to be seriously injured in some way or another, and even the ones who miraculously escape that are generally not going to be any better off than Gil was before the Point of No Return (except for a purely psychological reaction to having survived).

Any death which involves a pre-death lack of awareness is even more of a losing game, because the gap between last awareness and technical death leaves a greater number of ways to escape. Gil will have trouble killing himself, but he is very likely to succeed in giving himself a great deal of prolonged torture.

We should pause here to note that, accepting the premises, it would seem that Gil's fate awaits all of us; it applies equally well to accidental death as to suicide. However, while oblivion might be preferable to a long, depressing life, a long and probably painful death (with a strong likelihood of permanent physical incapacitation) is something best postponed for as long as possible.

Now, let's consider the hi-tech heaven approach. It's remotely possible that somewhere in the future, after, say, The Coming Singularity, technology will allow people who have the inclination to recreate the minds and bodies of those who are already dead, by looking into the past and seeing how they were at some point. This is also a possibility for Gil after the Point of No Return, at least in some cases. We must only count cases where they actually look into the past, or determine his past state by examining the molecules of his body, or something similar. It may be possible for a sufficiently advanced computer to simulate every mental state possible for every human being, but because such a mental state is not dependent on Gil's past (it works equally well if Gil never existed), Gil prior to the moment chosen cannot claim this simulation will be him, and so, like his alternate brothers, he can't experience it and it does him no good. But if there is a recreation dependent on his mental state, it is another chance to survive suicide (or natural death), and perhaps even to thrive from the experience, if he is being recreated to live in a heavenlike environment. Or is it?

The first thing to consider is how likely such a situation is to exist. It may be that it is not actually possible at all, and is just a science fiction fantasy. If that's the case, we can stop the discussion right here. However, if it is at all possible, even if startlingly improbable, then there's a chance that all living awareness flows towards it. It could be that this ability is not only possible but extremely likely, in which case chances are nobody will ever truly die without reaching it (since the more likely it is that this becomes possible, the more likely it is that any individual at any time will be resurrected). There would be no moments where death is the only alternative, because there's always the alternative of far-future resurrection.

But if it's extremely unlikely yet still possible, it will happen in some large number of universes. With a much smaller pool of universes interested in this, though, we have another thing to consider: _when_ do they choose to (or are they able to) make a copy of you? If it's before the Point of No Return (maybe they don't want people who've decided to commit suicide), we don't care, because we're discussing people who make the choice. If it's after the Point of No Return but before the death-or-no-death moments begin (say, on the way down after jumping off a building), it's still far more likely that these types of universes don't exist than that they do and will rescue you - and at absolute best you're only at a 50% chance of resurrection anyway (since at the moment they copy you, you split into two different people within the same universe: one the future, resurrected you, the other the one who continues to die). No Longshots applies. If they choose to (and are able to) take people at the last moment of awareness, you're good, to a point, though at the last moment of awareness it's possible (depending on your definition of awareness) that your mind is so damaged that awareness is all that's left of you - no memory, a radically altered personality, etc. Presumably they could recreate your body, and likewise repair your mind, but if they're going to go to all the trouble of recreating your mind, it might just be easier to grab a screen-grab of your mind from an earlier, pre-damaged point. And at any pre-damaged point, in this scenario, the probability of you splitting into resurrection-you is far less than the probability of you continuing to die. The No Longshots rule screws you again. Even the possibility of a post-human civilization resurrecting you only helps if it's relatively likely to occur, and if they're so concerned about your own sensation of continuity that they resurrect you from the very moment of death. Considering the resurrected Gil they get probably doesn't care when he was split off from, it doesn't seem very practical to grab him from the last possible moment.
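
As back-of-the-envelope arithmetic (every number below is an invented placeholder; only the structure of the calculation matters):

```python
# Back-of-envelope sketch of the copy-timing problem. Values invented.

p_civ   = 1e-6  # chance that a resurrecting post-human civilization
                # exists in your worldline and bothers to copy you
p_split = 0.5   # best case: at the copy moment you split in two, one
                # branch resurrected, the other continuing to die

p_resurrected = p_civ * p_split
print(f"wakes up resurrected: {p_resurrected:.1e}")  # ~5e-07
print(f"keeps dying anyway:   {1 - p_resurrected}")  # ~1
# Unless p_civ is large, No Longshots says to discount the heaven branch.
```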

All in all, Gil's better off living than attempting suicide, unless we can be certain post-Humans of a specific type are likely to exist.

As an aside, given these premises it would be remotely possible to construct a Schrödinger's Cat-style experiment to ensure you wind up in a future of your choice. Take a lottery win, for example. Buy a ticket, and then somehow rig a room to kill you instantly if you don't win. This is best done with a sedative that completely knocks out your awareness first. As long as your desired outcome is much more probable than the system failing somehow, you should expect to wind up in a universe where your lottery numbers were pulled; in any other universe, you won't have any conscious perceptions at all, and so can't properly perceive the loss. Of course, all of this ignores the effect your very probable death would have on others, which is not insignificant, but remember Gil is self-centered, so it would be a good deal for him.
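
A simulation sketch of the gambit (the odds below are invented; the structural point is that, conditioned on staying aware, your experiences split between wins and machine failures, which is why the reliability caveat matters so much):

```python
import random

# Simulation sketch of the quantum-lottery gambit. Odds are invented.

P_WIN  = 1e-6   # invented odds that the ticket wins
P_FAIL = 1e-4   # invented odds that the death mechanism malfunctions

def one_universe():
    if random.random() < P_WIN:
        return "won"
    if random.random() < P_FAIL:
        return "machine failed, still broke"
    return None  # dead: no experience to be had in this branch

runs = [one_universe() for _ in range(1_000_000)]
aware = [r for r in runs if r is not None]
wins = sum(r == "won" for r in aware)

print(f"aware branches: {len(aware)}, of which wins: {wins}")
# Wins : failures runs about P_WIN : P_FAIL, so the death mechanism must
# be far more reliable than the win is improbable for this to pay off.
```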

Maybe not such a good deal for the rest of us.

[identity profile] kissingdaylight.livejournal.com 2008-03-01 07:47 am (UTC)
I think when it comes to the issue of suicide there are other questions, in the biological and philosophical realms, that must also be addressed. Mainly: if depression can be treated, is suicide ever a viable option, since the feelings are only a temporary state? Does anyone ever really, truly want to die? (Those who have survived jumping off bridges have reported that after they jumped, but before they hit the water, they realized they didn't really want to die.) Therefore, is suicide always an attempted cry for help? Does a person have an ethical obligation not to kill him/herself because of the potential good s/he could do for society by continuing to live?

I don't really have answers to these questions, but they are more the way that I approach the issue rather than from a physics-type perspective.

(I didn't review this for coherency, so I hope it makes some kind of sense.)

[identity profile] newnumber6.livejournal.com 2008-03-01 02:28 pm (UTC)
It does make perfect sense, although what I was trying to do was specifically to build an argument that ignores all that, ignores ethics, to reduce it to the fundamental moment of choice and the actual practical consequences of it (once again, if you accept the premises, which I do, for the most part - okay, I'm iffy on multiple universes). To me, the more other stuff you introduce, the harder it is to generalize.

I believe the human mind is a very, very fickle thing, prone to various biological mechanisms that confound even what we think of as 'self'. Hell, your mind can actually convince itself that it does not exist, and answer questions convincingly about how it does not exist. That makes no sense whatsoever, but your mind can become convinced of it. Your mind can convince itself that it can't see something, even though it might react when that thing moves. Many studies have shown that people are _horrible_ at predicting their own reactions, and some people think that consciousness is all essentially a lie, that we make up explanations after the fact for actions we really don't understand at all. I'm of the belief that consciousness and choice are much more fragile and messed-up things than we generally take for granted.

So where do we make our distinctions? Take the "everyone who survives jumping off a bridge later claimed they realized after falling that they didn't want to die" point. Fair enough. But aren't there people who have made multiple determined attempts to kill themselves? It would stand to reason either that they were lying about still wanting to live, or that they changed their minds again. Or perhaps it's a natural biological reaction to impending death: the rush of fear and adrenaline forces your mind to flip a switch, making you decide "OMG I don't want to die" and scrabble to survive. But is _that_ reaction any more _real_ than, say, a chemical imbalance that causes you to, temporarily, think that you _do_ want to die? I don't know. What is real?

If we're all just a combination of our biology, then a chemical imbalance in the brain is just as much a real reflection of your feelings as the desire not to die. Sure, you could go into things about 'the proper functioning of the brain', but that's a human-imposed perspective on the issue, since fundamentally we're all atoms behaving as they will. A person who is looking for an excuse to kill themselves might likewise feel free to ignore that human-imposed view, and I'm not sure that, aside from my own personal desire for them not to die, I have a reason to dispute it.

That's all a much deeper philosophical question than I was prepared to get into, which is part of the reason I left it out of the argument. ;)