Just a little thought experiment.
Premise 1: Multiple Universes exist from every possibility. This is like you see in SF shows all the time - for every choice (either personal or right down to particle interactions), the universe branches out into two (or more) possibilities, creating a new timeline for every possible choice.
Premise 2: Identity (for people) is a combination of two one-way operators (was and will be) which are continuous from any moment X to any moment Y in the future for will be, or past for was, where the state of the object in the later moment is directly dependent on the earlier moment. That is, I was me at 5 if the state of me now is the result of a continuous set of changes from me at 5 until now, in my universe. I will be me at +5 years if the state of the character 5 years from now in any one universe is the result of a continuous set of changes over 5 years in that particular universe. This allows there to 'will be' multiple mes at +5 years, but they cannot claim identity with each other, because although they share a was connection, they don't share a will be.
Premise 3: There is no afterlife of a supernatural nature.
Under premise 2, it is theoretically possible that there is a non-supernatural afterlife (after death, some advanced entity is able to look back in time to the moment of death and register the mental states, and recreate it), but we will not consider it at first to simplify matters.
Premise 4: Life is awareness
That is, you have to have some level of present or future awareness of your condition to be considered alive. If you've been braindead and in a coma with no awareness for 30 years and then die, technically, for the purposes of the argument, you've been dead 30 years. If you're in a coma with no awareness for 30 years, wake up, and die 5 minutes later, you were alive the whole time. What counts as a level of awareness we can leave somewhat vague. It may well include reduced mental states, dreamlike comas, or extreme brain damage that leaves you functional, or it may exclude those past a point. It doesn't strictly matter which one you choose, so long as you're consistent about it. (Where necessary, we will use 'technical survival' or 'technical life' to talk about moments where the body is alive but there is no awareness.)
Argument: Take a suicidal person. Let's call him Gil, after the hard-luck character on The Simpsons. Like all people, Gil is splitting off constantly into parallel universes. However, Gil, even though he may believe in parallel universes, is extremely self-centered; really, he's only concerned with himself. Himself is his past, present, and possible futures. Once a possible future crosses the line into being an alternate present, it is of absolutely no value to him, because he will never experience that alternate world-line. He can take no joy in the fact that there are universes where he is happy, because he can't feel them.
When Gil decides to kill himself, he continues splitting off throughout the decision. Some of these Gils will eventually decide not to go through with it, but they're not important. At some point, some of these Gils will reach something of a Point of No Return. They've committed an act, whether it's jumping off a building or pulling the trigger of a gun or taking pills, which will most likely end in their death. From then on, every moment, every split will go in one of two ways: either it will continue towards death (very likely), or something extremely unlikely will happen that will save Gil (unlikely).
At this point I'd like to break off and discuss what I consider the "No Longshots" rule. While not technically a premise, it's an important rule of thumb IMHO. That is, if something is extremely unlikely to happen to Gil's benefit, it should be discounted from his deliberations. Yes, if he jumps off a building, it's remotely possible that not only will some inconceivably rare event save him, but it will also make him rich from the ability to sell his story. Not only is it remotely possible, it's almost certain to happen to at least one of him. But Gil should discount this possibility when he's considering crossing the Point of No Return, because the chance of this happening is small compared to the chance that his life will improve in some random, highly improbable way without his suicide attempt, and much smaller than his chance of surviving the suicide attempt in some non-pleasant way (say, complete paralysis). You can't bet on it; you have to assume it's not going to happen when you're making your decisions. The only time you may consider a longshot is when it is pretty well the only option left (where, more often than not, it screws you, as I'll argue).
To a similar degree, we should assume that we ourselves are not a spectacular longshot. For example, it could theoretically be true that there's a 99% random chance every hour that any given human being will lose a limb, and all of us have been on an incredible lucky streak, but we shouldn't live our life assuming that we'll lose a limb in the next hour, because our experience hasn't prepared us for the idea.
In any event, at some point he will reach moments where there are two alternatives: Gil dies, completely, or Gil continues living a little longer. The probabilities involved do not matter, because Gil cannot experience a world in which he dies. Others can, but Gil cannot, since Death is by definition the cessation of aware experiences. So, in one of these 'death-or-no-death' situations, from Gil's perspective in that moment, he will not die.
This is not a good thing for Gil, because he'll survive into the next moment, where another such decision point will exist... with the same results. Once he reaches a death-or-not point, Gil will continue to survive for as long as there is any probability of survival, no matter how tiny that probability is, no matter how damaged and in pain his body or brain is. He will be aware on some level, because we have defined awareness as part of life.
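The branch-counting in the last few paragraphs can be put into toy code. This is a minimal sketch under the essay's premises, not physics: the branch counts and per-step survival probability below are invented numbers, and `surviving_branches` is purely illustrative. At each step most branches die, but so long as any branch at all survives, Premise 4 says the only branches Gil can experience are the surviving ones.

```python
import random

def surviving_branches(n_branches=1_000_000, p_survive=0.1, steps=4):
    """Toy branching model: each branch independently survives each
    step with probability p_survive. Returns survivor counts per step."""
    alive = n_branches
    history = []
    for _ in range(steps):
        # each living branch survives this step with probability p_survive
        alive = sum(1 for _ in range(alive) if random.random() < p_survive)
        history.append(alive)
    return history

random.seed(0)
counts = surviving_branches()
print(counts)
# The counts shrink by roughly a factor of ten per step, yet with a
# large enough starting pool some branch typically remains aware --
# and, by Premise 4, that remnant is where Gil's experience continues.
```

From the third-person view survival is vanishingly unlikely; from Gil's first-person view, under these premises, it is the only thing he ever observes.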
So, are there moments where there is absolutely zero probability of technical survival? Possibly. However, let's say that Gil is at this last moment before death. If he is having conscious experiences at this moment, yay, good for him, his mission is accomplished, he gets to die. There is no splitting into an alternate universe where he survives (he has alternate brothers who survive, but since they split off earlier, he doesn't care). He's probably in pain, but it's over.
But if he's not having conscious experiences at that moment, he will not experience it; he's already dead by our definition. The last Gil who was having conscious experiences is the only Gil who has any stake in what happens next. So now we have to ask the question: are there any moments where Gil is both conscious on some level, and there is absolutely zero probability of survival in any of the moments that could split off from that moment? This is a harder question. Possibly. But when you accept multiple universe theory, you have to accept that incredibly unlikely things can happen in some universe. Every molecule in a brick wall could spontaneously quantum tunnel through Gil's head and wind up harmlessly on the other side. Entropy could reverse itself. (No Longshots does not apply, because now _all_ aware Gils will be going into longshots... it's the only place left.) This does not benefit Gil very much, because even if entropy reverses itself, it will only do so until it is more likely for Gil to survive (even in unbearable pain and with limited ability to do anything) than it is for the inconceivable to continue happening (where some insignificantly tiny portion of Gils would still get to experience this, and under No Longshots should now be discounted from our deliberations). The longer the gap between awareness and technical death, the more likely there are outs.
So, either Gil is aware at the moment of no possibility of life, or, chances are, he will survive to another aware moment, where he faces the same problem. The key question now becomes: is it possible to have awareness at a moment from which there is absolutely zero possibility of survival in any moment descended from it, and how likely are such moments to occur? I don't have firm reasoning behind this, but my feeling is that they're very rare.
In any event, Gil, upon reaching the Point of No Return, has doomed most of his future selves to the most prolonged death possible by the method he chose, because each of them will continue to survive as long as is possible in their own timelines. For any future Gil, a quick death is a longshot to be discounted. The ones that survive are likely to be seriously injured in some way or another, and even the ones who miraculously escape that are generally not going to be any better off than Gil was before the Point of No Return (except for a purely psychological reaction to having survived).
Any death which involves a pre-death lack of awareness is even more of a losing game, because there are a greater number of ways to escape death. Gil will have trouble killing himself, but is very likely able to give himself a great deal of prolonged torture.
We should pause here to note that, accepting the premises, it would seem that Gil's fate awaits all of us, applying equally well to accidental death as to suicide. However, while oblivion might be preferable to a long, depressing life, a long, and probably painful, death (with a strong likelihood of permanent physical incapacitation) is something that is probably best postponed for as long as possible.
Now, let's consider the hi-tech heaven approach. It's remotely possible that somewhere in the future, after, say, The Coming Singularity, technology will allow people who have the inclination to recreate the minds and bodies of those who are already dead, by looking into the past and seeing how they were at some point. This is also a possibility for Gil after the Point of No Return, at least in some cases. We need only look at cases where they actually look into the past, or determine his past state by examining the molecules of his body, or something similar. It may be possible for a sufficiently advanced computer to simulate every mental state possible for every human being, but because that mental state is not dependent on Gil's past (working equally well if Gil never existed), Gil prior to the moment chosen cannot claim this simulation will be him, and so, like his alternate brothers, he can't experience it and it does him no good. But if there is a recreation dependent on his mental state, it is another chance to survive suicide (or natural death) and perhaps even thrive from the experience, if he is being created to live in a heavenlike environment. Or is it?
The first thing to consider is how likely such a situation is to exist. It may be that it is not actually possible at all, and is just a science fiction fantasy. If that's the case, we can stop the discussion right here. However, if it is at all possible, even if startlingly improbable, then there's a chance that all living awareness flows toward it. It could be that this ability is not only possible, but extremely likely. In that case, chances are nobody will ever truly die without reaching it (since the more likely it is this becomes possible, the more likely it is that any individual at any time will be resurrected). There would be no moments where death is the only alternative, because there's also the alternative of far-future resurrection.
But if it's extremely unlikely yet still possible, it will happen in some large number of universes. With a much smaller pool of universes interested in this, though, we have another thing to consider: _when_ do they choose to (or are they able to) make a copy of you? If it's before the Point of No Return (maybe they don't want people who've decided to commit suicide), we don't care, because we're discussing people who make the choice. If it's after the Point of No Return but before the death-or-no-death moments begin (say, on the fall down from jumping off a building), it's still far more likely that these types of universes don't exist than that they do and will rescue you - at absolute best you're only at a 50% chance for resurrection anyway (since at the moment they copy you, you split into two different people within the same universe: one the future you, the other the one that continues to die). No Longshots applies. If they choose to (and are able to) take people at the last moment of awareness, you're good, to a point, though at the last moment of awareness it's possible (depending on your definition of awareness) that your mind is so damaged that awareness is all that's left of you - no memory, a radically altered personality, etc. Presumably they could recreate your body, and likewise recreate your mind, but if they're going to go to all the trouble of recreating your mind, it might just be easier to get a screen-grab of your mind from an earlier, pre-damaged point. And at any pre-damaged point, in this scenario, the probability of you being split into resurrection-you is far less than the probability of you continuing to die. The No Longshots rule screws you again. Even the possibility of a post-human civilization resurrecting you only helps if it's relatively likely to occur, and if they're so concerned about your own sensation of continuity as to resurrect you from the very moment of death.
Considering that the resurrected Gil they get probably doesn't care which moment he was split off from, it doesn't seem very practical to grab him from the last possible moment.
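The probability comparison in the last two paragraphs can be made concrete with toy numbers. Both figures below are invented purely for illustration (the essay gives no real estimates): even granting resurrectors exist somewhere, from pre-scan Gil's perspective the branch that wakes up as the copy is dwarfed by the branch that simply continues dying.

```python
# Invented toy numbers, for illustration only.
p_resurrectors = 1e-9   # assumed: resurrector universes are "extremely unlikely"
p_copy_branch = 0.5     # at the scan moment you split: copy-you vs. dying-you

# Chance, from pre-scan Gil's perspective, that his next aware moment
# is the resurrected copy rather than a continuation of dying:
p_wake_as_copy = p_resurrectors * p_copy_branch
p_keep_dying = 1 - p_wake_as_copy

print(p_wake_as_copy, p_keep_dying)
# The resurrection branch (5e-10) is negligible next to the
# continue-dying branch, which is exactly the sense in which the
# No Longshots rule discounts it.
```

Whatever the real numbers, the structure is the same: resurrection is a product of two probabilities, at least one of which is tiny, while continuing to die needs no luck at all.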
All in all, Gil's better off living than attempting suicide, unless we can be certain post-Humans of a specific type are likely to exist.
As an aside, given these premises it would be remotely possible to construct a Schrödinger's Cat-style experiment to ensure you wind up in a future of your choice. Take a lottery win, for example. Buy a ticket, and then somehow arrange a room such that it kills you instantly if you don't win. This is best done while also employing a sedative that completely knocks out your awareness. As long as your desired outcome is much more probable than the system failing somehow, you should expect to wind up in a universe where your lottery numbers were pulled. In any other universe, you won't have any conscious perceptions at all, and so can't properly perceive it. Of course, all of this is not considering the effect your very probable death would have on others, which is not insignificant, but remember Gil is self-centered, so it would be a good deal for him.
Maybe not such a good deal for the rest of us.
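The lottery aside is just conditioning experience on survival, which a few lines of simulation can illustrate. This is a hedged sketch: the odds of winning and of the hypothetical death-device failing are made up, and `quantum_lottery` is my own toy construction, not anything from the post.

```python
import random

def quantum_lottery(n_branches=100_000, p_win=1e-3, p_device_fails=1e-5):
    """Toy model: each branch wins with p_win; losing branches are ended
    by the device unless it fails with p_device_fails. Only winners and
    device-failure survivors retain any awareness."""
    winners = failures = 0
    for _ in range(n_branches):
        if random.random() < p_win:
            winners += 1          # aware, and rich
        elif random.random() < p_device_fails:
            failures += 1         # aware, broke, and nearly dead
    return winners, failures

random.seed(1)
wins, fails = quantum_lottery()
print(wins, fails)
```

Among the aware survivors, winners dominate only because the (tiny) winning probability is still much larger than the device-failure probability; flip that inequality and most of your surviving awareness lands in the failure branch, which is precisely the No Longshots worry the essay keeps returning to.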
no subject
Date: 2008-03-01 07:46 am (UTC)
First of all, do you think that suicide is an inherently selfish action? Some people say it's the most selfish and hateful thing a person can do aside from killing someone else. Others say it's not selfish at all, because the person has a right to relieve their own suffering and those that actually love the person would understand this and be happy that their loved one is no longer in pain.
The thing I don't understand (most) about your concept is, why does Gil have any care for the Gils that split off from him? Science cannot prove that such alternate universes exist as of yet, and even if it could, the Gil in our universe/plane of perception would only be able to sense what happens to him and not the alternate versions of him. Why should he care for these alternatives any more than he cares for the loved ones he might be leaving? As soon as they split off from him, wouldn't you think of them as separate people, since they have separate destinies? Is it Gil's duty to ensure the safety of all his alternatives? I mean, theoretically any choice Gil makes that creates an alternate could lead that alternate to an immediate or eventual accidental death.
If it was possible to make these "copies" wouldn't they want to make them before the person got sick so the likelihood of them committing suicide would be even less? (You do believe clinical depression is caused by a chemical imbalance in the brain, correct? No one who is just sad wants to commit suicide.) And if a civilization had the abilities that you describe, wouldn't they be advanced enough to just fix the chemical imbalance when Gil becomes sick so that he never reaches the point of wanting to commit suicide?
Have you ever experienced clinical depression? To the extent you planned your own suicide? (Sorry if this is too personal a question, but I've found that when I read an essay/story/opinion that relates to deep depression I can usually tell who has actually experienced these feelings and who is just postulating what they think it is like from what they have seen/heard. I was not able to do this before I became mentally ill. I thought I knew what the experience was like since I had dealt with what I thought to be deep sadness and had even pondered suicide. Clinical depression is so different - I can't even begin to describe it.) Sorry, serious tangent, but my guess from reading this is that you have not experienced clinical depression. This certainly colors your opinions on the issue (and to be honest, I'm also curious to see if I'm right).
However, while oblivion might be preferable to a long depressing life, a long, and probably painful, death (with a strong likelihood of permanent physical incapacitation) is something that is probably best postponed for as long as possible.
This is what made me question whether you had experienced depression. I'm not sure this is true. I think even a long, painful death would be preferable to most depressives when they are suicidal. My mother has brought up the possibility of me physically harming myself beyond repair during my suicide attempts, saying things about damaging major organs and being crippled for the rest of my life. My response to her was that if that happened, I would just try to kill myself again until I was either successful or so much of a vegetable that my brain only carried on rudimentary functioning and feelings weren't much of a problem anymore.
Of course, no one who is suicidal wants to experience a long, painful death, but in one of my online support-type groups a girl described how she was in so much emotional and mental pain that she curled up beneath a bush and when she was finally able to go to sleep, she expected never to wake up because she didn't see how someone could be in that much pain and still survive. To many, depression is already a long, painful death full of all sorts of incapacitation, even physical.
damn too long ... continued...
no subject
Date: 2008-03-01 07:47 am (UTC)
I don't really have answers to these questions, but they reflect the way I approach the issue, rather than a physics-type perspective.
(I didn't review this for coherency, so I hope it makes some kind of sense.)
no subject
Date: 2008-03-01 02:28 pm (UTC)
I believe the human mind is a very, very fickle thing, prone to various biological mechanisms that confound even what we think of as 'self'. Hell, your mind can actually convince itself that it does not exist, and answer questions convincingly about how it does not exist. That makes no sense whatsoever, but your mind can become convinced of it. Your mind can convince itself that it can't see something, even though it might react when that thing moves. Many studies have shown that people are _horrible_ at predicting their own reactions, and some people think that consciousness is essentially a lie - that we make up explanations after the fact for actions we really don't understand at all. I'm of the belief that consciousness and choice are much more fragile and messed up things than we generally take for granted.

So where do we make our distinctions? Take, for example, the claim that "everyone who survives jumping off a bridge later claimed they realized after falling that they didn't want to die". Fair enough. But aren't there people who have made multiple determined attempts to kill themselves? It would stand to reason that either they were lying about still wanting to live, or they changed their minds again. Or perhaps it's a natural biological reaction to impending death: the rush of fear and adrenaline forces your mind to flip a switch, making you decide "OMG I don't want to die" and scrabble to survive. But is _that_ reaction any more _real_ than, say, a chemical imbalance that causes you to, temporarily, think that you _do_ want to die? I don't know. What is real? If we're all just a combination of our biology, then a chemical imbalance in the brain is just as much a real reflection of your feelings as the desire not to die. Sure, you could appeal to 'the proper functioning of the brain', but that's a human-imposed perspective on the issue, since fundamentally we're all atoms behaving as they will.
A person who is looking for an excuse to kill themselves might likewise feel free to ignore that human-imposed view, and I'm not sure that, aside from my own personal desire for them not to die, I have a reason to dispute it.
That's all a much deeper philosophical question than I was prepared to get into, which is part of the reason I left it out of the argument. ;)
no subject
Date: 2008-03-01 01:54 pm (UTC)
Well, to me, one of the best things about philosophy is that you can use things from sci-fi to explore various concepts, and you can do the reverse as well. To me, the only 'sci-fi' aspect is the multiple-universes argument, which is AFAIK a valid (if presently a little fringe, due to the lack of testable predictions) scientific theory, and so fair game, plus the aspect of the Singularity (which was only added to plug a potential hole my SF soul saw in the argument).
First of all, do you think that suicide is an inherently selfish action? Some people say it's the most selfish and hateful thing a person can do aside from killing someone else. Others say it's not selfish at all, because the person has a right to relieve their own suffering and those that actually love the person would understand this and be happy that their loved one is no longer in pain.
No, I don't think it's inherently selfish... or, if it is, it is in a way that is okay. It is a person's own life, and it is fundamentally their own choice. In certain situations it may be more, or less, selfish, just as allowing it may be more, or less, selfish, but as a baseline, without any context, that doesn't enter into it. However, I do believe that most of the good reasons _against_ it (except in certain transient conditions) are self_less_ ones. To be frank, a not-insignificant portion of the time, that's the only reason I'm still alive: I decided that I don't want to cause certain close family members pain. However, if they were gone (in a permanent sense), I would have no such compulsion to stick around. That's part of the reason I was toying with this; I wanted to see if I could come up with a logical argument against it that is 100% selfish, not relying on other people.
(splitting off to multiple replies due to LJ's stupid comment char limit)
no subject
Date: 2008-03-01 01:55 pm (UTC)
First, on 'science hasn't proved it': that's okay, because it's not what's being discussed. The heart of a philosophical argument (if I'm recalling correctly) is "if you accept the premises, the argument should follow". As such, this argument can _only_ apply if you accept the premises - that multiple universes exist and operate in the manner described. If you don't, that doesn't mean the argument is faulty; it just means it doesn't apply. For the argument to apply to Gil, he has to believe in it.

As to the other parts, I think I must have been unclear on one of my basic points. Gil _doesn't_ care about those alternates who have already spun off from him. But he _has_ to care about those alternates who _will_ spin off from him at some point in the future - not all of them, certainly, only those above a certain likelihood. The thrust of the argument was that (accepting the premises) once you pass the point of no return, almost _every_ possibility for you is a prolonged, possibly nearly infinite amount of suffering. This is because you, as an atheist, _cannot experience death_. You can only experience the lack of death. Once you reach death, experience stops, and that might be okay if there's only one of you, but if you're splitting off constantly, then from your own perspective there will be no end. The road to death may exist, but you won't be on it; it's blocked to you by definition. (This is probably the hardest part of the argument to express, and the most open to different opinions. It makes perfect sense to me, but I can concede that others might disagree.) And most of the ways you get a 'lack of death' after a point of no return involve extreme suffering, so Gil should be concerned enough to avoid it.
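To make that asymmetry concrete, here's a toy simulation of the branching picture. Everything in it (the function names, the 50% per-step death rate, the "one unit of pain per step") is made up purely for illustration; it's a sketch of the premises, not a claim about how real branching would work.

```python
# Toy model of the branching argument: at each step, every surviving
# branch splits in two - in one branch the body dies, in the other it
# survives in a worsened state.
#
# Third-person view: a single, non-branching timeline is very unlikely
# to still be alive after many steps.
# First-person view: experience can only continue along a surviving
# branch, so (under the premises) there is always a thread of experience,
# and its accumulated suffering just keeps growing.

def third_person_survival(steps, p_die=0.5):
    """Probability an outside observer assigns to one timeline surviving."""
    return (1 - p_die) ** steps

def first_person_suffering(steps, pain_per_step=1):
    """Suffering accumulated along the branch that keeps surviving:
    from the inside, death is never experienced, only more steps."""
    return sum(pain_per_step for _ in range(steps))

# After 30 steps, the outside view says survival is essentially nil,
# while the inside view has simply racked up 30 more units of suffering.
print(third_person_survival(30))   # ~9.3e-10
print(first_person_suffering(30))  # 30
```

The point of the sketch is just the mismatch between the two functions: the third-person survival probability goes to zero, but no step of the first-person sequence ever contains the experience of dying.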
The other civilization is a hypothetical far-future one, of a type that's more of an SF thing. Those are the only people who might hypothetically be making any copies of Gil (save the universe itself, which makes duplicates of everything, if you accept the premises). And if they do, you're right, they're likely to make the copy from before Gil got sick, which means it does no good to the one who's left making the decision; or else they would simply correct the illness, in which case the rest of the argument about that being unlikely still applies.
(You do believe clinical depression is caused by a chemical imbalance in the brain, correct? No one who is just sad wants to commit suicide.)
I'd disagree slightly with that. Yes, clinical depression is often caused by a chemical imbalance in the brain, but there are plenty of reasons to want to commit suicide besides a chemical imbalance. So to me it's a factor, and not an insignificant one, but I do have to believe there is some level of choice all along the way (and even if there isn't, I still have to believe it, because I have no choice ;)).
Since the reply is getting long again I'll continue in another.
no subject
Date: 2008-03-01 02:10 pm (UTC)
I have experienced depression and planned suicide, and to an extent still do (though it's much longer-term). I wouldn't call it clinical depression, because I've never been diagnosed and am extremely unlikely to go in and be diagnosed, but judging by what I've seen and read on the issue (going from memory; I'm not looking it up right now), I don't think what I have has very often reached the depths of clinical depression - it's more of a shallow, longer-term thing. I vaguely recall 'dysthymia' as something I once self-diagnosed myself with. But hey, what do I know. In any event, I won't claim to have been where you've been. Still, as I said, I don't believe intense mental illness is the _only_ way suicide comes about. It's possibly one of the more tragic ways, granted, and the argument may well apply much less there.
This is what made me question whether you had experienced depression. I'm not sure this is true. I think even a long, painful death would be preferable to most depressives when they are suicidal. My mother has brought up the possibility of me physically harming myself beyond repair during my suicide attempts, saying things about damaging major organs and being crippled for the rest of my life. My response to her was that if that happened, I would just try to kill myself again until I was either successful or so much of a vegetable that my brain only carried on rudimentary functioning and feelings weren't much of a problem anymore.
Of course, no one who is suicidal wants to experience a long, painful death, but in one of my online support-type groups a girl described how she was in so much emotional and mental pain that she curled up beneath a bush and when she was finally able to go to sleep, she expected never to wake up because she didn't see how someone could be in that much pain and still survive. To many, depression is already a long, painful death full of all sorts of incapacitation, even physical.
Again, I think I must have been unclear: under my reasoning (accepting the premises), from your own point of view death (even death of personality to the point where you're a vegetable) is never actually an option. I left a little wiggle room in the argument because I wasn't entirely sure I could prove it, but in my own heart, that's the way I feel it. So, again keeping in mind that the argument may not apply as well to mental illness, from a straight logic perspective it's still a winning proposition to avoid suicide: the choice is between an extreme duration of pain from which there is some possibility of getting better, even if you don't feel it and have to appreciate it intellectually (choosing to stay alive), and an extreme duration of pain from which there is no real possibility of getting better (attempting suicide).