
Albert-Jan Brouwer on Personal Identity

From Higher Intellect Wiki

The essay below was written by Albert-Jan Brouwer in response to a question
posted to sci.cryonics in January 1995. (Detailed News header information
appears below.)
  ------------------------------------------------------------------------
[email protected] wrote:
> OK, I'm new to this cryonics (...) and am wrestling
> with the following questions:

To answer your questions, and perhaps add a little to the "uploading" debate
I will first try to sketch an alternative way of looking at these matters.
It is not essentially different from what has been argued before, but it is
a nice visual model that makes certain aspects more explicit.

One scientifically plausible assumption that needs to be made is that one's
mind is fully specified by the material brain: its structure, chemical
balances, and neural impulses.

If so, the biological brain can in principle be emulated, and information
can be extracted from it and put into the emulation so as to create a
duplicate mind which is operationally equivalent. (This assumes a finite
amount of information will do, and that the purported relevance of quantum
correlations or other Penrose-type mechanisms is the book-selling bullshit
it seems to be. In short, the mind should be computable.)

What makes me ME are the particular connections the synapses make, the
neuronal thresholds, the neurotransmitter concentrations, and so on,
irrespective of whether these are present in a biological system or in a
computer emulation. For brevity, let's call this collection of defining
properties the "state-of-mind".

Now it is convenient to think of this state-of-mind as being specified by a
point in a space, the space containing all possible states-of-mind. Choose
the space in such a way that the distance between two points increases as
the difference between the corresponding states-of-mind increases.
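This "mind space" idea can be sketched in code. Below is a toy illustration
(not anything from the original post): states-of-mind are reduced to small
numeric vectors, and the difference between minds is measured as Euclidean
distance. The vectors and their dimensions are invented purely for
illustration.

```python
import math

# A state-of-mind reduced to a vector of defining properties
# (synaptic weights, thresholds, concentrations, ...) - hugely simplified.
state_a = [0.2, 0.9, 0.4]   # "you" at some moment
state_b = [0.2, 0.9, 0.4]   # a perfect copy: the same point in mind space
state_c = [0.7, 0.1, 0.5]   # a very different mind

def mind_distance(p, q):
    """Euclidean distance: grows as the states-of-mind differ more."""
    return math.dist(p, q)

# Identical states-of-mind occupy the same point...
assert mind_distance(state_a, state_b) == 0.0
# ...while different minds lie farther apart.
assert mind_distance(state_a, state_c) > mind_distance(state_a, state_b)
```

Any metric with the stated property (distance grows with difference) would
serve the argument; Euclidean distance is just the simplest choice.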

Thoughts, experiences and so on will then cause the state-of-mind to change
so that it moves along a path in this space. Sudden brain damage will cause
the path to be discontinuous; the state-of-mind will jump as memories,
abilities and context are suddenly destroyed.

Obviously, on a much smaller scale, discontinuities are present since for
example the firing of neurons is discrete. But when viewed from sufficient
"distance", the movement of the state-of-mind associated with normal
thoughts and experiences will appear continuous.
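The viewing-distance point can be made concrete with a small sketch (the
paths and tolerance are invented for illustration): a sampled path counts as
"continuous" when no single step exceeds a chosen tolerance, so many tiny
discrete steps pass, while a brain-damage jump does not.

```python
import math

def is_continuous(path, tolerance):
    """A sampled path looks continuous when no step exceeds the tolerance."""
    return all(math.dist(p, q) <= tolerance for p, q in zip(path, path[1:]))

# Normal thought: many tiny discrete steps through mind space.
thought = [[0.0], [0.01], [0.02], [0.03]]
# Sudden brain damage: one large jump.
damage = [[0.0], [0.01], [0.9]]

assert is_continuous(thought, tolerance=0.05)
assert not is_continuous(damage, tolerance=0.05)
```

The tolerance plays the role of the "distance" from which the path is
viewed: coarse enough, and the discreteness of neural firing disappears.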

> Now let's say I'm asleep on my death bed and have instructed/paid
> the keeper of this technology to make N copies of myself while
> asleep.

Then there would be N+1 identical states-of-mind, all occupying the same
point in "mind space". After the copies have been made, and your biological
you lives on, its state-of-mind will move away from the position of the
copies. After a while the gap will be greater than any discontinuity one
normally incurs during life (e.g. due to narcosis), making the copies useless
they no longer represent anything continuous with your biological self.

> If I wake up (live a little longer) and find N copies of myself
> around, it is clear none of them are me any more than they would
> be if I'd died.

If the copies are run, their paths will start to diverge, causing the copies
to take on separate identities. (For computerized copies this need not be the
case, provided they are fed identical experiences and run on deterministic
hardware.) Note though that it is doubtful that running
multiple copies will ever be acceptable legally.
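The determinism caveat can be illustrated with a toy simulation (the
function and its parameters are invented for this sketch, not taken from the
post): a seeded pseudo-random generator stands in for deterministic
hardware, and a list of numbers stands in for a stream of experiences.

```python
import random

def run_copy(seed, experiences):
    """Deterministically evolve a copied state-of-mind from its inputs."""
    rng = random.Random(seed)  # same seed = same "hardware" behaviour
    state = 0.0
    for e in experiences:
        state += e + rng.uniform(-0.01, 0.01)  # experience plus internal noise
    return state

shared = [0.1, 0.2, 0.3]
# Identical hardware and identical experiences: the copies never diverge.
assert run_copy(42, shared) == run_copy(42, shared)
# Different experiences: the copies' paths diverge into separate identities.
assert run_copy(42, shared) != run_copy(42, [0.1, 0.2, 0.9])
```

Real brains are noisy and embodied, so the no-divergence case is a special
property of deterministic emulations, as the text notes.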

> If my biological body dies and N copies "wake up", where exactly
> am "I"?
> The only "I" of interest is dead. That copies exist with my
> feelings or memories offers me no solace.

If there is no difference between the copies and the biological you at the
moment it died, you essentially continue to live on as the copies. After
all, there is no discontinuity in the path covered by your state-of-mind
even though there might be a passage of time between your death and the
activation of the copies.

If however your biological body has substantial thoughts and experiences
after the copies are made, clearly its state-of-mind will be lost. The
"distance" covered by the state-of-mind since the copying can be arbitrarily
small thus giving rise to "degrees of death".

What would be the maximal acceptable degree? I personally feel that any
conscious period longer than a couple of seconds is sufficient to make the
copy outdated. After all, during that time I could think something like
"Hey, I am thinking. These thoughts I am thinking will be lost! Why should I
feel empathy with a copy of what used to be me? Stop this!". :-)

In view of this, there is not much point in making a copy and hanging on to the
brain. Since the technology required to make a copy while destroying the
(frozen) brain seems much nearer than non-destructive copying, this is
actually a good thing.

> The only kind of immortality or survival that makes sense to me
> is the single-threaded, and hence, biological kind.

In terms of the model described above, not-dying involves making sure your
state-of-mind keeps on following a continuous path. Single-threaded is
indeed an apt description. How this state-of-mind is implemented in the
physical world is irrelevant. Similarly, it is irrelevant if the
state-of-mind is frozen in "real time" temporarily, since the path it
follows will remain continuous.

Some sobering remarks to conclude:

Currently, sufficiently detailed knowledge of the subcellular workings of
the brain does not yet exist, sufficient computing power will not be
available for several decades and the technology required to cheaply map 100
billion neurons at nanoscopic resolution is even further away. So for the
time being the only option is to freeze the brain and hope this does not
irrecoverably destroy the state-of-mind.

---
Albert-Jan Brouwer EMail : [email protected]
  ------------------------------------------------------------------------
News Header Information

Path:network.ucsd.edu!ihnp4.ucsd.edu!dog.ee.lbl.gov!agate!howland.reston.ans.net!EU.net!sun4nl!news.nic.surfnet.nl!highway.LeidenUniv.nl!rulhm1!ajbrouw

From: [email protected] (Albert-Jan Brouwer)
Newsgroups: sci.cryonics
Subject: Re: UPLOADING QUESTION
Date: 22 Jan 1995 00:43:15 GMT
Organization: Leiden University, The Netherlands
Lines: 120
Message-ID: <[email protected]>
References: <[email protected]>
NNTP-Posting-Host: rulhm1.leidenuniv.nl
  ------------------------------------------------------------------------


  ------------------------------------------------------------------------
brouwer_essay.html . . . . . . . . 1/25/95 . . . . . . . Joe Strout