A major milestone has been reached in our Brain Preservation Prize

blog.brainpreservation.org

64 points by porejide 11 years ago · 58 comments

maaku 11 years ago

Note that this approach should be compared with cryonics, which attempts to vitrify the brain and preserve it in a glass-like state at liquid nitrogen temperatures:

http://www.alcor.org/

Personally I have some reservations about the plastination approach to personal longevity. Depending on your philosophical views on the nature of consciousness, it may be that under this procedure you would die and cease to exist, while some future emulation of your brain thinks it is you -- i.e. biological-you and future-emulated-you are two separate people that just share memories.

It is however interesting science and may be a short-cut path to getting necessary scanning resolution for whole brain emulation.

  • danieltillett 11 years ago

    Isn't this the same as saying you are a different person every morning and that the person that went to sleep and the person that woke up are separate people?

    • sanoli 11 years ago

      No, it's very probably not the same. The brain doesn't shut off when you go to sleep. It keeps going, it keeps doing its stuff: messing with memories, thoughts, all the bodily functions, and a whole lot of extra stuff that we don't know too well, but which probably still makes you you. Killing that brain and then simulating it somewhere else is not the same.

      • fallous 11 years ago

        More importantly, does anyone suggest that running the original brain and the copy doesn't result in two separate diverged intellects? If running two results in separate consciousnesses then obviously they are not the same.

        • x1798DE 11 years ago

          I don't think that follows. If "your" consciousness is just the consciousness that has a continuity of memory with a previous version of you, then the two copies would both be you, but they would not be one another.

          • maaku 11 years ago

            Let's say a suitably advanced fMRI is developed that is able to map out the connectome non-destructively. I use this device on you to create a copy of your mental state, and then let you go about your day. At some point later I turn on a whole-brain simulation from this data. What do you, the you-that-walked-into-the-scanner, expect to experience?

            • erroneousfunk 11 years ago

              You'd experience nothing unusual.

              • maaku 11 years ago

                I agree. But the further implication is that for the same reasons, if your brain is plasticised and later scanned and turned into an uploaded whole-brain emulation, you'd still be dead. Uploading is not a pathway to personal longevity, or whatever you want to call continuation-of-me-not-just-my-memories.

                • JoeAltmaier 11 years ago

                  That's the crux of it all, isn't it? Is your self any more than your personality and memories? If so, then you'll have to resort to a soul or some such. If not, then the upload is really 'you', or a copy anyway.

                  And who's to say a soul wouldn't attach to a copy anyway? Souls are not that well understood. Perhaps it would be fooled by the copy, or have an affinity for it, or some such. As long as we're speculating.

                  • maaku 11 years ago

                    No, there are plenty of perfectly reasonable physical theories for the nature of consciousness that don't equate identity with memories and don't involve souls. There's no reason to resort to dualism.

                    For example, there is the identity-is-the-instance-of-computation theory which says that it is not the information being computed (memories) that is relevant, but the computation itself.

                    • JoeAltmaier 11 years ago

                      Agreed, the hardware/wetware is just as important as the bits being uploaded. Especially for chemical brains that store much of personality as neural wiring.

                      But let's say that's uploaded as well, as part of the 'program' details. Then where are we? An 'instance' of this is not actionably different from any other, if it behaves exactly the same. It's arguable that they are the 'same person' in some sense.

                      • maaku 11 years ago

                        It matters to the person who is now dead and not living on in the machine.

                        • JoeAltmaier 11 years ago

                          But they are living on! Kind of. Like you are, in that body of yours, once all the cells are replaced by new cells every decade or so. It's ok; you still sound like the same person.

                          • maaku 11 years ago

                            Another strawman. No one is claiming that identity is tied to the molecules that make up the body, even in aggregate. There's a sense in which a car remains the same car even after ongoing comprehensive maintenance has replaced every single part, but that car remains distinct from the next car off the production line. Does that example make sense?

                            • JoeAltmaier 11 years ago

                              Come on! If it's a new car, it's a different car. It doesn't matter how convoluted the path to get there (replace every part or build new). Not a strawman; an example pointed right at the argument that 'a copy isn't the same thing'. Be fair.

                              • maaku 11 years ago

                                You be fair too. The instance-of-computation model of personal identity allows for cells of your brain to come and go, but as long as the whole thing is operating continuously, you remain. It is exactly analogous to my car example.

                                • JoeAltmaier 11 years ago

                                  It's also exactly analogous to my example: build an entirely new one and program it exactly as the previous one was programmed. It has exactly the same result. If I did it without anyone looking, no one would ever be able to tell the difference.

                                  • fallous 11 years ago

                                    Except the original that you replaced with your copy. You're opting for an external functional description of identity but the discussion is about the individual.

                                    It should be understood as conceded that a perfect copy of me would pass any Turing-style test applied by an external auditor, convincing them that the copy is me, but that doesn't mean that I am the copy.

                                    • JoeAltmaier 11 years ago

                                      It's different in a sense, sure. But consider: if I replaced it so perfectly that it was atom-by-atom identical, then God himself would not be able to say whether it was you or not. Unless we admit to some external agency that defines 'you' and is not present in the mechanism, e.g. a soul.

        • maaku 11 years ago

          That's obvious to me, but I've learned that it's not obvious to everybody. It's an instance of the mind projection fallacy that you or I think it's a simple, obvious truth while others find the opposite just as intuitive. I sometimes wonder if different people have different experiences of consciousness and self-identity...

      • aperrien 11 years ago

        What about what happens during hypothermia? During both cardiopulmonary bypass and cold water immersion, the brain is cooled enough that activity stops.

    • M8 11 years ago

      I personally subscribe to that opinion, even if the change is negligible. This is why I would like to be sober and conscious in case of any uploading.

  • eli_gottlieb 11 years ago

    To recap for everyone joining the conversation afresh: our intuitions about selfhood, continual consciousness, and personal identity probably don't make any sense at all if applied to this kind of question. The right thing to do isn't trying to repair the intuitions, but instead trying to figure out which of the unintuitive possibilities we happen to like.

  • Shorel 11 years ago

    What about the Ship of Theseus paradox applied to brains?

    If you don't 'feel' your death but all parts in your brain are eventually replaced by 'equivalent' ones, are you still yourself?

  • hvs 11 years ago

    Buddhism has some interesting views on the nature of consciousness and "selfness". I think this is an interesting opportunity to think about what being "you" really means.

iLoch 11 years ago

One step closer to me being able to spin up multiple instances of my brain to work on my side project and play video games with.

  • ttty 11 years ago

    Please add a load balancer too. I hope we can have an interface like Amazon EC2, but for brains.

    • vidarh 11 years ago

      That would have been a somewhat less ridiculous explanation for The Matrix. "Oh, those vats full of humans? They're our neural network cloud".

      • jessaustin 11 years ago

        Yes, and considering that The Fall of Hyperion was published in 1990, that scenario should have been obvious to the Wachowskis.

        • eli_gottlieb 11 years ago

          It was actually what they intended, before the execs told them audiences wouldn't understand it.

          • vidarh 11 years ago

            Not surprising, considering I had relatively techie friends that were absolutely mindblown by concepts like pervasive virtual reality (never mind taking the next step beyond the Matrix to "uploading"). I was utterly taken aback at the realisation of just how foreign ideas like that were to people that weren't steeped in SF.

          • jessaustin 11 years ago

            "Humans are used as batteries."

            "Humans are used as computer chips."

            Neither statement really makes any sense without some creative thinking. Execs are weird.

    • toomuchtodo 11 years ago

      That presumes that everyone's brain is worth simulating.

      In the beginning, only the most "valuable" neural mappings will be worth the computing time required to simulate them. And rights! Does a simulated neural network of a human being have the same rights as a human being? One couldn't argue that rights are lost simply because the underlying processor is silicon-based instead of carbon-based.

      Many philosophical questions to answer.

  • sanoli 11 years ago

    Or spin up an instance with the 'laziness' region deactivated.

Udo 11 years ago

We just had a plastination vs. cryonics debate (https://news.ycombinator.com/item?id=9595853) which might be of interest here.

danieltillett 11 years ago

I am glad progress is being made here, but until we can avoid the destruction that occurs in the last 24 hours of expected death (e.g. cancer), or the damage that occurs with unexpected death (e.g. heart attack or trauma) from the body sitting at room temperature for hours afterwards, all we are going to be preserving is grey mush.

  • Udo 11 years ago

    There has to be enough information left over to reconstruct a person in "high enough fidelity" (whatever we may decide that means in the future).

    A brain just sitting there for hours after death at room temperature isn't ideal - however, there is some good news in the area. It turns out that most of the destruction that happens to a brain after an ischemic episode is actually due to a cascade triggered by eventual re-perfusion. Since the dead brain is never re-perfused, this cascade is never triggered. Cellular decay after death makes biological re-animation infeasible, but speaking as someone who prepared a lot of neuro slides at uni, it takes more than a few hours for the structure itself to decay heavily, so we should be good for a scan/upload scenario.

    Pertaining to the article, the first hours after death are nowhere near as problematic to the brain's information content as the plastination procedure they're using!

    Brain trauma before death is another matter. Since we don't have the capability to create backups or checkpoints of our neuronal structure, what's physically destroyed is simply lost beyond recovery. However, for example in aggressive brain cancers, a functional copy of the person might still be recovered in principle even if their neocortex was severely compromised, as long as the actual data is still there.

  • ianpurton 11 years ago

    If people learned to take regular backups, this wouldn't be a problem.

  • jimrandomh 11 years ago

    I agree. But I also think this work (and future work like it) will help with that; having convincing proof that a preservation process works will make it easier to get obstacles to prompt preservation out of the way.

  • imaginenore 11 years ago

    You're assuming this "grey mush" can't be recovered. We don't actually know that. A sufficiently advanced AI should be able to recover a person from way less.

    • davidgerard 11 years ago

      You're assuming the phrase "a sufficiently advanced AI" answers anything at all.

      Presumably you're assuming if the information is there at all - if the necessary data hasn't been scrambled beyond the noise floor of the scrambling process - then there's something for magic (because you're really talking about magic here) to work with.

      So, please (a) set out your claim with precision (b) back up your claim.

      * What is the information you need to recover?

      * To what degree is it scrambled?

      * What of it is scrambled below the noise floor of the process?

      * How do you know all this? (wrong answer: "here's a LessWrong/Alcor page." right answer: "here's something from a relevant neuroscientist.")

      For comparison: even a nigh-magical superintelligent AI can't recover an ice sculpture from the bucket of water it's melted into. It is in fact possible to just lose information. So, since you're making this claim, I'd like you to quantify just what you think the damage actually is.

      • SCHiM 11 years ago

        I'm quite sure I've read somewhere that information cannot be lost in the absolute sense, lost to us: yes, lost irrevocably and irrefutably: no.

        In that sense, 'a sufficiently advanced AI' is not magic, because when people say that they definitively have something in mind, at least the people I often discuss this with do.

        In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible, if only you can look deep/far/fast enough.

        If you want to be theoretical about it, then yes. There is probably an upper bound on how smart/big an AI mind can possibly be. And thus there is a limit on how much information it can extract from arbitrary systems. So I agree with your assertion that there is information that even the smartest of all AIs cannot possibly reconstruct, but I'm not sure that the brain is such a structure.

        Any justification about why/how 'a sufficiently advanced AI' could come about is more questionable.

        Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that on the one hand AI-doomsday speakers tell us not to anthropomorphise the motives of an AI (for good reasons), but on the other hand they apply human reasoning/understanding to predict such machines/patterns.

        • davidgerard 11 years ago

          > I'm quite sure I've read somewhere that information cannot be lost in the absolute sense, lost to us: yes, lost irrevocably and irrefutably: no.

          This is probably not quite at the requested standard of backing up a claim, and sounds very like "but you can't prove it isn't true!" But I'm not the one making a claim.

          In any case, please back up your claim. What is "the absolute sense"? How does it differ from "in a practical sense", with examples?

          > In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible, if only you can look deep/far/fast enough.

          Noise floor. In this case, thermal noise.

          Also, you literally can't know that much about all the molecules in your puddle of goo. (Heisenberg.) We do not live in a Newtonian universe.
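
          A toy sketch of the noise-floor point (an illustrative example, not a physics model): in a chaotic system, two states that differ by less than any plausible measurement noise floor diverge exponentially under the dynamics, so running the system backward from a measured state cannot recover the true history.

```python
# Illustrative toy model: the logistic map at r = 4 is a standard
# chaotic system. A perturbation far below any plausible noise floor
# is amplified exponentially, so the "true" and "measured" histories
# become completely decorrelated after a few dozen steps.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

true_state = 0.3
measured = 0.3 + 1e-12  # error twelve orders of magnitude below O(1)

a, b = true_state, measured
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# max_gap saturates at order 1: the initial distinction is unrecoverable.
print(max_gap)
```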

          • SCHiM 11 years ago

            http://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_i...

            http://phys.org/news/2014-09-entropy-black-holes.html

            http://phys.org/news/2014-09-black-hole-thermodynamics.html

            Ben Crowell, PhD in physics:

            http://physics.stackexchange.com/questions/83731/entropy-inc...

            The reason I didn't/don't back up those claims is that I'm really not knowledgeable about these subjects. I'm not sure how good or legitimate the sources are, but I did read it somewhere, even if I can't interpret the technical jargon or give more nuance to my claim given my limited understanding of the subject.

            A bit of googling for "can information be lost" or "conservation of information" turns up the articles I linked to above.

            But you have dodged my refutation of your initial claim, the one I was really responding to: 'sufficiently advanced AI' is not just a stop-gap word for magic. In this case it doesn't stand for "I don't know how or why, but this and this"; it stands for "I don't know why (in the motivational sense), but given a bigger brain one can use and interpret finer instruments, which in turn enables us to extrapolate further back in time".

            • davidgerard 11 years ago

              None of your links support your claim that winding the clock back is even theoretically possible, and the stackexchange link seems to say it isn't: "The resolution is that entropy isn't a measure of the total information content of a system, it's a measure of the amount of hidden information, i.e., information that is inaccessible to macroscopic measurements." Even if you're assuming a physical God, that physical God can't get good enough measurements.

              • SCHiM 11 years ago

                I think that perhaps our views of the world are slightly off-kilter/incompatible.

                I agree with you that even godlike-AI must have an upper bound on what they can extract from a 'puddle of atoms'. It's obvious that given a handful of atoms it's not possible to predict what happened to a completely different bunch of atoms 5 billion years ago at the other side of the (observable) universe. That's also not what I'm claiming.

                What I do claim is that, given enough smarts, it's possible to do this to a bunch of molecules present in the brain-goo.

                I'm assuming here that whatever it is that makes the brain 'tick' is located on the molecular level, and not a lower level.

                As to your claim of being able to 'turn back time', don't we do this all the time?

                If we look at the link we've both referenced: say we had two pictures of the last milliseconds of the book falling, and we knew the exact time between when these pictures were taken, then we can turn back time, right? We know exactly how/when/where the book was if we can interpret those pictures.

                In a similar way, the information about the locations of the molecules in the 'brain goo' is available to a 'sufficiently advanced AI'. Thus what I'm arguing is that this is not information that is 'lost' in the way that we've been discussing so far.

                Therefore it's also not 'magic' when people refer to such AI, because when they do they have this in mind. Not some law-bending/breaking super godlike-ai, but rather a system with the resources needed to stitch together the complete video from the last two images.

                • davidgerard 11 years ago

                  > I think that perhaps our views of the world are slight off-kilter/incompatible.

                  Yeah, possibly. I blame LessWrong fatigue. It's an entire site made of handwavy claims that, no matter how far you trace back through the links, never quite actually get backed up. So I tend to be harsh on similar claims, particularly when they appear to be from that sphere (judging by the buzzword "sufficiently advanced AI", which is in practice used to put forward outlandish claims and then try to reverse the burden of proof).

                  I actually started reading the site because of a friend who was getting into cryonics. I'd hitherto been neutral-to-positive on the idea, but the more I investigated it the more I went "what the hell is this rubbish." (Writeup is at http://rationalwiki.org/wiki/Cryonics which is a very middling article, and is still about the best critical article available on the subject ...) The handwavy claims are endemic, quite a few rely on effective magic (actual answers from cryonicist: "But, nanobots!" or "sufficiently advanced AI") and it really is largely just ill-supported guff, even if I'm being super-charitable to the arguments. Extracting a disprovable claim is nearly bloody impossible itself.

                  > As to your claim of being able to 'turn back time', don't we do this all the time?

                  > If we look at the link we've both referenced, say we had two pictures of the last milliseconds of the book falling, and we knew the exact time between when these pictures were taken then we can turn back the time right? We know exactly how/when/where the book was if we can interpret those pictures.

                  But we couldn't do that if the data had been destroyed. That's the claim way up there: the information is recoverable from the mashed-up goo. The two pictures have been destroyed, we have the book sitting on the floor, there's nothing to reconstruct the fall in sufficient detail.

                  I say this because whenever I've seen an actual neuroscientist who's been asked this sort of question (can we recover the information with a magic AI or whatever), they answer "wtf, no, it's been utterly trashed. No, not even in theory. You can't even measure it. It's been trashed utterly." The questioner usually comes back with "but if we use a SUFFICIENTLY ADVANCED AI ..." i.e., if we let them assert their conclusion. And first they'd have to show you could measure stuff on the nanometre scale without messing it up. Let alone, e.g., reconstructing the precise locations of proteins in a cell after they've been denatured by cryoprotectant. Remember that it's a claim about physical reality that's being made here.

                  (A couple of examples, from scientists who would LOVE to be able to preserve and get back this information: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson... http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyo... )

                  > In a similar way, the information about the locations of the molecules in the 'brain goo' is available to a 'sufficiently advanced AI'.

                  Remember that there is no way to distinguish two molecules of the same substance. You're requiring more information than can actually be measured (Heisenberg).

        • eli_gottlieb 11 years ago

          > I'm quite sure I've read somewhere that information cannot be lost in the absolute sense,

          Quantum information is, in some decently well-regarded theories, a conserved quantity. Classical information is not, in any major theory.

          > In short: if you're smart (fast, precise, determined) enough to look at the individual molecules of a puddle of brain-goo, and if you can infer the way it collapsed by tracing those molecules back through how they collided with each other and the walls of your mold, then it should be possible to reconstruct the spatial form of the brain, at least. That's a pretty big IF, obviously, but equally obviously not impossible, if only you can look deep/far/fast enough.

          Again: classical information is not a conserved quantity. A puddle of brain-goo will likely tell you more than a puddle of non-brain goo about the person who used to be that brain, but there is a very strong limit to what it can tell you. You cannot, so to speak, extrapolate the universe from one small piece of fairy cake.

          (Disclaimer: I have previously donated to the Brain Preservation Foundation precisely because I think the issue deserves investigation by mainstream, non-wishful scientists so that people who want to... whatever it is they're planning on, can do it.)

          > Many knowledgeable people are making guesses based on our current understanding of intelligence/computation/AI, and then extrapolating. The paradoxical thing is that on the one hand AI-doomsday speakers tell us not to anthropomorphise the motives of an AI (for good reasons), but on the other hand they apply human reasoning/understanding to predict such machines/patterns.

          The thing about "sufficiently advanced AI" is that it dodges the basic issues. A sufficiently advanced AI is just a machine for crunching data into generalized theories. It can only learn theories in the presence of data. Admittedly, the more data it gets from a broader variety of domains, the more it can form abstract theories that give usefully informed prior knowledge about what it can expect to find in new domains. But if it can use detailed knowledge about brains-in-general to reconstruct a puddle of brain-goo into a solid model of a human brain, solid enough to "make it live", that's not because of some ontologically basic "smartness" about the AI, it's because the AI has the right kind of learning machinery for crunching data about specific and general things together to allow it to learn and utilize very large sums of domain knowledge. These sums could possibly be larger than any individual human might obtain in the course of a single 20-year education, from kindergarten to PhD, but the key factor in "AI's" understanding of the natural sciences will ultimately be experimental data and domain knowledge derived from experimental data.
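
          The non-conservation of classical information can be seen in a toy sketch (an illustrative example, not a claim about brain physics): any many-to-one map destroys information, because once two distinct states have been merged, the inverse needed to tell them apart simply does not exist.

```python
# Illustrative toy model: a "lossy" (many-to-one) step merges nearby
# microstates. No computation, however powerful, can invert it,
# because two distinct pasts now share one identical present.

def lossy_step(state):
    """A thermalising step: nearby microstates collapse to one value."""
    return round(state, 1)

past_a, past_b = 0.2401, 0.2499  # two distinct pasts
now_a, now_b = lossy_step(past_a), lossy_step(past_b)

print(past_a == past_b)  # False: the pasts differed
print(now_a == now_b)    # True: the merged presents are identical
```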

      • atrus 11 years ago

        Sure it can. Look at a photograph, make a mould and find a freezer :P

      • M8 11 years ago

        If it's a godlike AI it can simulate humankind in reverse and resurrect everyone's consciousness before each person's death.

        • eli_gottlieb 11 years ago

          That is absolute bunk that defies the basic principles of computational complexity and information theory in almost every possible way.

    • danieltillett 11 years ago

      The problem is that the information is encoded in the arrangement of the neural connections. Once your brain has reached the grey mush stage, all those connections are lost.

      • htns 11 years ago

        Aren't neurons relatively sizable? I don't see why they would totally "decay" in a short time frame, but maybe it's because I'm ignorant.

    • M8 11 years ago

      Such an AI would only be able to recover a gradient set of possible personalities, depending on the percentage of missing data. Something like 1,000,000 slightly different permutations of you.
