High-res image reconstruction with latent diffusion models from human brain

github.com

459 points by trojan13 3 years ago · 169 comments

Aransentin 3 years ago

I immediately found the results suspect, and think I have found what is actually going on. The dataset it was trained on was 2770 images, minus 982 of those used for validation. I posit that the system did not actually read any pictures from the brains, but simply overfitted all the training images into the network itself. For example, if one looks at a picture of a teddy bear, you'd get an overfitted picture of another teddy bear from the training dataset instead.

The best evidence for this is a picture(1) from page 6 of the paper. Look at the second row. The buildings generated by 'mind reading' subjects 2 and 4 look strikingly similar to each other, but not very similar to the ground truth! From manually combing through the training dataset, I found a picture of a building that does look like that, and by scaling it down and cropping it exactly in the middle, it overlays rather closely(2) on the output that was ostensibly generated for an unrelated image.

If so, at most they found that looking at similar subjects lights up similar regions of the brain, and putting Stable Diffusion on top of it serves no purpose. At worst it's entirely cherry-picked coincidences.

1. https://i.imgur.com/ILCD2Mu.png

2. https://i.imgur.com/ftMlGq8.png
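
For reference, the overlay check described above can be sketched roughly like this. Filenames and the scale factor are placeholders, not the exact values used; it only shows the shape of the comparison.

  # Scale a candidate training image down, crop it in the middle, and measure
  # how closely it matches the generated output.
  import numpy as np
  from PIL import Image

  generated = Image.open("generated_subj2_building.png").convert("L")
  candidate = Image.open("training_candidate.png").convert("L")

  w, h = generated.size
  scale = 0.6  # assumed; found by eye in practice
  cand = candidate.resize((int(candidate.width * scale), int(candidate.height * scale)))
  left, top = (cand.width - w) // 2, (cand.height - h) // 2
  cand = cand.crop((left, top, left + w, top + h))

  a = np.asarray(generated, dtype=np.float64)
  b = np.asarray(cand, dtype=np.float64)
  a, b = (a - a.mean()) / a.std(), (b - b.mean()) / b.std()
  print("normalised cross-correlation:", float((a * b).mean()))  # near 1.0 = close overlay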

  • sillysaurusx 3 years ago

    I don’t get the criticism here. Normally I’d be the first to err on the side of skepticism, but this work seems above board.

    I think the confusion is that this model is generating “teddy bear” internally, not a photo of a teddy bear. I.e. the diffusion part was added for flair, not to generate the details of the images that exist inside your mind. They could just as easily have run print(“teddy bear”), but they’re sending it to diffusion instead of printing it to console.

    The fact that it can correctly discern between a dozen different outputs is pretty remarkable. And that’s all that this is showing. But that’s enough.

    It’s not really a “gotcha” to say that it’s showing an image from the training set. They could have replaced diffusion with showing a static image of a teddy bear.

    It sounds like this is many readers’ first time confronting the fact that scientists need to do these kinds of projects to get funding. As long as they’re not being intentionally deceptive, it seems fine. There’s a line between this and that ridiculous “rat brain flies plane” myth, and this seems above it.

    Disclaimer: I should probably read the paper in detail before posting this, but the criticism of “the building looks like a training image” is mostly what I’m responding to. There are only so many topics one can think about, and having a machine draw a dog when I’m thinking about my dog Pip is some next-level sci-fi “we live in the future” stuff. Even if it doesn’t look like Pip, does it really matter?

    Besides, it’s a matter of time till they correlate which parts of the brain are more prone to activating for specific details of the image you’re thinking about. Getting pose and color right would go a long way. So this is a resolution problem; we need more accurate brain sampling techniques, i.e. Neuralink. Then I’m sure diffusion will get a lot more of those details correct.

    • Aransentin 3 years ago

      Because pretty much everybody that reads the article will have taken away a grossly exaggerated idea of what the system is actually capable of. If Stable Diffusion was intentionally added "for flair" and really is unnecessary, then I would absolutely say that the researchers were being intentionally deceptive.

      Even if we do a massive goalpost-move and grant that the system is only identifying the label "dog" from a brain scan of a person looking at a dog, we would need to see actual statistics of its labelling accuracy before judging it in that way. If the images in the paper are cherry-picked(1), it could easily be extracting only a handful of bits, or no bits at all, and the entire thing could very well turn out to be replicable from random noise.

      (1) Note that the paper even states "We generated five images for each test image and selected the generated images with highest PSMs [perceptual similarity metrics].", so it directly admits that the presented images are cherry-picked to at least that extent.

      • williamcotton 3 years ago

        It’s more like this:

        We can take fMRI scans when people are looking at images and generate blurry blobs that do indeed resemble the images spatially.

        We can predict a text label of the image the person is looking at using another technique.

        If you use SD just on the text labels and generate an image, you get the semantic content, but not the spatial content.

        If you combine the image and the text label and run it through an LDM then you get pictures that more closely match both the semantic and spatial characteristics of the images shown to the person.
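
        A minimal sketch of that kind of pipeline (not the authors' code; X is a trials-by-voxels fMRI matrix, Z and C are precomputed image latents and text/CLIP embeddings for the same trials, and run_ldm stands in for Stable Diffusion's image-to-image step):

          from sklearn.linear_model import Ridge

          def fit_decoders(X_train, Z_train, C_train, alpha=1e3):
              # One linear map per LDM component, fit on training trials only.
              z_model = Ridge(alpha=alpha).fit(X_train, Z_train)  # fMRI -> image latent (spatial)
              c_model = Ridge(alpha=alpha).fit(X_train, C_train)  # fMRI -> text embedding (semantic)
              return z_model, c_model

          def reconstruct(x_test, z_model, c_model, run_ldm):
              z = z_model.predict(x_test[None, :])[0]  # blurry but spatially arranged
              c = c_model.predict(x_test[None, :])[0]  # carries the category-level content
              return run_ldm(init_latent=z, conditioning=c)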

        • sillysaurusx 3 years ago

          That’s my understanding as well. It all depends on whether their technique really can do this. If it can, it’s solid work imo. If it can’t (better than random chance), then it’s bunk.

          There’s not much way to know other than to try it and see. But that’s true of almost every paper in ML. Some of them suck, some of them are great, but they all contribute something in their own way. Even the “rat brain flies plane” paper (as much as I despise it) showed that you can change the values of rat neurons in a lab setting.

  • Hakkin 3 years ago

    I'm definitely not an expert in this subject, but even if the model is overfitted, doesn't the fact that it can pull out similar images at all lend credence to the idea that a larger, non-overfitted model could actually work as the paper describes? It means that there does exist some correlation between the shown subject, the captured fMRI data, and the resulting location in latent space.

    • Double_a_92 3 years ago

      The output part is basically nonsense. It would be more honest if the output were text, e.g. "teddy bear", instead of a bad image of a random teddy bear.

      • Hakkin 3 years ago

        In this specific case I agree: since the model may be overfitted, it seems like it's currently just a glorified object classifier over what was in the training data. But the fact that it works at all may indicate that the underlying idea has merit. They would probably have to train a much larger network to see if it's able to separate features distinctly enough using the input fMRI data to be useful.

        • gus_massa 3 years ago

          The problem is that it's impossible to know what is in the fMRI data and what is hallucinated by the reconstruction.

          In this case, the real bear has a blue ribbon and the "reconstructed" bear has a red ribbon. Is the ribbon in the fMRI data and the computer chose the wrong color, or did most of the images in the training set have ribbons and the computer just added one?

          Imagine something like this is used in the future to produce something like https://en.wikipedia.org/wiki/Facial_composite . People may give too much importance to the details and arrest someone only because the computer imagined some detail, like the logo on a baseball cap.

          • mkagenius 3 years ago

            > Imagine something like this is used in the future to produce something like https://en.wikipedia.org/wiki/Facial_composite . People may give too much importance to the details and arrest someone only because the computer imagined some detail, like the logo on a baseball cap.

            Wow, we went from "tech not working" to "tech might kill someone" super fast here.

            • YeGoblynQueenne 3 years ago

              In the real world when tech doesn't work people die.

              OP is right to be concerned. This kind of tech (magickal mind-reading AI?!) is going to be bought up by security agencies, who will not understand its limitations and will misuse it to accuse people of crimes they aren't related to.

              There is ample precedent. Just for one recent example see plans to use an "AI lie detector" based on discredited pseudo-science at EU borders:

              https://theintercept.com/2019/07/26/europe-border-control-ai...

              • gus_massa 3 years ago

                Exactly.

                For example, please read this old article very carefully: "Police Are Using DNA to Generate 3D Images of Suspects They've Never Seen" https://www.vice.com/en/article/pkgma8/police-are-using-dna-... HN discussion https://news.ycombinator.com/item?id=33527901 (6 points | 3 months ago | 1 comment)

                The picture is a high resolution image that makes the system look accurate. They don't use the AI buzzword, but my guess is it's only a matter of time. Anyway, the important paragraph is:

                > Seeing the composite image with no context or knowledge of DNA phenotyping, can mislead people into believing that the suspect looks exactly like the DNA profile. “Many members of the public that see this generated image will be unaware that it's a digital approximation, that age, weight, hairstyle, and face shape may be very different, and that accuracy of skin/hair/eye color is approximate,” Schroeder said.

        • moron4hire 3 years ago

          It's not an object classifier at all. They had to text-prompt the system first. I think the general idea is using the fMRI data as the pseudorandom initialization for the latent diffusion model to explore.

          From what I understand, regular Stable Diffusion starts by generating noise and then hallucinating modifications of that noise to make it less noisy. The more you let it run, the better the results.

          So instead of just starting with a meaningless random noise, they're using the fMRI data to start. But if you didn't have the text prompt, you wouldn't get the right image. If you were looking at a cat but told it you were looking at a house, you'd probably end up with a small house, similar to one in its training set, positioned roughly where the cat was located in the original image.
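
          If that reading is right, the only change from vanilla Stable Diffusion is the starting point of the reverse process: a partially noised init latent instead of pure noise. A rough sketch of that forward-noising step (standard DDPM algebra; the denoiser and the noise schedule are assumed to come from the usual pipeline):

            import numpy as np

            def noisy_start(init_latent, t, alpha_bar):
                # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
                eps = np.random.standard_normal(init_latent.shape)
                return np.sqrt(alpha_bar[t]) * init_latent + np.sqrt(1.0 - alpha_bar[t]) * eps

            # The reverse (denoising) loop then runs from step t down to 0, conditioned
            # on the text prompt, exactly as in ordinary image-to-image generation.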

          • Hakkin 3 years ago

            Briefly reading the paper, it seems they trained two models (using data from different stages in the visual cortex) to generate latent vectors for both the visual and textual representations of the fMRI data, then fed those into Stable Diffusion. Those are the models that would be overfit in this case, so instead of being able to encode features like "toy, animal, fluffy, brown, ears, nose, arms, legs" individually, they are likely just encoding all of those features combined into a generic "teddy bear" because the input dataset is too small. Obviously this is an oversimplification, but hopefully you get what I mean. I didn't mean it was literally an object classifier, but that a model like this, with a dataset so small, doesn't have the ability to extrapolate fine details. With a larger dataset and more training, it may be able to actually do that.

        • dr_dshiv 3 years ago

          My colleagues did the same, but with EEG. This makes the technique much more accessible: https://arxiv.org/abs/2302.10121

          One open question in the field: how to assess the alignment of the AI outcomes across different methods?

      • angusturner 3 years ago

        Largely agree with this, although I think it would be interesting to formulate in terms of: "what is the mutual information between the fMRI scan and the stimulus".

        i.e., is there actually more information than a few bits encoding a crude object category, from which Stable Diffusion then hallucinates the rest (or regurgitates an over-fit image)?

        Or are there many bits, corresponding spatially to different regions of the stimulus, allowing for some meaningful degree of generalization?
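
        One rough way to put a number on that, assuming the decoder's output can be reduced to discrete labels: estimate the mutual information from a confusion matrix of true vs. decoded categories. This only captures category-level bits, not spatial detail, and by the data-processing inequality it lower-bounds the information the fMRI actually carries about the stimulus.

          import numpy as np

          def mutual_information_bits(counts):
              # counts[i, j] = trials whose true class i was decoded as class j
              p = counts / counts.sum()
              px = p.sum(axis=1, keepdims=True)
              py = p.sum(axis=0, keepdims=True)
              nz = p > 0
              return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

          print(mutual_information_bits(np.eye(8) * 10))  # perfect 8-way decoding = 3.0 bits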

    • hiddencost 3 years ago

      Nope.

      If you train a model where the input is an integer between 1 and 10, and the output is a specific image from a set of ten, the model will be able to get zero loss on the task. That is what's happening here.
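
      For what it's worth, that degenerate case fits in a few lines (filenames are placeholders):

        # A "model" that memorises a lookup from input id to training image
        train_images = {i: f"image_{i:02d}.png" for i in range(1, 11)}

        def memorised_model(x):
            return train_images[x]  # zero loss on the training set...

        assert all(memorised_model(i) == train_images[i] for i in train_images)
        # ...but nothing here could generalise to an input it hasn't seen.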

      • geysersam 3 years ago

        Yes, but the input isn't an integer from 1 to 10, right? It's fMRI data.

        Although it seems they're only able to extract the subject of the brain activity, not any actual "pictures".

      • darawk 3 years ago

        Are you saying the demonstrated results are all in sample? Because this is definitely not true for out of sample data. And the GP comment implies that there is in fact a validation/holdout set.

        • qumpis 3 years ago

          I'm also confused by this. If everything was done properly, test results on the holdout set would've been shown. Wasn't that the case?

      • radu_floricica 3 years ago

        It's still a legitimate direction to pursue. Once you get to large enough training sets, it's basically the same way our own brains work. We don't perceive or remember all the details of a building - just "building, style 19B", plus a few extra generic parameters like distance, angle, color and so on. Totally manageable for deep learning to recognize, and perhaps even combine.

      • williamcotton 3 years ago

        We performed visual reconstruction from fMRI signals using LDM in three simple steps as follows (Figure 2, middle). The only training required in our method is to construct linear models that map fMRI signals to each LDM component, and no training or fine-tuning of deep-learning models is needed. We used the default parameters of image-to-image and text-to-image codes provided by the authors of LDM 2, including the parameters used for the DDIM sampler. See Appendix A for details.

      • csomar 3 years ago

        But unless they tested this on a single human being, doesn't this mean that we can read brains (it's just that this particular reader is bad)?

    • thedudeabides5 3 years ago

      Yes.

      It means there may be signal in the noise. Even if it's overfitting. Which makes sense.

      A sufficiently granular map of the human brain ought to be readable, if you know what the input and output signals are.

    • chaxor 3 years ago

      If things are being overfit you should typically make the model smaller - not larger.

  • kdma 3 years ago

    Good find. When I read it I called bullshit, but I got lost trying to understand the diagrams. Another gotcha is the semantic decoder: they are just looping the model on itself. "A cozy teddy bear" + random fMRI input => a teddy bear!!!

  • arnarbi 3 years ago

    Subject 4 in the first row also looks very different from the ground truth, but is clearly an airliner. I'm curious if there is also a closer match to that one in the set.

  • brucethemoose2 3 years ago

    It's still picking out the correct "overfitted" images, which is remarkable.

    Theoretically, the results would scale to more training images... we just need to fMRI all of LAION-5B. Easy peasy.

  • sampo 3 years ago

    > The dataset it was trained on was 2770 images, minus 982 of those used for validation.

    I don't think you got that 2770 correct. Might be 9250 images, minus 982 (that one you got right). Then again, the paper is so badly written, I find it difficult to decipher what they did. From section 3.1:

    Briefly, NSD provides data acquired from a 7-Tesla fMRI scanner over 30–40 sessions during which each subject viewed three repetitions of 10,000 images. We analyzed data for four of the eight subjects who completed all imaging sessions (subj01, subj02, subj05, and subj07).

    We used 27,750 trials from NSD for each subject (2,250 trials out of the total 30,000 trials were not publicly released by NSD). For a subset of those trials (N=2,770 trials), 982 images were viewed by all four subjects. Those trials were used as the test dataset, while the remaining trials (N=24,980) were used as the training dataset.

    https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2....
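
    A quick sanity check on those numbers, per subject (the division by three assumes whole images were withheld, which is only roughly true):

      total_trials = 3 * 10_000              # 3 repetitions of 10,000 images
      released     = total_trials - 2_250    # 27,750 trials actually used
      test_trials  = 2_770                   # trials covering the 982 shared images
      train_trials = released - test_trials  # 24,980, matching the quote
      print(released // 3)                   # 9250 -- the image count mentioned above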

  • 2-718-281-828 3 years ago

    There is also no way you could represent details as shown with such a small sample.

  • bawolff 3 years ago

    Even if true, the result still seems very impressive to me as a layman.

    • gfaure 3 years ago

      That’s the whole problem — that the reconstruction aspect of the contributions seems overstated given only a layperson’s understanding.

  • SubiculumCode 3 years ago

    I feel like you might be moving the goal posts here a bit. Getting a reconstruction that is a bear, even if not the same bear, is impressive enough to be noteworthy.

    • xmonkee 3 years ago

      I think the point is that it's not a reconstruction. It's more like recognizing which letter of a thousand-letter alphabet is shown to the human after decoding their brain waves. Still impressive, but not really as impressive as visual reconstruction.

      • groestl 3 years ago

        TBH, I was not impressed up until now, but given the videos I have in mind of people trying to use brain-computer interfaces to type text, now I'm impressed.

        • pedrosorio 3 years ago

          fMRI is useless for that purpose - latency is much higher than any BCI method you might’ve seen in those videos

  • razor_router 3 years ago

    What evidence do you have that this technique is overfitting the training data rather than reading the brain?

  • ditchfieldcaleb 3 years ago

    What are you talking about? They didn't train a model for this. That's why it's so impressive.

    • Hakkin 3 years ago

      Quoting from the paper,

        The only training required in our method is to construct
        linear models that map fMRI signals to each LDM component,
        and no training or fine-tuning of deep-learning models is
        needed.

        ...

        To construct models from fMRI to the components of LDM, we
        used L2-regularized linear regression, and all models were
        built on a per subject basis. Weights were estimated from
        training data, and regularization parameters were explored
        during the training using 5-fold cross-validation.
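
      For concreteness, a minimal sketch of that regression step, with toy arrays standing in for one subject's fMRI matrix and one LDM component (shapes and the alpha grid are assumed, not from the paper):

        import numpy as np
        from sklearn.linear_model import RidgeCV

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 500))   # fMRI: trials x voxels (toy sizes)
        Y = rng.standard_normal((300, 64))    # one LDM component per trial, flattened

        alphas = np.logspace(1, 5, 9)                   # regularisation grid explored by CV
        model = RidgeCV(alphas=alphas, cv=5).fit(X, Y)  # L2 regression, 5-fold cross-validation
        decoded = model.predict(X[:5])                  # decoded components for a few trials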

2bitencryption 3 years ago

Are any of the example images novel, i.e. new to the model? Or is the model only reconstructing images it has already seen before?

Either way, if I'm understanding right, it's very impressive. If the only input to the model (after training) is a fMRI reading, and from that it can reconstruct an image, at the very least that shows it can strongly correlate brain patterns back to the original image.

It'd be even cooler (and scarier?) if it works for novel images. I wonder what the output would look like for an image the model had never seen before? Would a person looking at a clock produce a roughly clock-like image, or would it be noise?

All the usual skepticism to these models applies, of course. They are very good at hallucinating, and we are very good at applying our own meaning to their hallucinations.

  • andai 3 years ago

    There was a video many years ago (early 2010s?) demoing a similar technology, which would overlay and blend many images on top of each other to make a fuzzy image approximating what was actually being viewed.

    Edit: found it! https://youtu.be/nsjDnYxJ0bo

    • ricudis 3 years ago

      The YouTube video quotes a paper by the same author, so it's probably the same group's work. I wonder why they didn't use an approach similar to the one in the video with SD - it looks more viable.

crispyambulance 3 years ago

In 1990, there was a train-wreck Wim Wenders movie that I loved and still love called "Until the End of the World". It was about a scientist (played by Max Von Sydow) who created a machine that could record someone's dreams or visual experiences directly from the brain and play them back even to a blind person. https://youtu.be/gilzgbdk300?t=442

Anyways, the images depicted in this work of fiction, shot in 1990 about "the future" of 2000, had a very interesting look to them -- kind of distorted and dreamy, like the images in the paper.

Are the images in the paper just a case of overfitting? ¯\_(ツ)_/¯ but it still makes me giddy remembering the Wim Wenders film.

  • dustractor 3 years ago

    Such a great soundtrack too! I rewatched it last week just for the jams. Also, for those into the glitch art genre, the dream sequences were WAY ahead of their time.

donohoe 3 years ago

As people and groups increasingly move in this direction, do we think about vectors for abuse in 10, 20 or 50+ years?

The human mind is considered the only place where we have true privacy. All these efforts are taking that away.

At this rate all notions of privacy will soon be dead.

  • andai 3 years ago

    There was a guy at MIT about ten years ago (edit: 2018! Woah) who made a headset that would read electrical impulses from your face. Apparently when people think in words, the same nerves fire as when they speak, just at a lower activation level. Using those signals it is possible to reconstruct the words being thought.

    I'm surprised it didn't seem to go anywhere.

    Edit: found it https://youtu.be/RuUSc53Xpeg

    • zamadatix 3 years ago

      The project page claims it's a different process than thinking in words:

      > What exactly is “silent speech”? Does the user have to move his or her face or mouth to use the system?

      > Silent speech is different from either thinking of words or saying words out loud. Remember when you first learned to read? At first, you spoke the words you read out loud, but then you learned to voice them internally and silently. In order to then proceed to faster reading rates, you had to unlearn the “silent speaking” of the words you read. Silent speaking is a conscious effort to say a word, characterized by subtle movements of internal speech organs without actually voicing it.

      > Can this device read my mind? What about privacy?

      > No, this device cannot read your mind. The novelty of this system is that it reads signals from your facial and vocal cord muscles when you intentionally and silently voice words. The system does not have any direct and physical access to brain activity, and therefore cannot read a user's thoughts. It is crucial that the control over input resides absolutely with the user in all situations, and that such an interface not have access to a user's thoughts. The device only reads words that are deliberately silently spoken as inputs.

      https://www.media.mit.edu/projects/alterego/frequently-asked...

    • judge2020 3 years ago

      > I'm surprised it didn't seem to go anywhere.

      At least not publicly.

      • danem 3 years ago

        There have been some corporate research groups that have tried to take this approach further, and they all have more or less failed as far as I know.

    • Filligree 3 years ago

      People only think in words right before they say something, so I'm not sure how big a deal this is. I guess they'd be able to predict what I'm writing half a second before I write it?

      Would be useful if I lost the ability to write or speak, for whatever reason.

      • Symmetry 3 years ago

        The extent that people's thinking relies on inner monologue is something that varies wildly between different people. Likewise people's abilities to form mental images.

      • slingnow 3 years ago

        Citation definitely needed. I have a nearly constant internal monologue that is 100% composed of words.

        • joadha 3 years ago

          I only have that when I'm reading something and really trying to take it all in.

          Otherwise my internal monologue is a combination of notions, visions, and words.

          Do you think in complete sentences?

        • TechBro8615 3 years ago

          Do you know what it's going to say next? If not, who's generating each new word? And if yes, then isn't this just subvocalization?

          • l33t233372 3 years ago

            > If not, who's generating each new word

            Do you know what your next thought will be? If so, how? Did you think it before you thought it?

          • None4U 3 years ago

            > who's generating each new word?

            Thinking

      • judge2020 3 years ago

        Many people do have an internal monologue. The vector is that some police unit presents you with a login form (e.g. for your password manager or encrypted filesystem), and you involuntarily think of the password, which this device reads and presents to them.

        • airstrike 3 years ago

          Joke's on them, my passwords are entirely unpronounceable

          • kzrdude 3 years ago

            only my fingers know my passwords. And no way all ten will rat me out

            • djmips 3 years ago

              oh yeah, maybe they'll put electrodes on your fingers! JK - this doesn't seem like a viable approach.

      • squeaky-clean 3 years ago

        I'm "thinking in words" this entire thread as I read it. Do some people read without hearing the words in their head?

        • TechBro8615 3 years ago

          Yeah, personally - and this might be a controversial opinion - I think most people who say they have an inner monologue are actually misattributing their experience of subvocalization while either reading text, or planning hypothetical conversations with other people (like a child might speak to their imaginary friend). It seems dubious to label that a monologue, because it depends on external stimuli (either the text you're reading, or your previous experience with people to whom you're imagining yourself speaking).

          You might classify talking to yourself as a monologue, but when most people discuss this topic, it sounds like they're describing a dialogue (i.e. one between multiple people). That seems crazy to me, because who does the other voice belong to?

          If you have an inner monologue, then by definition you should be able to predict what it's going to say - because mono- means just you. Yet people talk about this experience like there is some novel conversation happening in their head.

          It's frightening to think about a voice in my head that is one thought ahead of me. If you experience this, how can you possibly feel in control of your own mind? When does an inner monologue become schizophrenia?

          • burnished 3 years ago

            Wow, this is interesting, I thought pretty much everyone had an internal monologue. It feels like a colorblind person sharing the suspicion that everyone is just making up this other color spectrum.

            • berniedurfee 3 years ago

              Yeah, really interesting, who’s neurodivergent in this case?

              I guess we’ll soon have a device that can find out!

          • slackdog 3 years ago

            > or planning hypothetical conversations with other people

            It's not that, but it's related to that. It is done in the same manner, with you imagining what the other person would say.

            Have you heard of the "rubber duck" method of debugging? The idea is that you put a little rubber duck on your desk and whenever you get stumped by a problem, you explain the problem to the duck and, as you put the problem into words, your brain figures out the answer.

            Well it turns out that it works for many tricky problems besides programming, you don't really need the duck, you don't actually have to use your mouth, and you can do it entirely in your own head, having the exchange with an imagined "reasonable person" (who is just another aspect of yourself consciously playing the role.) The key insight is that language is a tool for thinking, expressing problems in language can help your brain reason them out. Once you realize this, you should be able to consciously choose to have conversations with yourself as a tool for figuring problems out.

            > That seems crazy to me, because who does the other voice belong to?

            Me, obviously. The process is that of authoring a dialogue. If you write a short fictional story about you explaining your problems to a wise sage who asks lots of questions and then tries to come up with a reasonable answer, who is the sage? It's your creation as an author. Now do this process without the pen, just in your head. Who is the sage? The sage is still your creation, it's still an aspect of you, slightly divorced from your ego because you're deliberately playing a role when you imagine what such a sage would say about your situation. But it's obviously still you, it's not a foreign voice in your head disconnected from your conscious will. It's not schizophrenia, it's just a process of 'talking' problems out to figure them out.

            https://en.wikipedia.org/wiki/Rubber_duck_debugging

          • dragonwriter 3 years ago

            > Yeah, personally - and this might be a controversial opinion - I think most people who say they have an inner monologue are actually misattributing either their experience of reading text, or of planning hypothetical conversations with other people.

            No, we aren't. Now it's true that (well, for me at least) the inner monologue is the exact same experience as when reading text or when planning hypothetical conversations. Or when thinking ahead of words to write. The difference is that it is not planning a hypothetical conversation or something to write, and there is no text being read, and it happens pretty much all the time, except when I'm doing one of those other things (and sometimes as intrusive interruptions when I am.) If you imagine there exists a common piece of mental infrastructure that is used for each of those actions, it's as if it were always on doing a narration except when you are specifically, actively concentrating on using it for some other purpose.

            > Hearing voices is schizophrenia.

            No, hearing voices that aren't there is an auditory hallucination. Among the things it can be a symptom of is schizophrenia, but "X can be a symptom of Y" is not the same thing as "X is Y".

            But an inner monologue is not an auditory hallucination. It's obviously and distinctly internal, not something that "sounds" like it is coming from outside.

            > You might classify talking to yourself as a monologue

            Because it literally is.

            > but when most people discuss this topic, it sounds like they're describing a dialogue (i.e. one between multiple people). That seems crazy to me, because who does the other voice belong to?

            I think a lot of people do what amounts to roleplaying out conversations with themselves, particularly on decisions which are troublesome; because this is similar to planning a hypothetical conversation with another person, which I gather people without inner monologues can do without outward speech, I'm not sure how connected it is to an inner monologue. From the perspective of someone with one, it's a fairly easy deliberate "mode switch" where you basically decide that that is what the monologue is going to focus on.

            It's also conceivable that, within plural systems, there is what amounts to an "inner dialogue" or "inner multiparty conversation". Not being a plural system, I can't comment on that and the degree to which it is perceptually different than an inner monologue.

            > If you have an inner monologue, then by definition you should be able to predict what it's going to say - because mono- means just you.

            That...doesn't follow. But if you have an experience of a conscious thought of what you are about to say just instantaneously before you say it, an inner monologue is a lot like that, but without the follow-through of speech. That is, I think you are not only wrong that it is true "by definition" that you should be able to predict an inner monologue, and that this is a false analogy to external speech, but that an inner monologue is perceptually similar to, and may well be fundamentally reusing the same infrastructure as, the natural internal "prediction" (or planning; not sure those things are, in this case, different) of outward-directed speech.

        • ryukafalz 3 years ago

          This tends to start long threads with tons of anecdotes whenever it comes up but the answer is yes, some people do. (I usually do but not always, for example.)

          For lots and lots of discussion, you can search for things like this on e.g. Reddit: https://www.reddit.com/r/NoStupidQuestions/comments/nosdwt/d...

        • Baeocystin 3 years ago

          I don't have a word-based internal dialogue. When it's time to use words, like writing this post, sure, it's word time, so words are used.

          But if I'm just reading, I just take in chunks and phrases while constructing a meaning model. The sounds themselves (or even individual words) don't really enter into it.

          • berniedurfee 3 years ago

            That’s really fascinating and very different from my experience.

            It’d be a really interesting project to measure and classify people’s individual thinking mechanisms. That daemon that seems to exist at the boundary of the conscious and unconscious.

            Then again, maybe we wouldn’t want that as yet another data point to be bought and sold.

  • notfed 3 years ago

    If this technology becomes accessible to courtrooms or police, they will use it. There will never be a way to encrypt thoughts.

    • ryanjshaw 3 years ago

      Prediction: aphantasics will be in high demand for certain roles and activities

    • flockonus 3 years ago

      With all the amazing advances we've seen in the recent years, I'd hope people would now stop thinking "there will never be X".

      • notfed 3 years ago

        Ok, correction: there will never be a way to encrypt human thoughts. If we reach that point, we won't be human anymore.

    • maxerickson 3 years ago

      When they tell you to think about what you were doing at the bank on Tuesday, think about what you were doing at the bank on Friday.

  • omoikane 3 years ago

    If brainwave scanning reaches a point where those instruments are pervasive, I am sure tinfoil hats or some technology similar in spirit will advance accordingly.

  • 323 3 years ago

    Have you seen the size and cost of an fMRI machine?

    We are a long way away from worrying about this.

    Cheap cameras everywhere on the other hand...

  • polski-g 3 years ago

    There is an amazing novel called The Truth Machine by James L Halperin that speaks about this.

  • antegamisou 3 years ago

    > As people and groups increasingly move this direction do we think about vectors for abuse in 10, 20 or 50+ years?

    No, the delusional shortsighted and revenue-driven SV startup culture doesn't give a shit about such 'technophobic trivialities'.

    • ryanjshaw 3 years ago

      Yu Takagi, Shinji Nishimoto

      Graduate School of Frontier Biosciences, Osaka University, Japan

  • berniedurfee 3 years ago

    It feels like we’re at the point in the movie where someone travels back in time to warn humanity of the impending apocalypse soon to be unleashed by our insatiable appetite for technological advancement.

  • roarcher 3 years ago

    > do we think about vectors for abuse in 10, 20 or 50+ years?

    Of course, but what's to be done about it? Should we outlaw research like this?

    • berniedurfee 3 years ago

      No, but maybe we should think ahead and outlaw some of the activities that will abuse this technology.

      It’s not hard to imagine some really terrible ways this can be used.

      A bit of preemptive legislation might be wise as AI is advancing so rapidly.

  • KRAKRISMOTT 3 years ago

    A decade ago we had https://youtu.be/nsjDnYxJ0bo

  • userbinator 3 years ago

    This is something 1984 was slightly hinting at.

    Of course, these "advances" will be praised greatly in MSM as providing great benefits for mutes, "harmonious society", and whatever else happens to be the virtue-signaling fad of the moment.

gus_massa 3 years ago

In case someone missed it, there is a link to more info https://sites.google.com/view/stablediffusion-with-brain/ and to the preprint https://www.biorxiv.org/content/10.1101/2022.11.18.517004v2

ninesnines 3 years ago

I am suspicious of these results; if we blast a high-frequency visual stimulus of a couple of letters and do quite a lot of post-processing, we can sometimes get a visual cortex map of those particular letters. However, the examples in this paper are very complex images and I'm very doubtful of the results - aransentin above made a couple of very valid points.

Lutzb 3 years ago

Reminds me of this paper [1] from 2011. See it in action in [2]

1. https://www.cell.com/current-biology/fulltext/S0960-9822(11)...

2. https://www.youtube.com/watch?v=nsjDnYxJ0bo

Edit: Just realized the paper above is also from Shinji Nishimoto

smusamashah 3 years ago

There was this research where they reconstructed human face images from monkey brain scan. https://www.electronicproducts.com/scientists-reconstruct-im...

What's astonishing here is the quality of reconstruction. But I have not seen this research referenced a lot. Does someone know how/why the reconstruction from a monkey brain looks so perfect while we don't have anything close from a human brain?

Edit: better images here https://www.newscientist.com/article/2133343-photos-of-human...

drzoltar 3 years ago

My understanding is that we won’t get a “mind reader” model out of this, because visual stimuli and imagination are processed in separate parts of the brain. In other words we won’t be reading the minds of suspected criminals anytime soon. Maybe someone with neurology experience can chime in here? Is it even theoretically possible to see what’s happening in the imagination?

  • wongarsu 3 years ago

    In the best (worst?) case the method generalizes well, and you could just replace the training set of fMRI scans of people viewing images with fMRI scans of people asked to recall images they were shown previously, or fMRI scans of people told to imagine a scene based on a verbal description. It's rarely that easy though

  • giantrobot 3 years ago

    > In other words we won’t be reading the minds of suspected criminals anytime soon.

    Oh don't worry, this will get wrapped up in some pseudoscience bullshit and misleading statistics and marketed to law enforcement. But not to worry, at first it'll only be used on real bad criminals. If you have nothing to hide you have nothing to fear!

  • meindnoch 3 years ago

    I have a hunch that any sort of mind-reading machine would have to be tailored uniquely to the individual you want to probe. The internal neural representations likely develop uniquely for each individual.

rvnx 3 years ago

Creepy and cool at the same time. It goes into the bucket of things that are not ethically right, the same way as implanting chips to read monkeys' brains. But technically interesting and well-executed.

  • levzettelin 3 years ago

    What's ethically wrong about this?

    • tiarafawn 3 years ago

      The end game here is developing a mind reading device. The endeavor is ethically questionable because such a device would have a lot of ethically wrong/questionable applications.

      • arbitrage 3 years ago

        You're begging the question.

        • Dylan16807 3 years ago

          No they're not.

          They're not saying "X is bad because X is bad", which would be begging the question.

          They're saying X is bad because it leads to Y, and Y is bad. Y being bad is supposed to be common knowledge, so they didn't go into detail.

      • zepolen 3 years ago

        The only thing that's ethically questionable is humans themselves, and such a device would likely do more to expose the unethical.

        Everybody remembers what happened when online DNA testing hit the mainstream.

        • Veen 3 years ago

          Yeah, that's just what we need: brain scanners purported to "expose the unethical". Nothing could go wrong with that.

          I find it extremely silly when people argue a technology is value neutral and humans are the problem so we shouldn't "judge" or critique the tech. Who do they think is going to use the technology?

        • nhchris 3 years ago

          I think only good thoughts so I'm not worried. Plus anyone with control of this technology is sure to have our best interests in mind.

          • the_af 3 years ago

            Exactly.

            Think good, happy thoughts. Happiness is mandatory. Being unhappy is treason. Treason is punishable by death.

            Have a happy daycycle, citizen!

  • not-my-account 3 years ago

    Why is it unethical to put chips in monkey brains?

    • burnished 3 years ago

      Well, for one, the lived experience for most science monkeys is "torture, execution, autopsy", and the reason we tamper with their brains is due to the similarity.

      I suspect others are taking the wide view but I wanted to point out this direct answer to your question.

      P.S. The use of "torture" is intended in contrast to the neutrality of clinical language on this topic, not as a hint at my judgement on the matter.

    • Tomis02 3 years ago

      How did you get consent to put the chip in?

      • elil17 3 years ago

        Sounds like you're opposed to all animal research, not specifically brain-computer interface research. We also don't ask a monkey's consent before doing any other sort of experiment on it.

        • Tomis02 3 years ago

          I'm not necessarily opposed to it, I simply answered OPs question. It's unethical because there's no consent being given. However, humans do animal research regardless because they view it as "the end justifies the means", and they usually try to use humane methods (which is also not really an excuse because you can't ask the animal if it feels pain, or depression, or anything else).

          • gptgpp 3 years ago

            To be fair, technically it becomes ethical when your position is that "the end justifies the means."

            IMO Utilitarianism is a particularly dangerous ethical framework when wielded by narcissists who have a difficult time imagining that they might catastrophically fail (looking at you Sam Bankman-Fried), or might not be delivering salvation to the world (looking at you Elon, lol).

            If you tell yourself it's for "the good of humanity", or the alternative is destruction / widespread death, you can justify any action.

      • xattt 3 years ago

        You ask for consent through the brain-computer interface.

        /s

        • rvnx 3 years ago

          Conclusion from scientists: "If monkey drinks the smoothie, then it means he wants a surgery."

    • gptgpp 3 years ago

      Oooh I love moral philosophy.

      Here's one deontological perspective you could take:

      It is always wrong to cause unnecessary suffering to others.

      Now, the subjective traits to be considered here are "unnecessary" and "suffering."

      It used to be a common belief that animals lacked the capacity to suffer as humans do. They could feel pain, nociception sure, but whether it caused complex psychological suffering (torment) used to be contentious.

      Today, this is certainly not contentious for our closest relatives. All primates possess a theory of mind, long memory, emotional states, complex social behavior like lasting bonds and altruism, and other traits necessary to suffer in significant ways.

      So the focus (as long as we are considering primates, obviously nobody cares about model species like Drosophila as they have a greatly diminished capacity to suffer (edit: although I should mention that ranking things based on capacity to suffer leads to pretty awful territories too. E.g., if a human is severely mentally disabled, is it more permissible to experiment on them? I think most of us would say no, which raises the question why it's okay to do so for other species)) shifts to whether causing them immense physical pain and torture is necessary.

      And this is where I think things get pretty murky, and I will leave the rest up to you! I wish more people were curious about moral philosophy and creating their own consistent ethical framework for the world... I think it's especially important in science and engineering.

      Of course, you could use a different model like utilitarianism, but utilitarianism still requires some level of deontological principles or you end up with a pretty extremist moral philosophy (same goes for just having Kantian deontology with no room for utilitarianism, IMO).

      edit 2: come to think of it, Jains would certainly have an issue with experimentation even on Drosophila, so I take back that "nobody" cares. IIRC they even have a mouth covering to prevent swallowing and killing any insects that might accidentally fly into their mouths, as well as a specialized stick to gently move things like spiders and other insects out of the way. I know most people would scoff at that, but I think their deep respect for all forms of life is beautiful.

      • YeGoblynQueenne 3 years ago

        Nice analysis, shame about the cop-out ("I will leave the rest up to you!"). You're like- here's how to use morality, actually using it is left as an exercise to the reader :P

        Re Jainism, adherents practice lacto-vegetarianism, but they, for example, don't eat tubers because they consider them too advanced, if I understand correctly. A deep respect for all forms of life is hard to get right in a world where every living thing eats some other living thing, or dies.

        • gptgpp 3 years ago

          It was just getting way too long lol. I rarely see comments that length on this site. I could get some of my old university readings for you if you're interested? They come from a science and ethics course, there were some really good discussions on what makes animal research ethical, varying from it never being ethical, to it being ethical only if certain precautions are taken (minimizing pain, treating animals with dignity, not using them on frivolous things like cosmetics, etc.).

          edit: there are also just SO many ways moral philosophies start to diverge at that point. Like we're talking about what is "necessary" animal experimentation. It's an important question, and one that really does boil down to a personal exercise.

          Like... Personally I have no idea how to answer it. If you remove animal experimentation, well there goes a bunch of carcinogen studies which could result in a lot of human suffering. I also would need to do a ton of research to figure out if BCI research is at a level where primate brains are necessary instead of simpler organisms.

          I also need to examine my own lifestyle, since hypocrisy severely undermines moral positions and having integrity/cogent beliefs and actions is essential if we are to engage with these subjects honestly.

          For example, personally I have sometimes consumed meat in the last year (although I generally avoid it). Supporting factory farming absolutely violates my deontological moral imperative that "it is wrong to cause unnecessary suffering to others", in a ton of different ways. So who am I to espouse views on how people should behave with regards to animal research, when my own behaviour is in such a state of disarray?

          Anyways... Getting pretty long again lol. Hope that response is helpful I know it's a bit rambling.

          • YeGoblynQueenne 3 years ago

            Thanks :) for the time being I find more than I need by following links on wikipedia articles. I don't think I have the patience for a careful reading of moral philosophy literature.

            • gptgpp 3 years ago

              No problem! Just a heads up that one thing I've noticed about Wikipedia articles on philosophers is that they're not super accessible, and sometimes go on weird tangents.

              Like moral imperatives are essential to understanding Kantian Deontology, but the Wikipedia article on it goes on a weird tangent about a "Global Economic Moral Imperative," which I have never heard of before and is absolutely not something somebody trying to wrap their head around Kantian moral philosophy should be distracted with. I'm kind of annoyed it's even on there, it's absolutely not something Kant ever talked about.

              If you want a better highly detailed resource I would recommend the Stanford Encyclopedia of Philosophy at https://plato.stanford.edu

              But if that's too much (it is VERY detailed)... I can highly recommend chatGPT. For whatever reason it genuinely excels at philosophy. I've used it to discuss different absurdist philosophers before and it did an excellent job, which surprised me because I find it to be otherwise unreliable for a lot of subjects.

              You can ask it to compare and contrast philosophies like utilitarianism, social contractarianism, deontology, etc., and tell it to simplify or summarize things; it is impressive how good it is.

              Another approach is, also surprisingly, Youtube!

              The channel PBS Crash Course Philosophy is at the level of an introductory philosophy course at a University and has good episodes on concepts like Kant's Categorical Imperatives (a favorite of mine):

              https://www.youtube.com/watch?v=8bIys6JoEDw

              Also the channel the School of Life has fun little overviews of different philosophers that I can vouch for like this one:

              https://www.youtube.com/watch?v=xxrmOHJQRSs

              And for longer format documentaries the BBC has great documentaries like this one on Nietzche that are similarly entertaining:

              https://www.youtube.com/watch?v=u9f1F5jUzaM

              So I would recommend trying these different resources and seeing what combination you like.

              • YeGoblynQueenne 3 years ago

                Thanks. I think I prefer the Stanford Encyclopedia of Philosophy (SEoP) to ChatGPT and YouTube.

                For the record, my background is in mathematical logic (first order predicate calculus and all that) but from a computer science, rather than philosophical point of view, so I find the SEoP accessible. I just don't have any background in moral philosophy (except of course that I'm Greek and so grew up with the classics, because you can't avoid that).

00F_ 3 years ago

here we see, basically, a potential feedback loop. AI tools advance brain science -- more advanced brain science can then inform progress in AI. this is why the situation is dangerous: because people don't think about these feedback loops. people see AI and they move the goalposts and rationalize by saying that "cutting edge AI is still short of AGI so it's ok." but most normal people don't think about how AI can be used to create AI or how AI could be used to revolutionize all kinds of fields that then plug back into AI. this is a very dangerous, non-linear space. it's not the first non-linear space we have traversed but it's certainly the least linear space we have ever entered into and it is the highest stakes humanity has ever or will ever deal with.

even if this is just another bullshit article, i'm just making a point related to it. people need to be worried about this. for the first time in history, lots of people are now creeped out by AI. but they aren't taking action or demanding change. we need regulation, grass-roots efforts to stop AI. even if the only way humanity could abort AI as a concept, or delay it for a significant amount of time, was to return to the iron age, and it certainly isn't the only way, it would be unambiguously worth it, in every way and from every angle.

AI requires large compute. what we are doing now was impossible just 20 years ago. if not 20 then 30. you can't manufacture that kind of compute in your garage. global regulation would take care of it no problem. at the very least it would buy us an enormous amount of time that we could use to figure something else out. people always say that some hold-out country would defy global regulations. they wouldn't defy NATO, let alone a super-global coalition. and the idea of such a group or NATO enforcing compute regulations is not far-fetched whatsoever because the emergence of AGI or even advanced non-AGI goes against the interests of literally every human being. there is no group of humans that benefits from that ultimately. the problem is simply waking people up to this plain fact.

  • TechBro8615 3 years ago

    When in human history have we ever been able to stop technological advancement?

    > we need regulation

    No, we don't. Regulation doesn't stop technological progress - it puts it in the hands of an elite few. And besides, there are 130+ regulatory jurisdictions. For example, the US government doesn't fund human cloning research, but that doesn't mean China won't fund it. Or perhaps you'd also like a one world government that can jail anyone doing wrongthink on their GPU?

    Personally, I hope we get AGI (in the most Kurzweilian sense) as soon as possible. It will lead to a Cambrian explosion of advancements across all fields of science. This is our best chance of cracking the secrets of the universe and answering fundamental questions, like whether FTL interstellar travel is truly impossible, or whether aging is really irreversible.

    Imagine an intelligence unencumbered by the "technical debt" we've accrued over centuries of building our scientific model of the world. AGI could simulate infinitely many novel paths through the "tech tree" of human history, replaying our scientific discoveries and trying different assumptions. What if we had 12 fingers and mathematics started from a base-12 system? What if we could see in infrared? We would have followed entirely different scientific paths; AGI will be able to find what we missed.

    • 00F_ 3 years ago

      from your comment i am pretty confident that you are around 19 years old. it doesn't seem like you actually read my comment. i guess i will respond to you but there is no comment in the world that could set you straight. you need more experience and personal development before you could begin to understand this topic.

      someone once said "do not see things as they are, see them as they might be." this quote is really about discoveries and the tendency of humans to only see things in terms of what already exists. and the implication is that humans have a big blind spot for seeing what's next. that's why we need a motivational quote to help us to see things as they might be rather than simply as they are. it is true that there has never been a global regulation or ban like the one we are talking about except maybe ozone layer emissions. but by this same metric, AGI can never exist because it has never existed before. it's a silly response and a complete waste of time. even if such regulations already existed for other things in the past, you still would be here saying it was impossible but for some other reason. the key here is to make up your mind last, not first.

      regulation can stop anything as long as it doesn't break the laws of physics. and, if you had read my comment, i explain why china wouldn't pursue AGI. even if china did pursue AGI, they probably wouldn't be able to crack it. none of the major breakthroughs have come out of china.

      "i hope we get AGI as soon as possible. [it will lead to many incredible things]." you have no idea what AGI will lead to. you just cherry pick all the cool stuff that would be possible but totally ignore all of the other implications. there would be an immediate and total power vacuum caused by the advancements. these advancements would be so huge that it would change the geopolitical equation beyond recognition. the concept of a country would probably be economically and geopolitically untenable. there would have to be a transition to an entirely new order where the dominant meta-organisms arent countries but some bizarre AGI conglomerate that looks like an expressionist painting in comparison to what we have now. the transition to this new world, whatever it looks like, would involve war. probably the biggest war that has ever happened. this is intrinsic and unavoidable. it cannot be disproved or denied. the fundamental economic and geopolitical equation that underlies the current equilibrium would change suddenly and violently.

      the current world order will disappear, you will probably lose everything you own and everyone you love along with your country. a global war will break out where there is a high chance that all established rules of engagement are ignored. weapons or methods that render the environment unlivable to humans will more than likely be used because the dominant organisms and meta-organisms wont need humans in any practical sense. and after the dust settles and a new equilibrium is reached, the existence of humans will end very quickly (if it hadnt already) because we will offer nothing of value anymore and if our existence presents the slightest inconvenience to the machines, they will allow us to die. and that is just the scenario where they are apathetic towards us. i have not even begun to discuss the repulsive, grotesque nature of our suffering if we ever are the subject of AGI malice. those possibilities are always brushed aside as fear mongering so i dont even bring them up. but they should play into our decision to move forward or not.

      at the very best, we will somehow manage to attach ourselves as parasites to the new machine meta-organisms and experience an existence with no agency or purpose other than to ogle at the machines. but that wont happen because the machines will immediately embark on doing things that humans could never, ever understand.

      "what if we had 12 fingers [...]." what if indeed. perhaps i was too hasty... no cost is too high in pursuing the deeper mysteries of the universe.

      • nfgrep 3 years ago

        > you have no idea what AGI will lead to

        Neither do you? None of us do, in fact I’d imagine the people trying for AGI right now would have a better guess than you or I.

        > there would be an immediate and total power vacuum caused by the advancements. these advancements would be so huge that it would change the geopolitical equation beyond recognition.

        This sounds like you’re assuming someone will flip a switch one day and the most powerful mind in history will be let loose. I’m not sure AGI will advance that fast. We might have a lot of incredibly “stupid” iterations of AGIs first, for many years before a clever one rolls around.

        > this is intrinsic and unavoidable. it cannot be disproved or denied.

        We’re all just making assumptions here; I don’t think yours get to be called “intrinsic and unavoidable”.

        I understand the concerns here, but if you’re willing to claim the end of the world, I would suggest basing your claims on something, or at least making your assumptions explicit. E.g. “assuming we achieve AGI, and it’s equipped to rapidly become more powerful/intelligent than the whole of the human population…”

        • 00F_ 3 years ago

          you can predict the behavior of complex systems axiomatically. my predictions are very general because they are axiomatic. the most important axiom in play is that natural selection will guide the development and behaviour of the creatures of the singularity. there may be points of friction that cause small deviations from this path, such as the total effort of all humans post-pandoras-box, but the ultimate shape of things is inevitable. these are assumptions in name only.

          there are many possibilities so the idea that we get an outcome that is good for us is unlikely. its just basic probability. i think people get hung up on this because there isnt an example of it to reference in history.

          of course AGI will immediately rocket upward. the only way it wouldnt is if it were created in total secrecy and held in perfect captivity forever. laughable. all that is needed is for word to get out that AGI has been created and it would be re-created the next day somewhere else. and one iteration of it would rocket upward. AGI, once created, is intrinsically unstable.

          the burden of evidence and proof is on you, not me. we know what things will be like without AGI. it is only right for the people who advocate for the creation of sentient machines to produce evidence that they will not open the doors to a living nightmare. the same thing should have been done with nuclear weapons. it really makes me scratch my head when people demand evidence from me as if i were the one encroaching. you are right, people are only making assumptions when they talk about the singularity. and the idea that we will not bitterly regret the singularity is the most tenuous assumption of all. until they show up with something more substantive i will be firmly against the creation of AGI.

      • TechBro8615 3 years ago

        > from your comment i am pretty confident that you are around 19 years old... you need more experience and personal development before you could begin to understand this topic

        And from the first paragraph of your comment, I didn't read the rest of it. Have a nice day (or don't).

      • armatav 3 years ago

        You can’t stop it - certainly not with regulation; so why so much concern for the “geopolitical equation”?

        Fear won’t change that - and we are at least 20 years away from a neuromorphic revolution.

        You’ve got enough time to come to grips with it.

        • 00F_ 3 years ago

          lets say there was a global coalition of countries that considered the creation or advancement of AGI a material threat to the safety of all humans. this would qualify countries that do advanced AI research for retaliation from NATO or other international bodies that might be created. it is clear that AI is only able to move forward in an environment of cheap compute and international academic collaboration. progress in AI would slow to a snails pace if feature size were regulated, total compute were regulated like carbon emissions and explicit research on AI was banned. it would be an environment of very expensive compute and no mainstream research or collaboration. this would at the very least buy us massive amounts of time. can you say anything substantive to show otherwise?

          lol neuromorphic revolution. thats cute.

          • armatav 3 years ago

            How exactly are you planning on regulating that in China, smartass?

            Or did you forget a factor in that geopolitical equation of yours?

            • 00F_ 3 years ago

              i have no idea why people go straight to china every time regulating AI is brought up. its something to do with a rudimentary understanding of geopolitics i think. china. the answer to your question is the same answer to the same question regarding any country: a healthy majority of world powers forming a military-backed coalition would definitely stop outliers from carrying out the type of research that is at the leading edge of the current AI explosion. the chinese government is already worried about AGI so its ironic that everyone imagines them to be the outlier because in fact they would probably be one of the first and most enthusiastic members of such a coalition. any country that wanted to resist, and that is highly unlikely given the fact that the need for regulation will become blindingly clear with every surge forward, would much rather cooperate than pursue far-fetched geopolitical strategies that involve AGI. most countries dont even output enough research to qualify for sanctions.

              nobody has ever offered a lucid and axiomatic argument that shows regulation cannot work. there are two options, TRY to regulate or face a living nightmare where neither the best nor the worst outcome is even close to acceptable. it is so blindingly obvious that it boggles my mind: the only reasonable, rational response is to try to regulate, slow or stop AGI.

              edit: i have a guilty confession. i was looking through your comments. i saw that you said that humans must eat meat to be healthy. i was surprised to see that and i want to tell you that i completely agree with you and i have often tried to explain to people why this is true and its like talking to a brick wall. there arent very many people who seem to get this even though its blindingly obvious. just wanted to give you a little encouragement to keep the fight going on the meat thing.

              • armatav 3 years ago

                Haha, yeah that guy had a thing against “carnists”. Deranged.

                I don’t think as the tech curve goes up there will be a long enough time period, even with a globally enforced military pact, to stop the rise of the machine. My reasoning is that there are more than enough clandestine organizations and families with a vested interest in pursuing the “power” it brings, a lot of them with a whole lot of control over these countries in our pact.

                To each their own. Keep eating meat.

politician 3 years ago

Show HN: Human Diffusion

Hi everybody! We’re Joe and Ahmed and super thrilled to be launching Human Diffusion today! We’ve built an exciting new image generation system that supports economies in developing nations.

Our product leverages the latent creativity of humanity by directly fitting employees with fMRI rigs and presenting them with text inquiries through our API (JavaScript SDK available, Python soon!). Unlike competing alternatives, we preserve human jobs in an era of AI supremacy.

I’d like to address rumors that our facilities amount to slaving brains to machines. This is a gross misunderstanding of the benefits we offer to our staff - they are family. Our 18 hour shifts are finely calibrated based on feedback collected through our API, and any suggestion of exploitation is flatly untrue.

Send us an email (satire@humandiffusion.com) to get early access.

Madmallard 3 years ago

Couldn't we train an AI on fMRI or EEG data - like billions of samples of people thinking and describing what they're thinking about - and have it gradually reach some level of accuracy?
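
Very roughly, that kind of pairing might look like the sketch below - contrastive alignment between recordings and captions, assuming PyTorch and a frozen text encoder for the caption embeddings. Every class name, shape, and the toy data here are hypothetical, not anything from the paper:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BrainTextAligner(nn.Module):
        def __init__(self, brain_dim, text_dim, embed_dim=256):
            super().__init__()
            # Project brain recordings and caption embeddings into a shared space.
            self.brain_proj = nn.Sequential(
                nn.Linear(brain_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim))
            self.text_proj = nn.Linear(text_dim, embed_dim)
            self.logit_scale = nn.Parameter(torch.tensor(2.0))

        def forward(self, brain, text_emb):
            b = F.normalize(self.brain_proj(brain), dim=-1)
            t = F.normalize(self.text_proj(text_emb), dim=-1)
            return self.logit_scale.exp() * b @ t.T  # pairwise similarities

    # Toy batch: 8 recordings of 4096 "voxels" and 8 caption embeddings of size 768
    # (e.g. from a frozen text encoder); real data would be fMRI/EEG plus descriptions.
    brain = torch.randn(8, 4096)
    captions = torch.randn(8, 768)

    model = BrainTextAligner(brain_dim=4096, text_dim=768)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    logits = model(brain, captions)
    labels = torch.arange(8)
    # Symmetric InfoNCE loss: each recording should best match its own caption.
    loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
    loss.backward()
    opt.step()

The same loss works whether the input is EEG channels or fMRI voxels; the hard part would be collecting the billions of paired samples, not the model.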

samuelzxu 3 years ago

There's also this paper with a very similar methodology, called Mind-Vis, which was also accepted to CVPR 2023. https://mind-vis.github.io/

rvz 3 years ago

Another small step into creating a worse dystopia than the one we are already living in.

Please continue. /s Governments, three-letter agencies and the like would be absolutely excited to see this. The future that no one has asked for.

exclipy 3 years ago

In 2004, I wrote a short story about exactly this in high school. Using neural networks to "mind read" visual images from an fMRI scan of a brain. I thought it was farfetched, but look where we are now!

dheera 3 years ago

I wonder how well this would work with wearable brainwave detectors rather than MRI, seeing as MRI isn't really something I could have at home.

  • babblingfish 3 years ago

    By brainwave detector I am going to assume you mean an EEG. An EEG measures electrical activity at the scalp, dominated by the surface of the brain. An fMRI measures blood-oxygenation changes across the entire brain at millimeter-scale voxel resolution (not individual neurons, and with a lag of a few seconds). It's an apples-to-oranges comparison, given that the tools measure different things at vastly different resolutions.

  • jejeyyy77 3 years ago

    any idea what the best consumer brainwave detectors are on the market rn?

bitL 3 years ago

Can't wait for this to become one of the individual performance metrics - recording all brain states all the time (video/audio/etc.) as part of regular performance reviews...

ACV001 3 years ago

This is a big thing. Although this particular paper is not a big thing by itself, the many related studies it quotes set a trend.

fretime 3 years ago

I'm looking forward to it. When will the code be released? Thanks.

chrstphrknwtn 3 years ago

I don't see anything "high-res" about the reconstructed images.

_448 3 years ago

So this is like mind reading?

lazy_moderator1 3 years ago

not the first time something like this ended up on HN

https://news.ycombinator.com/item?id=33632337

convolvatron 3 years ago

very curious about the little 'semantic model' at the bottom of the brain. does anyone know how that gets constructed and how it gets fed into the results?
