Neurons in a dish learn to play Pong (nature.com)
A question for someone who understands the neurology and biology of this much more than I do:
Once I have a group of neurons like this trained to do something, can I actually count on them to continue performing that task until they die? Or is it possible they spontaneously reorganize or "learn" a previously unseen behavioral pattern?
They're biological. A lot can happen :) Remember this includes weirdness like viruses, tumors, aging... as well as specifically neural stuff like habituation (diminished sensitivity to repeated stimulation), sensitisation (increased sensitivity to repeated stimulation), spontaneous misfiring, reorganisation (basically creating/destroying or upweighting/downweighting synapses) etc etc etc.
Of course, then you'll need to debug problems by dripping antidepressants (or psychedelics, or stimulants, or depressants, or various hormones, or....) into the Petri dish.
Biology is wonderfully crazy.
“Did you send your computer in for repair?”
“No, therapy.”
Robopsychology is an emerging field which will become increasingly important in coming decades.
TIL I’m somewhat of a Petri dish myself.
Skip the SSRIs and let them loose on Red Dead 2. In a couple weeks they'll be talking to a horse in ways we never imagined.
So the short answer is no
Hm. That's a very good question. I haven't kept up with neurobiology research, but I imagine that as long as the same neurons keep their connections to the same other neurons, it would be possible for them to keep performing the task. And neurons will keep those connections if the stimulus was strong enough and showed up repeatedly.
But when the task is not required for a very long time, the circuit that turns this task on will weaken, at least in the brain. The neurons will also be forming other connections, but that doesn't mean they will necessarily "forget" the older ones.
Here is a journal article that might be relevant for you:
https://www.cell.com/cell/fulltext/S0092-8674(14)01362-2
Abstract: “Neuronal plasticity in the brain is greatly enhanced during critical periods early in life and was long thought to be rather limited thereafter. Studies in primary sensory areas of the neocortex have revealed a substantial degree of plasticity in the mature brain, too. Often, plasticity in the adult neocortex lies dormant but can be reactivated by modifications of sensory input or sensory-motor interactions, which alter the level and pattern of activity in cortical circuits. Such interventions, potentially in combination with drugs targeting molecular brakes on plasticity present in the adult brain, might help recovery of function in the injured or diseased brain.”
A related question: can I indefinitely verify that they're performing their task correctly? When operating these neurons for actual tasks, real-time challenge-response-style testing is probably a good idea. But if they can learn, they may eventually be able to trick the tests. How would one design automated tests to evaluate the state of learning-capable systems?
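For concreteness, the standard software answer would be a randomized regression suite replayed against a frozen baseline. A minimal sketch of what that could look like, with a toy `dish_response` standing in for whatever electrode interface the real system exposes (all names and numbers below are hypothetical):

```python
import random

# Toy stand-in for the dish: maps a stimulus to a response, with noise.
# A real harness would talk to the electrode array instead; this function
# and its drift parameter are invented purely for illustration.
def dish_response(stimulus, drift=0.0):
    return 2.0 * stimulus + drift + random.gauss(0, 0.02)

def drift_check(probes, respond, tolerance=0.2, sample_size=4):
    """Replay a random subset of known (stimulus, expected) probes and
    return the fraction whose responses drifted out of tolerance.
    Randomizing the subset makes the test harder to 'game'."""
    failures = sum(
        abs(respond(stimulus) - expected) > tolerance
        for stimulus, expected in random.sample(probes, sample_size)
    )
    return failures / sample_size

# Calibrate once against a healthy culture, then re-test forever.
probes = [(s, dish_response(s)) for s in (0.1, 0.3, 0.5, 0.7, 0.9, 1.1)]
print(drift_check(probes, dish_response))                        # ~0.0: healthy
print(drift_check(probes, lambda s: dish_response(s, drift=1)))  # 1.0: reorganized
```

Even then, a sufficiently capable learner could ace the probes while drifting everywhere else, which is exactly the problem.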
>How would one design automated tests to evaluate the state of learning-capable systems?
A question many call center managers ask themselves.
The nightmare of AI is that it's a black box. The nightmare of people is that they're people. (Or, per Sartre, "hell is other people"... but that's such a freighted human perspective.)
Gee, if only we could grow meat to eat in a lab and grow brains on a substrate to answer phone calls and generate images, we could do away with the complexity of other humans we might have to interact with. Except... you have even less understanding of what motivates that clump of cells than you do of what motivates your office secretary.
Don't get me wrong, it's a super cool idea. I'm just not sure exactly when bioethics went completely out the window.
Welcome to the field of QA automation
What's more, you can apply evolution to your Petri dish.
Disclaimer: Also not a neurologist.
I believe they certainly will reorganize in case of a (local nutrient) scarcity or damage event. That might result in "unseen" patterns as well, and by "unseen" I mean kinda random.
Glial cells are fascinating components of the brain.
Artificial NeuroGlial Networks could be really interesting.
I guess they can also get tired, just like humans. (Not a biologist)
I would urge people to read the paper instead. The 'learning' is a bit iffy, and this was meant to test Friston's theory of the brain rather than to plug neurons into Pong. Still, great job on the neurotechnology involved, and a step in the direction we should be going: controlling large numbers of neurons.
Ars Technica has a better article, although they don't describe the dense electrode array correctly: https://arstechnica.com/science/2022/10/a-dish-of-neurons-ma...
For those who, like me, are wondering where those neurons come from, an answer can be found in the Ars Technica article:
“[...] the researchers tested two types of neurons: some dissected from mouse embryos, and others produced by inducing human stem cells to form neurons. ”
Finally, a sliver of the future I'm psyched about. Induce stem cells to form neurons? Next step: induce new neurons to form new connections in existing brains!?
That would be cool, though current research on adult neurogenesis says we keep getting new neurons throughout life anyway
Interesting!
Basically all the men on my dad's side die of Alzheimer's, so anything neuroprotective, regenerative, or whatever the more accurate verb is for stopping or reversing function loss, is super fascinating to me. To others as well.
Plus, who wouldn't want to get just a bit smarter, or help get that youthful memory back? I can't be the only one who is getting foggier ;)
Yes but given that mouse neurons were used as a base, you now have a strong craving for cheese.
Fun fact: mice do not particularly like cheese[0].
[0] http://news.bbc.co.uk/2/hi/uk_news/england/manchester/531921...
:)
It would be so cool if we can get a grasp on memory, epigenetics and all the other science I don't understand.
Because that doesn't seem so far-fetched! If mice do in fact have a biological 'need for cheese' - the sequel.
Mice prefer grain-based stuff like tortillas, bread, or peanut butter.
...and cables.
Genuine question: why "should" we be going in this direction?
Is curiosity a good answer for you? For me it is.
To fight neurodegenerative diseases
Except the lab doing this make no mention of treating disease. They want to "shortcut machine learning" by using living tissue to perform tasks: https://corticallabs.com/
Well they sound like mad scientists. Hopefully The research is still helpful for medical advancements
because it's bottom-up, as opposed to mouse studies that impose our own ideas and record from a tiny fraction of brain cells
post-apocalyptic horrible scenarios aside, there are plenty of therapeutic reasons why it would be nice to have neuron-level control of brain tissue.
Because we can, and if we don't someone less ethical will.
"if we don't someone less ethical will."
But they still will; why wouldn't they, if supposedly ethical researchers do it as well? I've heard this sort of argument before, and I understand the reasoning behind it.
We do what we must, because we can.
This is about 40x more cells than they used to fly a fighter jet in a simulation back in 2004: https://www.nature.com/articles/nrn1572
I suppose the claim to fame in this similar study is the use of the term "organoid", and there's some legitimacy to that. Form and function are intimately tied in the brain, and just a bunch of neurons on a Petri dish isn't quite an organ.
Gosh, that study. It was the first time my faith in academia was severely shaken. As a 19-year-old, I spent around 6 hours reverse-engineering what was going on, and then fell into a spiral of “Is this what academia is all about? Really?” I was so fascinated by the concept, and then it was such a letdown…
It took some time to appreciate that there are some worthwhile ideas in that paper. But it was my first experience with “Academic lies about accomplishment to secure further funding.”
It’s hard to say whether “lie” or “severely exaggerate” is appropriate.
A method existed to measure a signal from a neuron. A second method existed to modify that signal. Whenever the plane crashed, the signal was modified slightly, until the plane flew level.
It didn’t learn to fly. The researcher modified a neuron (steady signal) until it gave the appropriate signal (e.g. zero) to fly level.
Negative signal, plane banks left. Positive signal, plane banks right. If plane crashes, modify neuron until signal is neither positive nor negative. That was the extent of the study.
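If it helps, here is the entire control scheme reduced to a toy loop (my reconstruction of what the paper describes, not its actual code; the numbers are made up):

```python
# Toy reconstruction of the 2004 setup as described above -- not the
# paper's code. The culture's output is a single scalar; "training"
# is stimulation that nudges that scalar toward zero, where zero
# happens to mean "don't move the controls" (i.e., level flight).
signal = 0.7  # arbitrary initial offset read from the culture

def plane_crashes(signal, tolerance=1e-3):
    # negative: banks left, positive: banks right, ~zero: level
    return abs(signal) > tolerance

while plane_crashes(signal):
    signal -= 0.2 * signal  # high-frequency stimulation after each crash

print("level flight, signal =", signal)
```

Note that nothing about the plane's actual state feeds back into the loop; the target value is fixed at zero ahead of time, which is why I say it amounts to setting a signal to a known constant.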
In that context, do you feel like the neurons learned to fly? Maybe. It’s certainly similar in spirit to reinforcement learning in modern times. But I wouldn’t say that setting a signal to zero is a nice definition of “flew a plane”.
In other words, there was no active feedback; if you pointed the plane in a slightly different direction, it would immediately crash. It wasn’t doing anything more than setting the signal slowly over time to an answer that was known ahead of time. (Keep the plane level by not moving the controls.)
Suppose the neurons learned to draw a straight line. That was essentially what was being demonstrated here. If you substitute “plane crash” with “line becomes a curve”, it becomes much less exciting, to say the least.
“Isn’t that just learning to set a signal to a constant value?” “Yep” “Will it always become the same constant?” “Yep” “Can’t we already do that?” “Yep”
I was so disillusioned that it took many years to stop believing that academia itself was at fault for misinforming the public so badly. After all, it’s almost two decades later, and people still believe “rat brain flies plane” happened in 2004.
If I could go back in time, I’d tell myself not to worry about it; focus on the academics that are working quietly on the frontier, not the ones trying to raise funding for their lab.
At least the neurons in today’s study actually learned to play something. But if the past is any indication, I’d err on the side of skepticism.
Do you know the actual study which did this?
Because the Nature citation on where the work was done is literally "The Discovery Channel".
It was big news at the time. Headlines everywhere. Rat brain flies plane. You’d think that within a few years, maybe rat brains would be the primary way we’d control planes.
I did manage to track down the original study, which at the time wasn't too easy; this was before Sci-Hub. I wish I'd saved it for posterity. My motivation went from "this is exactly what I want to do with my life" to lying on the couch wondering what the heck could possibly be happening, when the whole world believed rat brains were flying planes, versus what was actually demonstrated in the paper.
If you do find it, please post it. It’d be a nice stroll through memory lane, and an interesting retrospective on how to get funding for one’s own research lab. (A handy skill to have, if you don’t mind… well, exaggerating, to say the least.)
I think it might be this one:
Adaptive flight control with living neuronal networks on microelectrode arrays
The brain is perhaps one of the most robust and fault tolerant computational devices in existence and yet little is known about its mechanisms. Microelectrode arrays have recently been developed in which the computational properties of networks of living neurons can be studied in detail. In this paper we report work investigating the ability of living neurons to act as a set of neuronal weights which were used to control the flight of a simulated aircraft. These weights were manipulated via high frequency stimulation inputs to produce a system in which a living neuronal network would "learn" to control an aircraft for straight and level flight.
https://ieeexplore.ieee.org/document/1556108
https://sci-hub.ru/https://doi.org/10.1109/IJCNN.2005.155610...
Thank you so much! That's the exact one. I'll upload it to imgur, since Sci-Hub URLs tend to die over time: https://imgur.com/a/vdQP17I
The details are presented at the end of page 2. You can see on page 3 that the whole thing is a glorified “I can make neurons go to zero!” machine.
It kills me because there is real value in this work — the ability to modify neurons is a useful thing to be able to do. Showing that you can set them to specific values is worthwhile.
But that wasn't what it was presented as. The focus was on the idea that the neurons somehow learned something. Maybe they did, and maybe time will prove me a fool by showing that this is how neurons learn in general. But it was so disheartening to spend hours carefully going over every detail, full of excitement at the possibility of machines that can learn… only to realize I could do the same thing by setting a value to zero in Python, and that the complicated language seemed designed to conceal this.
I remember seeing a video on the discovery channel where they were interviewing him. He gave a dazzling account of the implications of this work.
Maybe he was predicting the upcoming ML boom. But now that I’ve worked in ML for a few years, I can safely say that this work didn’t help us get here. Maybe it helped biologists figure out how to control the signals that neurons deliver.
(I still think that it’s pretty darn cool that you can manipulate neurons at all, so hats off to whoever figured out how to do that. I just wish it was presented as what it was: the ability to set a value to zero over time via neurons.)
I really want to make a joke like “You know what else would set neurons to zero? Set them on fire and wait,” but the paper showed they could be set to arbitrary values. Which of course was the interesting and valuable part in the first place, not the plane stuff. But planes make good headlines.
Thomas DeMarse is mentioned in some of the 2004 news articles. His Google Scholar profile has the following short paper published at a conference in 2005: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Thom...
>(Discovery Channel, USA, 22 October 2004)
That's a bit like saying BattleBots solved robot locomotion.
Funny how everybody here panics about 'the hell' of experimenting with human brains that could possibly grow consciousness. What about the actual hell we put animals through so that we can eat them? There are also studies suggesting that, for example, crows and other animals have some form of consciousness:
https://www.independent.co.uk/news/science/crows-consciousne...
https://www.smithsonianmag.com/smart-news/do-crows-possess-f...
You don't need to imagine hell, it's already here.
> The work is a proof of principle that neurons in a dish can learn and exhibit basic signs of intelligence
Absolutely not. This shows that you can teach neurons to exhibit a reflex behavior adapted to the given stimuli, which shouldn't come as a surprise. At best this is a jellyfish-like level of intelligence.
Have you seen magic tricks revealed? It's such a letdown. After that you can't unsee how it's done, and it becomes a regular guy concealing some information. Before that, there was a mystery about how the heck it was done. It's the same thing going on here, both with AI and with understanding the wetware mind.
Except with AI and wetware there's still a mystery remaining. Not all cases have a rigorous description of what's going on in the middle level of abstraction.
Low level: proteins/matrices.
Middle level: ???
High level: it thinks!
Every new AI gets demoted pretty quickly. Playing Pong would once have been called AI.
That's because once we understand it and achieve it, it's no longer considered intelligence. The same will happen when we have humanoid robots that are indistinguishable from real humans. People will say "obviously not intelligence or anything related to it, just doing x, y, and z".
Clearly, the sheep they dream of are electric.
And then one looks into the mirror :)
Transformers have shown that language development can also be explained by what you derisively call reflex behavior, with no need for specialized Chomskyan recursion circuitry, as the same mechanism with slight modification (decision transformers) also plays Pong.
There's a huge difference between fitting a probabilistic model to a data distribution then sampling from it (what GPT-3 is) and agents that invent language and use it to communicate.
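To make "fit a distribution, then sample" concrete, here it is at its absolute smallest, as a count-based bigram model (a toy illustration of the recipe only; a transformer parameterizes the conditional distribution with a learned network and a long context, not raw counts):

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "fit a probabilistic model to data, then sample":
# a count-based bigram model over words.
corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often nxt follows prev

def sample_next(word):
    options = counts[word]
    if not options:  # dead end: the corpus's final word
        return None
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))  # plausible-looking text, nothing it hasn't seen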
Not much. A transformer trained on multiple senses can learn the sound that an animal makes and associate it with seeing that animal. It can also learn how another agent reacts after it says a word.
The huge difference is actually between animal reflexes and learned behavior. Reflex is built-in. I didn't learn to kick my leg in response to a tap on the patellar tendon.
I agree that a Transformer is an example of a "reflexive" behavior because it learns to react in a context (via gradient descent rather than evolution as the learning algorithm). It's a conditional categorical distribution on steroids.
I also agree it's not much different than what's going on in this petri dish with pong.
But I don't think that's a profound statement.
What I'm saying is that calling what a Transformer does "language development" isn't accurate. A Transformer can't "develop" language in that sense, it can only learn "reflexive" behavior from the data distribution it's trained on (it could never have produced that data distribution itself without the data existing in the first place).
> I agree that a Transformer is an example of a "reflexive"
I said that it is not reflexive. It is learned. Just because after you learn something, it becomes easy does not mean that it is a reflex. I explained why language development can be done with little more than a transformer learning from how others behave when you make an utterance and from how you behave when you hear something, like a decision transformer learning what happens after it takes certain actions in Pong.
Agreed... but! This opens up some interesting thoughts about various things.
Sure, this Petri dish is not intelligent and conscious. But it's just a load of neurons, right? Just like the "Petri dish" in our heads. Are we intelligent and conscious in that case? Or are we just a scaled-up Petri dish that reacts to more varied stimuli?
This leads to a circular problem: if we resist the notion that we're just scaled-up Petri-dish brains (perhaps because we "feel conscious"), how can we be sure the Petri-dish mini-brains are so different from us? In which case, are they conscious after all, to some degree?
They are using human neurons for this? The one substrate we can be sure of is capable of producing consciousness?
Yeah this makes me really uncomfortable. We have no idea really what causes consciousness to arise, but "enough actual animal/human neurons to perform complex tasks" doesn't seem like an unlikely way to do it to me.
"Unimaginable suffering in a petri dish" is not something I want even a tiny chance of creating.
To evaluate these chances, one has to understand what "suffering" is. It may (or may not) be a purely evolutionary thing which cannot be easily reproduced in a dish. Perhaps the whole thing is a plague that only emerges in/for natural-life conditions. While the configuration space is likely(?) almost infinite, there is only one natural selector.
Well... no. It's only necessary to know whether suffering exists (it does). After that, it's sufficient to consider whether there's a greater-than-zero chance of creating it (there seems to be).
That's enough to say "let's not do this until we have better data".
If it turns out the second step is unknowable, that's enough to say "let's never do this".
These are subjective decisions, and some may suggest that there are gains that outweigh the potential suffering. I am currently unconvinced.
But it's always unknowable. The only consciousness/suffering one truly knows is one's own. We just use heuristics like "this human/animal behaves like I do when suffering". Where we draw the line is completely arbitrary.
My second step still stands :)
The unique data-point here is that we are creating these "mini brains". I'm not invoking a knee-jerk "sCiEnTiSTs pLaY g0d OMG!" reaction, but I will invoke Goldblum's Objection: "your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should".
Like several others in this thread, I'm bemused that this passed bio-ethics. Drawing the line isn't entirely arbitrary, ethics panels do it every day. We already have the tools, but it seems that in this case we don't know whether to apply them, or which to apply.
I agree and am unconvinced too. But currently we also agree to create suffering at the industrial scale in mice. Pretty strange choice of principles.
Indeed. Although some of us try to opt out of that stuff too, as much as possible.
The ethical questions raised by anomalous situations like this indicate that our historical decisions may need revising.
> After that it's sufficient to consider whether there's a greater than zero chance of creating it (there seems to be).
That is a Kantian, deontological approach, an ethical framework not shared by all. The utilitarian would ask how much suffering would be caused as compared to how much good it would produce.
Erm no. I'm not advocating morality driven by "duty". I'm pointing out the pragmatic issue that we don't know where we're heading, so perhaps we should pause and have a good hard think about the possibilities :)
Utilitarianism also comes with its own issues, of course. Exploitation of minorities and the disempowered, for example (including manufactured brains-in-vats, it seems).
Pong playing? Infinite good!
As a researcher in responsible AI, I wonder what they told their ethics committee, if anything.
Given that they took skin cells, cultured them to pluripotent stem cells, then convinced the stem cells to grow neurons, I don't think they grew an interoceptive cortex, basal ganglia, or other brain areas necessary for emotional experience as we know it. "Exhibit sentience" is... well, let's call it a wordplay.
Interesting question!
This seems like a research area so horrific it could exist only in a comic book. This type of thing should not be funded or researched. I'm surprised it needs to be said but we shouldn't grow human brains to experiment on them. Maybe if we could understand and quantify consciousness and be sure we weren't subjecting something to hell - but certainly not while we can't do that.
What makes you think that experiments on living rodents aren't hell already? What is the crucial ethical difference between these species, apart from two facts: first, that a full human has legal status; second, that most people will relate to an empty human brain more than to a non-empty non-human one (regardless of packaging)?
I understand the legal/relatable parts, but is there more to it?
One scientist studies tumors by causing tumors in rats. Another scientist studies tumors by causing them in humans. If you said to the second scientist, "Hey, that's wrong, stop!" and he said, "What's the crucial difference between rats and humans?", would you be convinced by that line of argument?
This hypothetical situation implies something I wouldn’t fail to ask first. Are these humans persons or are they “empty”? I don’t see a difference between a {person,experience}-less human and a rat (in this setting, legal and parental implications aside), but maybe you do. Both are able to experience pain and negative emotions. One of them being of a [non-]familiar kind changes nothing to me.
In other words, put yourself in the shoes of that rat or that empty brain and tell:
- which one would you choose to be (if given no other option),
- why,
- and what you expect to sense differently.
Well, is that not basically the vegan argument against suffering, in any form, for any type of animal or sentient being? The vegan would probably posit that they are equal.
Obviously yes, rats don’t deserve tumors.
Just as possible that allowing this consciousness to do exactly what it was designed to do and achieve such a sublime state of flow that it would be a crime to NOT let it experience Pong.
"Hundreds of thousands of human neurons growing in a dish"
It seems doubtful that hundreds of thousands of human neurons would lead to consciousness.
"Although the company calls its system DishBrain, the neurons are a far cry from an actual brain, Kagan says, and show no signs of consciousness."
...then again, would they be able to tell? There's not a lot of I/O in this situation.
I agree that low numbers of neurons are unlikely to lead to human-level consciousness. I have zero faith that researchers in this area won't continue to push the limits - perhaps even if they don't publish on doing so.
I see this research as a red flag "hey, we are approaching something very bad" and while the flag itself may not be a problem we should heed the warning and change direction.
All this said, my understanding is that this research is in part inspired by, or associated with, the phenomenon of people with hydrocephalus who have greatly diminished brain volumes but are still conscious. This says to me we can't reliably predict how many neurons are necessary before creating consciousness.
Indeed. Although the lab couldn't check for signs of consciousness. If they'd been able to do so, they'd have cracked the "hard problem" of consciousness, which would be far bigger news. So this is blather at best, and ethics-washing at worst.
I think "being able to describe exhaustively the conditions under which something is conscious" is sometimes called the "pretty hard problem of consciousness", as opposed to the hard problem? (Also considered unsolved though)
Fair. I may have lost track of which problems take which titles :)
You can monitor brain activity. Hard to be conscious if there is no activity in the cells.
> The one substrate we can be sure of is capable of producing consciousness?
How can you be sure that anyone other than you is conscious? Why shouldn't animals be conscious?
> How can you be sure that anyone other than you is conscious?
That doesn't seem like a more or less difficult question than asking how I can be sure that human neurons are the substrate responsible for producing consciousness. If it turns out that the other humans I observe only seem like they're conscious, then it could also turn out that my neurons only seem to be the substrate producing my consciousness. Of course, this is just terrible epistemology.
Yeah, and I really don't see how that substrate should be much different from that in other animals such as, say, chimpanzees. It seems silly to criticize this research just based on the fact that the neurons were cultured from humans in my opinion.
Humans just can't accept human suffering. Most of our law is about protecting humans.
Yes imagine a life where you play pong for orgasms.
I hope you’re joking because that actually sounds like a terribly impoverished life for a human consciousness.
It seems needlessly provocative to use human neurons to do this. Presumably they could have had the same effects with neurons from another species. It’s like they want to provoke a backlash from religious people against scientific experimentation.
It's not needlessly provocative; human neurons have been shown in almost every study to be superior to all other animal neurons.
Let the backlash happen and as a society let's put a stop to letting religious idiocy get in the way of science.
There is an entire field of study for ethical science and I stand behind that 100%.
Unethical science can get lost!
I'm not a progress-at-any-cost type of person.
But is this actually unethical or just something a religious idiot doesn't like?
Only religious idiots would prefer a "normal" life over having human-brained gorilla chimeras do all the work and just consuming psychotropics all day, right?
Oh captain my captain, lead me to this promised land!
Very few people are not brainwashed in one direction or another, there is no need to call them idiots.
FYI, pretty sure the parent of the comment you’re responding to is the one that’s actually calling them idiots.
Enough of talking about them, then :)
> human neurons have shown in almost every study to be superior
Until recently I thought a neuron is a neuron is a neuron. How are they different between species?
My understanding is that they don't just differ between species, but they differ substantially between different types within the same individual.
My understanding is extremely limited.
That being said, my impression is that some of the axes they can vary on include things like: how they respond to certain other chemicals ("neurotransmitters"? not sure if that's the right word) in their environment; whether they produce/emit those chemicals themselves; the conditions under which they fire (e.g. how much they "leak" activation over time, counteracting the signals they get sent from others); and, I think(?), whether they fire at some default rate that can be increased or decreased by incoming signals, versus not firing at all unless they receive enough other signals.
I'm unsure of basically all the things I just listed, but my impression is that there are lots of ways they can differ, and they definitely aren't all the same.
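To illustrate how much those knobs matter, here's a cartoon leaky integrate-and-fire model with two invented parameter sets receiving the identical input (the numbers are made up for illustration, not measured biology):

```python
# Cartoon leaky integrate-and-fire neuron, to illustrate the parameters
# mentioned above (leak, firing threshold, baseline drive). All values
# are invented; real neurons are far messier.
def count_spikes(leak, threshold, baseline, inputs, dt=1.0):
    v, spikes = 0.0, 0
    for i in inputs:
        v += dt * (baseline + i - leak * v)  # integrate input, leak charge
        if v >= threshold:                   # fire, then reset
            spikes += 1
            v = 0.0
    return spikes

inputs = [0.3] * 100  # the identical input train for both "cell types"
print(count_spikes(leak=0.5, threshold=1.0, baseline=0.0, inputs=inputs))  # 0: stays silent
print(count_spikes(leak=0.1, threshold=0.5, baseline=0.2, inputs=inputs))  # fires constantly
```

Same input, completely different behavior, just from changing two or three parameters.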
I would like a source for the first claim. When I read the preprint for this paper half a year ago, it was actually one of their claims that, to the best of their knowledge, their Pong experiment was the first to show human neurons being 'better'.
Are there actually any religious people against this? In general, I haven't heard of groups that are against culturing human cells.
Well, seydor's comment that they actually used induced pluripotent cells derived from skin cells, rather than stem cells from fetuses, makes me feel better about it.
I'm of course not against, in general, "culturing human cells", but I have moral uncertainty/concern about those derived from fetuses, as well as any large quantities (in one group) of human neuron cells (or, also, but to a lesser degree, large quantities in one group of other kinds of nerve cells).
That's not to say that I would be confident that what was done would be wrong if they were using cells derived from fetuses, but, I would think it more likely that there would be a moral problem with it. (And this feeling/belief/concern is in part due to religious belief.)
(I can imagine someone saying, "But seeing as induced pluripotent stem cells seem to behave in effectively the same ways as 'actual stem cells', how do you know that whatever moral problems you think would be present if using 'actual stem cells' don't also apply to the induced pluripotent cells?" And to that imaginary interlocutor, I must admit that they have somewhat of a point, in that I suppose I can't entirely rule that out. But my intuition, which unfortunately is most of what I have to go on because we have not been granted a book detailing precisely every last detail of the metaphysics of personhood, suggests to me that it seems substantially less likely to be a problem.
Even if it might theoretically be possible to use induced pluripotent cells to form a fetus which could develop into a full child which would be a person,
(Is this thought to be theoretically possible? I mean, setting aside something as general-purpose as arbitrarily rearranging atoms, or using cells from an outside source: just by moving induced pluripotent cells around while treating them with various chemicals and nutrients, is it thought that in theory this could produce a viable fetus?)
it still feels reasonably unlikely that, without actually starting to do that, and only converting what type of cells some skin cells are, this would constitute creating something with any moral patienthood. One such reason being that, I would think, there wouldn't be a clear distinction of how many such entities had been created. Would it be one moral patient per cell? That seems implausible, especially assuming that multiple such cells would need to be used together to create a viable fetus. One might point out that identical twins can arise from one zygote, by the group of cells that the zygote becomes splitting into two parts, and so if moral patienthood is incompatible with ambiguity-of-number, then this should also apply to the blastocyst or whatever. And, perhaps? Though in that case there is still the clear differentiation of "these came from this zygote", so perhaps there could be some reason there.
Again, my position is one of uncertainty about these questions, and associated concern. My position is not that I know for certain that it is less morally problematic to use induced pluripotent stem cells than it is to use stem cells derived e.g. directly from a blastocyst, but that it seems substantially less likely to me that using induced pluripotent cells is a problem than it is that using stem cells from a blastocyst is a problem, though neither is certain.)
Technically they used human skin cells, which they induced to be pluripotent, which then became neurons.
They compared human cells to mouse cells, showing improved performance for the human cells.
So we can create microbrains of human neurons. Cool. But what about a gigabrain? Something smarter than all of us?
For the record I do think such a creation should have personhood. And have the right to learn about and interact with the real world.
I wonder if in the future we'll have some sort of FPGA formed by a neural dish. Hook up an HDMI port to a cube of neurons and it OCRs text or tracks objects. I suspect biological variations in cells would make the results inconsistent, and far inferior to what we can do with semiconductors presently, but it's cool to think about.
Here's an article that's almost 20 years old on a similar topic:
https://web.archive.org/web/20041023144731/https://www.wired...
https://web.archive.org/web/20041106064802/http://www.napa.u...
EDIT: it seems the researcher working on this project passed away recently :(
Maybe they could run for senate in my home state
Hmm. Assuming we could feed in the sort of indicators and summary statistics that already form the way nation-states see reality, we could attach a literal brain to the organs of the state!
Depends on how quickly the neurons can learn to be incompetent and waste tax payer money, only then will they have a real chance ;)
Avenue 5 (underrated show) has President 2.0! An AI who is far more competent. Or, more precisely, more able to make tough decisions without human backlash.
SF author (and marine biologist) Peter Watts described something very much like this, only more advanced, in his book Maelstrom (2001):
"Achilles Desjardins had always found smart gels a bit creepy. People thought of them as brains in boxes, but they weren't. They didn't have the parts. Forget about the neocortex or the cerebellum—these things had nothing. No hypothalamus, no pineal gland, no sheathing of mammal over reptile over fish. No instincts. No desires. Just a porridge of cultured neurons, really: four-digit IQs that didn't give a rat's ass whether they even lived or died. Somehow they learned through operant conditioning, although they lacked the capacity either to enjoy reward or suffer punishment. Their pathways formed and dissolved with all the colorless indifference of water shaping a river delta."
Much of his work, including Maelstrom, is freely available on his own website: https://rifters.com/real/shorts.htm
I thought you were going to say Starfish by Watts, which also features smart gels.
Maelstrom is the second book in the trilogy (or tetralogy; the third book was split in two in some printings).
I didn't have a copy of Starfish handy, and I wasn't sure if gels had been mentioned there or not.
Another great novel about emergent intelligence was Blood Music by Greg Bear. A microorganism develops swarm intelligence.
I tried to reply to someone but he got censored for pointing out that if you care about two neurons you should care about the animals that go through hell in the meat/dairy industry. Here's my reply:
One can't hope to truly know whether another individual (be it a human or any other animal) is conscious (the hard problem of consciousness). But if it has eyes like us, a mouth like us, plays like us, cries/screams when hurt like us, and even seems to dream like us (sleeping dogs sometimes move like they're running in their dreams, and when they wake up they look confused and still perturbed by what they were seeing in their minds), it makes perfect sense to assume that it is conscious like us, and it's the ethical thing to do.
If you apply the same standard within our own species you would be accused of xenophobia. Why should it be different across species?
If all species are the same, what's the difference between vegetables and animals? Or even between the COVID-19 virus and bacteria?
DishBrain is better for marketing than DishNightmares.
It's interesting to see research like this coming up. It might inform AI in unexpected ways, IMO.
"In vitro neurons learn and exhibit sentience when embodied in a simulated game-world".
https://www.sciencedirect.com/science/article/pii/S089662732...
I wonder whether they can plug in a camera and have the brain output the visuals from recall. And would that be called consciousness?
Do we have an open source PetriNeuronSDK to communicate with these yet? I can't wait until we get cloud neurons IaaS.
Makes me wonder how many states there are in a neuron.
Like, they are similar to transistors but have more states than off (0) and on (1).
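A crude back-of-the-envelope, treating the neuron's continuous state as discretized floats (the synapse count is a rough textbook figure and the 32-bit discretization is arbitrary, purely for intuition):

```python
# Crude contrast between a logic transistor and a cartoon neuron.
# A transistor used for logic holds 1 bit. Even a cartoon neuron
# carries a continuous membrane potential plus one weight per synapse.
# ~7,000 synapses/neuron is a rough textbook average; 32-bit floats
# are an arbitrary discretization -- both are assumptions.
transistor_bits = 1

synapses_per_neuron = 7_000
bits_per_value = 32
neuron_bits = (1 + synapses_per_neuron) * bits_per_value  # potential + weights

print(f"transistor: {transistor_bits} bit (2 states)")
print(f"cartoon neuron: ~{neuron_bits} bits (~2^{neuron_bits} states)")
```

And that cartoon still ignores timing, neurotransmitter types, and everything else mentioned upthread.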
They also grew eye-like structures, right?
hell is real
Next step: run Doom
From the website of the lab doing this: https://corticallabs.com
Human neural networks raised in a simulation
The neurons exist inside our Biological Intelligence Operating System (biOS). biOS runs the simulation and sends information about their environment, with positive or negative feedback. It interfaces with the neurons directly. As they react, their impulses affect their digital world.
Our first minds
The dishbrain is currently being developed at the CL0 laboratory in Melbourne, AU. We bring these neurons to life, and integrate them into The biOS with a mixture of hard silicon and soft tissue. Our first cohort have learnt to play Pong. They grow, adapt and learn as we do.
Silicon meets neuron
Neurons are cultivated inside a nutrient rich solution, supplying them everything they need to be happy and healthy. Their physical growth is across a silicon chip, which has a set of pins that send electrical impulses into the neural structure, and receive impulses back in return.
The Ultimate Learning Machine
Those actions have a positive or negative effect in biOS, which the mind perceives, adapting to improve that feedback. The human neuron is self programming, infinitely flexible, the result of four billion years of evolution. What digital models try and emulate, we begin with.
Why?
There are many advantages to organic-digital intelligence. Lower power costs, more intuition, insight and creativity in our intelligences. But most importantly we are driven by three core questions.
What will we discover if our intelligences train themselves?
We know an organic mind is a better learner than any digital model. It can switch tasks easily, and bring learnings from one task to another. But more important is what we don’t know. What are the limits of a mind connected to infinity? What can it do with data it literally lives in?
What happens if we take a shortcut to generalised intelligence?
Machine Learning algorithms are a poor copy of the way an organic neural network functions. So we’re starting with the neuron, replacing decades of algorithms with millions of years of evolution. What happens as these native intelligences start solving the problems we’d previously left to software?
How can we surpass the limits of silicon?
Silicon is raw, rigid, unchanging. Our organic neural networks sit on top of this raw power, but the way they grow and evolve isn’t limited to the software they run on. There is no software, it's coded in their DNA. How will computing change as we shift from hard silicon to soft tissue?
RFN: Request For Neurons
The dishbrain is learning and growing in biOS today, and soon we’re opening an early access preview for selected developers. The biOS is our simulation environment, where you can program tasks, challenges and objectives for our minds. Join our developer program to get early access to our SDK, and secure training time with our minds.
Proper AI.
N.I.
If there are extraterrestrials, "Neurons in a dish learn to play Pong" is probably the sort of thing they say while watching us from their UFOs.
I tend to think that if there are extraterrestrials, they probably look a lot more like neurons in a dish than like us.