A deep learning framework for neuroscience
I see a lot of criticism here saying things like "DNNs have nothing to do with brains, they weren't designed to work like brains, and any resemblance is surely just an artifact of training them to do brain-like things."
The fact is, there have been neuroscientists working with neural network models with greater and lesser complexity than DNNs for decades. They've been utilized to great profit outside of neuroscience lately, but that doesn't make them not an abstraction of some aspects of cortical computation.
We don't quite understand how brains could perform or approximate backprop yet, but it's the only training algorithm that has been remotely successful at training networks deep enough to do human-like visual recognition. So many people take that as a big clue as to what we should be looking for in the brain to explain its great performance and ability to learn, rather than a reason to disqualify DNNs entirely.
There's plenty of modeling work going on with more traditional biophysical models, such as those that include spiking, interneuron compartments, attractor dynamics, etc. This is just an attempt to also come at the problem from the other direction, starting from something that we know works well (for vision) and trying to figure out how to ground it in biophysical reality.
I don't think anyone is trying to disqualify DNNs. I think the difference might be an abstraction of a neuron vs. an abstraction of the brain. Success or value doesn't necessarily equate to "human-like." The paper seems either naive to, or to ignore, prominent, long-standing related research that provides a stronger foundation and, as far as I can tell, includes what they propose. So, at least for me, I'm not sure what the contribution is.
Function optimization in the deep learning sense has nothing to do with neuroscience. I hope they aren't fitting this model to brain processes just because it's popular.
In a range of domains, in particular higher-level brain areas, DL models trained on images are already the best predictive models of brain function. If they are better than all other models at describing the data, why would we say they have nothing to do with neuroscience?
As far as I know there is no evidence that the brain has any analogue to the back-propagation used to train pretty much all modern neural networks. Back-propagation is a good way to optimize neural networks, but it doesn't seem to be the way brains optimize neural networks.
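For concreteness, what backprop computes (in its simplest, single-weight form) is just a chain-rule gradient; the open question is how neural tissue could compute and transport such error signals. A toy sketch, not from the paper, with all numbers made up:

```python
# Minimal sketch of gradient descent with a hand-derived gradient,
# the mechanism backprop generalizes to deep networks via the chain
# rule. One weight, one training example; values are illustrative.

x, y = 2.0, 6.0   # want w * x == y, so the ideal weight is 3.0
w = 0.0           # initial weight
lr = 0.05         # learning rate

for _ in range(200):
    y_hat = w * x                # forward pass
    grad = 2 * (y_hat - y) * x   # d(loss)/dw for loss = (y_hat - y)^2
    w -= lr * grad               # gradient descent step

print(round(w, 3))  # converges to 3.0
```

Even this one-weight case needs the error (y_hat - y) fed back to the weight update, and it's exactly that feedback pathway that has no obvious biological counterpart.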
Well, the original paper has a pretty good summary of how the brain may actually do backprop.
What do you mean by "original paper" here?
DL models are also the best way to predict the behavior of three-body systems in physics. Would you say DL models tell us something about physics?
You're talking about the output of a deep network predicting the solution to a problem it was trained on. They're talking about something completely different: the properties of the whole network (opening up the "black box") correlating with/predicting properties of brain regions while they perform similar tasks.
Because any set of math functions might do really well at predicting within a certain domain, and then produce noise or worse on new cases outside of the trained area. Perhaps more importantly, from a psychological point of view, a common substitution error for humans is replacing one not-understood system (the mind) with another (a black box trained via NNs), and that may be incentivized, too.
I find it disappointing that the paper makes no mention of Numenta, TBToI or HTM. How is what they are proposing not already included in Numenta's work (informally, of course)? Plus, Numenta's work seems to go much further confronting biological plausibility head-on.
I wish there was more content about that on HN in general.
The book On Intelligence by Jeff Hawkins was a fantastic read on HTM and similar concepts. (https://amzn.to/2JyQDF3)
Anyone up for a summary? I didn't get much from the abstract.
any article containing the word "framework" right in the title is either philosophical mumbo-jumbo, or an incomplete documentation for an over-engineered first project of a recent CS graduate.
They're arguing that artificial neural nets are useful models of brain function and anatomy. A lot of people in the field of neuroscience strongly disagree, hence the authors' attempt to outline the utility of ANNs.
I also tend to be skeptical that ANNs are very useful as a model for brain function. In vivo neural networks are so complex and so dynamic compared to ANNs.
In my opinion, the fact that even such a massively simplified model of one specific subtype of neural processing has been able to give results as powerful as we have seen from Deep Learning should give us an appreciation for how much there still is for us to learn about this staggeringly complex system.
I would guess that the next great advancements will come from using better understandings of the brain to build better ANNs, not the other way around.
Learning's likely to be bidirectional. The ANN (as a mathematical analogue) is independent of the biological function (its original and key inspiration). Advances in network architecture (e.g. the recent trend toward skip connections and parallel processing) are likely to give insight into how an underlying, more complex system might operate. In particular, systematic errors made by ANNs under given frameworks tend to exist in some form in psychology and biology. Since conceptual thinking from both domains can feed directly into the other, it's a rare bootstrap moment with the potential for rapid advances in both directions.
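As a toy illustration of the skip-connection idea mentioned above (the block and all weights here are made up, not taken from any real architecture):

```python
import numpy as np

# Sketch of a residual ("skip") connection: the layer's output is
# added to its input, so the layer learns a correction rather than
# a full transformation, and gradients can flow through the addition
# unchanged. Purely illustrative toy block.

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    h = relu(x @ w1)      # ordinary feedforward path
    return x + h @ w2     # plus the identity "skip" path

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
w1 = rng.normal(size=(4, 4)) * 0.1
w2 = rng.normal(size=(4, 4)) * 0.1

y = residual_block(x, w1, w2)
print(y.shape)  # (1, 4): output matches input shape, as the skip requires
```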
> Advances in network architecture (e.g. the recent trend toward skip connections and parallel processing) are likely to give insight into how an underlying, more complex system might operate.
Maybe. The thing about these advances in ANNs is, do we have any reason to believe they have anything to do with the way biological neural networks work? It might be the case that these kinds of advances correlate with a more accurate understanding of how our brains process information, or it might be the case that they are just optimizations of a mathematical model which is fundamentally different from biological intelligence.
To me, advances in the other direction are much more compelling. We actually know quite a lot about how biological neural networks work. The way that electrical and chemical signals are transmitted is quite well understood, and can be accurately modeled with equations derived from physics and physical chemistry. At the moment, the problem seems to be more about how to accurately model, at scale, a system we already have tons of data on.
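For a sense of what those physics-derived models look like, here's a minimal leaky integrate-and-fire neuron, far simpler than a full Hodgkin-Huxley model; all parameter values are illustrative, not fitted to data:

```python
# Leaky integrate-and-fire neuron:
#   dV/dt = (-(V - V_rest) + R * I) / tau
# with a spike and reset whenever V crosses threshold.
# Integrated here with a simple Euler step; numbers are made up.

V_rest, V_th, V_reset = -70.0, -55.0, -75.0  # mV
tau, R = 10.0, 10.0                          # ms, MOhm
dt, I = 0.1, 2.0                             # ms, nA (constant input)

V = V_rest
spikes = 0
for _ in range(int(1000 / dt)):              # simulate one second
    V += dt * (-(V - V_rest) + R * I) / tau  # Euler integration step
    if V >= V_th:                            # threshold crossing
        spikes += 1
        V = V_reset                          # reset after the spike

print(spikes)  # steady firing rate for the constant input current
```

Even this crude model captures a key qualitative behavior of real neurons (tonic firing under constant drive) that standard ANN units simply don't have.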
It's not that I think these innovations in ANNs have no value; it's just that ANNs seem quite tangential to neuroscience.
From studying ANNs, I've reconceived of how I view myself from a programmatic perspective. I have used the resulting models to change myself in useful ways. If they're not perfectly accurate, they may still be accurate enough to be useful.
The trick, to me, is to avoid falling into the trap of thinking imperfect models aren't useful. Then the accuracy matters less.
An example of a useful intuition was realizing choosing to believe something is a skill and I can choose to believe the opposite of anxious thoughts to safely defuse anxiety as long as I'm meeting my needs.
I know people who've been in therapy for a long time before learning that one, so I'm gonna keep using ANNs as a guide for self-hacking. It's way too useful to me.
It's fine and good that ANNs can serve as a metaphor for your own mind. That's something very different than saying they're going to be useful in unraveling the scientific mysteries of the brain.
No, I don't think so. In the abstract the authors propose that ANNs may be useful to model the brain in three ways: 1) objective functions (brain-mediated physiological outcomes), 2) learning rules (how patterns are recognized and knowledge is gained), and 3) 'architectures' (which I assume to mean patterns of neural wiring and how info flows).
They are NOT proposing that neural net architectures are analogous to brain architectures. I'm guessing their focus is on the lowest-level activities in simple brains, perhaps afferent/efferent sensory perception, metabolic and physiological regulatory control -- the kind of things instrumented in worms like C. elegans.
> I'm guessing their focus is on the lowest-level activities in simple brains, perhaps afferent/efferent sensory perception, metabolic and physiological regulatory control -- the kind of things instrumented in worms like C. elegans.
Could be, but most of the authors of the paper work in human cognition, so I'm slightly skeptical that their focus is going to be at the level of cellular biology.
Yeah, it'd be nice if they offered an illustration of how they'd approach an amenable problem.
Blake Richards, the first author, is active on twitter and posted a quick overview of his argument here: https://twitter.com/tyrell_turing/status/1188868918250745863
My 8-month-old daughter suffers from cortical visual impairment after contracting bacterial meningitis caused by E. coli during the birthing process. She had to have a bilateral craniotomy to have isolated areas of the infection carved out of her brain tissue.
Looking at this article, I wonder if we'll ever be able to figure any of this out. I feel pretty hopeless about the entire situation.
I don't think we need to have a full understanding of the brain to make progress on those fronts. If you look at Neuralink (https://www.youtube.com/watch?v=r-vbh3t7WVI) there is some pretty amazing brain-computer interfacing already happening. If we assume Moore's law holds up for neural sensor resolution, then within 10-15 years there will be as many input sensors as there are cones in the human eye. There are also plenty of existing technologies that can help make her life easier.
I'm not saying "don't worry about it"; as an uncle with two nieces, I know how difficult it would be to just ignore something like this. You want your family and dependents to have happy, full lives, and every little struggle makes you worry and think. But what I am saying is: have a little hope. Articles like this are normally talking about theoretical and philosophical meanings of intelligence and consciousness; there is plenty of solid, practical, applied science and progress that relates to your daughter's situation.
Don't get yourself worked up about the ponderings of math and computer nerds. The real life-changing stuff is not being done in AI right now; it's being done in universities, hospitals, and laboratories by scientists, doctors, and professors.
I feel your pain. But please, do not lose hope! The good news is there is now exponential progress in neuroscience and brain-machine interfaces. And above all, try to focus on what your child does have or can have. Our weaknesses sometimes turn out to be strengths.
I'm sorry to hear that, sounds like an impossible situation. Sending good vibes your way.
I'm so sorry to hear that :( Hang in there.
Many visually impaired people have long and happy lives! Best to you and your family at this difficult time.
Seek Jesus. Miracles happen.
This might be a little late, but it may be helpful:
https://singularityhub.com/2019/10/03/deep-learning-networks...
I'm not sure, but I think it's trying to say that deep learning as it stands is modeled on one aspect of a model of the brain, and that developing out the three aspects they identify and having them act in unison would potentially be a good thing. Disclaimer: I am neither a neuroscientist nor a deep learning expert!
What I understood is that they're saying deep learning relies on understanding neural processing in 3 parts: objective functions (activation functions maybe?), learning rules (I guess like back-prop/gradient descent?) and architecture (I assume network structure)?
So it sounds like they want to use this componentization of neural processing to try to understand biological neural networks better.
The objective function is the entire function the network is training toward, i.e. in a classification task it's the correct mapping of images to labels. The idea here is that real brains also optimize their weights to compute certain useful objective functions.
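A minimal sketch of an objective function in that sense, using cross-entropy for a classification task (all the numbers are made up for illustration):

```python
import math

# Cross-entropy: a scalar measure of how far the network's predicted
# class probabilities are from the desired labels. Learning adjusts
# weights to reduce this value; the paper's claim is that brains may
# be characterized by analogous (if unknown) objective functions.

def cross_entropy(predicted_probs, true_label):
    # Penalizes assigning low probability to the correct class.
    return -math.log(predicted_probs[true_label])

good_guess = [0.1, 0.8, 0.1]  # confident and correct for label 1
bad_guess = [0.6, 0.2, 0.2]   # mostly wrong for label 1

print(cross_entropy(good_guess, 1) < cross_entropy(bad_guess, 1))  # True
```

The point is that the objective function describes *what* the system is optimizing, separately from the learning rule (*how* it optimizes) and the architecture (*what structure* it optimizes over).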