Researchers make a “diffusive” memristor that emulates how a real synapse works (nanotechweb.org)
Memristors are fascinating. Here's my question: How much do biologists understand about how synaptic systems work in living organisms? In other words, is this more likely to be helpful to people studying how memories and thoughts are created and retained in living organisms, or to people studying how to make artificial systems behave more like organic ones?
From what I've gathered as a master's student in cognitive science, things like memories are not well understood.
Memories are thought to be stored in the form of persistent patterns across neurons, and we know certain areas of the brain are vital for their formation, but there is no clear model of how memories are formed.
There has been a rapid increase in understanding of such things lately, and I would say neural networks give some insight, as well as ways to explore and experiment with different configurations further.
A physical model like this could probably serve as a proof of concept and maybe help further knowledge in the area, but I suspect computerized simulations will serve that purpose better.
tl;dr: speaking as more or less a layman, I believe we know too little about the brain to build a proper physical brain.
Overall, you are right in your diagnosis: we simply have not located a single mechanism responsible for memory. In reality, it is probably a collective effect arising from a huge maelstrom of chemical computing. Our brain is one of the most imposing dynamical systems ever studied! That's why the mathematics, and in turn the modeling, of it is so beastly.
Right now there are, broadly speaking, two camps in neuroscience. The connectionists believe that memories are largely non-volatile and physically localized; at the extreme of that school is an MIT group that showed particular (individual) neurons linked to particular memories. On the other hand, you have neuroscientists who subscribe to a more plastic conception, in which mass synchronizations and redundant information are constantly combined and recombined.
My take is that it is no contradiction to admit that our brain is clearly volatile and non-volatile in different modes. The transition from STP (short-term potentiation) to LTP (long-term potentiation) probably involves different conformational changes in neurons and dendrites, who knows, maybe even at the epigenetic scale. But we do know the volatility is important and interesting, which is precisely what makes this diffusive memristor study so important!
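To make that volatile/non-volatile distinction concrete, here is a toy sketch (entirely my own, not from the paper) contrasting a non-volatile "drift" state variable, which persists once the input stops, with a volatile "diffusive" one that spontaneously relaxes back toward rest. All constants are illustrative.

```python
import numpy as np

# Toy contrast: non-volatile "drift" state vs. volatile "diffusive" state.
# Loosely, LTP-like vs. STP-like behavior. All constants are illustrative.

dt, tau, gain = 1e-3, 5e-3, 50.0    # time step (s), decay time (s), drive gain

def step(state, pulse, volatile):
    """Advance the internal state variable by one time step."""
    ds = gain * pulse * dt          # input-driven growth (both device kinds)
    if volatile:
        ds -= (state / tau) * dt    # spontaneous relaxation ("diffusion")
    return float(np.clip(state + ds, 0.0, 1.0))

t = np.arange(0.0, 0.05, dt)
pulses = (t < 0.01).astype(float)   # stimulate for the first 10 ms only

w_drift, w_diff = 0.0, 0.0
for p in pulses:
    w_drift = step(w_drift, p, volatile=False)
    w_diff = step(w_diff, p, volatile=True)

print(f"40 ms after stimulus: drift = {w_drift:.2f}, diffusive = {w_diff:.2e}")
```

The point is only that the diffusive state forgets on its own once the stimulus ends, which is exactly the kind of volatility the study treats as a feature rather than a defect.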
I saw pictures labeled as having been extracted from live brain scans. How that was done I don't know, maybe alpha or beta waves. At least that's short-term memory.
Brain scans are not as accurate as one might think. Most, if not all, methods merely show the activation of neurons or groups of neurons over some timescale.
Typically you give a person, say, a memory task: you look at which areas fire up as they try to memorise a word list, and from that you extrapolate whatever you can. I also believe working memory and short-term memory are to some extent better understood than long-term memory. Actually, the least understood process regarding memory, I think, is how memories pass from working memory or short-term memory into long-term memory.
I didn't express myself clearly. They scanned the brain and recreated some part of what the person was seeing.
1. http://news.berkeley.edu/2011/09/22/brain-movies/
2. http://gallantlab.org/_downloads/2011a.Nishimoto.etal.pdf
Spoilers: It is fMRI, indeed.
I'm not sure whether it's the scan that's inaccurate or the representation within the brain, but I suppose it had to be more accurate than just a fuzzy area.
I was surprised today that the entry on Wikipedia still qualifies the memristor as "hypothetical", and says that "there are...some serious doubts as to whether the memristor can actually exist in physical reality".
I thought maybe research was farther along than that.
In truth, it is. To practitioners in the field building circuits or doing simulations with nanodevices, this is something of a tiresome debate.
To give some background: Leon Chua made certain claims about a hypothetical fourth circuit element, and these debates largely stem from claims about circuit analysis and mathematics. Basically, his models predict a perfect device which, to my knowledge, has not been experimentally realized (contrary to HP's claims).
However, the funny thing is that it doesn't really matter. We don't need a perfect memristor to build interesting and useful nanoionic and nano-redox circuits performing non-linear computational tasks. As modelers, though, we do need to be careful making ideal claims about eternal non-volatility and device lifetime (of course). Many point out that, far from being ideal, these devices are extremely variable and imperfect, which is true. Anything built using nanofab techniques at the academic level (excluding semiconductor industrial processes) will be.
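For anyone who wants to see what the near-ideal device model actually looks like, here is a minimal sketch of the textbook linear-drift memristor model from HP's 2008 Nature paper, the device usually held up against Chua's ideal element. All parameter values below are illustrative, and real devices deviate from this in exactly the ways described above.

```python
import numpy as np

# Textbook "linear drift" memristor model (after Strukov et al., 2008).
# Parameter values are illustrative only, not from any specific device.

R_on, R_off = 100.0, 16e3    # doped / undoped limiting resistances (ohms)
D, mu_v = 10e-9, 1e-14       # film thickness (m), ion mobility (m^2 V^-1 s^-1)

x, dt = 0.1, 1e-5            # normalized doped-region width w/D; time step (s)
for k in range(10_000):      # simulate 0.1 s under a slow sinusoidal drive
    i = 1e-3 * np.sin(2 * np.pi * 10 * k * dt)   # 10 Hz, 1 mA current source
    M = R_on * x + R_off * (1.0 - x)             # state-dependent memristance
    x += (mu_v * R_on / D**2) * i * dt           # linear ionic drift of boundary
    x = min(max(x, 0.0), 1.0)                    # hard clamp at the film edges

print(f"memristance after 0.1 s: {M:.0f} ohms")
```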
Btw, if you want more physics depth on this, I can recommend any paper or book by Waser; they are all good. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-3527334173...
Edit: adding a link to the first chapter of the aforementioned book, which I found and which is already rather good. https://application.wiley-vch.de/books/sample/3527334173_c01...
I was confused by that as well. I was under the impression HP made working memristors years ago.
It is an interesting result, but until it is connected with a system that can implement the other parts of learning, we're left with a model of a neuron. Back in the 90s, when neural networks were the big thing the first time, people built what they considered to be very accurate neuron models and connected them together into networks as a way of building a system.
While this gives you a way to do that in hardware, and so potentially much faster and denser than the software systems, the missing bit is the system which connects these things together and feeds them inputs and pulls off outputs such that the whole can be trained. Still looking for that paper.
Could a network be trained in software on powerful, expensive hardware and then programmed onto some kind of neural FPGA that uses these memristors, for use in power- and space-constrained systems?
Short answer: yes. Broadly speaking, there are two options for future ReRAM (memristive) learning systems: in-situ/on-chip learning, in which all learning rules are locally derived and enforced, and ex-situ learning, in which we do what you suggested and import weights from more computationally and power-expensive substrates. There is probably abundant promise in both approaches moving forward.

I recommend looking at some recent papers by the Strukov group [1] as well as my own [2] to see the limitations of these approaches. The Strukov paper skirts around the issue to a certain degree, but they admit in the supplementary material that scaling is not favorable with their approach. Our work takes the 'neural FPGA' approach quite literally. Their approach may, with some improvements, do rather well for an on-chip backprop implementation. Let's see what they do next.

Lastly, as far as hybrid approaches go, there is a really nice recent IBM paper which talks about deep neural net acceleration with ReRAM. If you're really curious, let me know and I'll try to dig it up.

[1] http://www.nature.com/nature/journal/v521/n7550/abs/nature14...
[2] http://www.nature.com/articles/srep31932
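To make the ex-situ option concrete, here is a minimal sketch of the common differential-pair mapping from software-trained weights onto crossbar conductances. The conductance window, the 4-bit write resolution, and the helper names are my own illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4    # assumed programmable conductance window (S)
LEVELS = 16                  # assume ~4 bits of write resolution per device

def program_crossbar(W):
    """Map software-trained weights onto two conductance matrices (G+, G-)."""
    scale = np.abs(W).max()
    Wn = W / scale                           # normalize weights to [-1, 1]
    g_pos = np.where(Wn > 0, Wn, 0.0)        # positive parts -> G+ devices
    g_neg = np.where(Wn < 0, -Wn, 0.0)       # negative parts -> G- devices
    def quantize(g):                         # snap to the finite write levels
        q = np.round(g * (LEVELS - 1)) / (LEVELS - 1)
        return G_MIN + q * (G_MAX - G_MIN)
    return quantize(g_pos), quantize(g_neg), scale

def crossbar_matvec(g_pos, g_neg, v, scale):
    """Analog multiply-accumulate: column currents sum conductance * voltage."""
    i_out = (g_pos - g_neg) @ v              # Kirchhoff current summation
    return i_out * scale / (G_MAX - G_MIN)   # undo the conductance mapping

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.5        # stand-in "trained" weight matrix
g_pos, g_neg, scale = program_crossbar(W)
v = rng.standard_normal(8)
print("ideal   :", W @ v)
print("crossbar:", crossbar_matvec(g_pos, g_neg, v, scale))
```

The recovered products agree with the ideal ones up to quantization error, which is the basic trade the ex-situ approach makes: cheap inference hardware in exchange for limited write precision.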
For a simple multi-layer feed-forward network, couldn't you just use "classical" components, seeing as the parameters of the network would never change (having been trained beforehand)? I.e., would you actually need memristors?
I would love to hear more about your research; please try to dig up the links.
Glad to hear you find it exciting. I do too... it's a really hot field at the moment, and I mean that in the good way, not the bad way ;) Lots of groups are working in parallel on somewhat orthogonal design and architecture issues, with a variety of different devices under consideration, but a common basis set is emerging ;)
So, here's the paper I mentioned above. I think it is very methodical and inventive, and definitely one of the best yet at considering the confluence of DNNs and memristive (ReRAM) devices. A quick search revealed it was already on HN. https://arxiv.org/abs/1603.07341
So, I already mentioned the iconic Strukov paper above, and my own, which is really quite similar to Strukov's in learning strategy/philosophy, except that we used entirely chemical and 'slow' devices, which may be quite interesting for brain emulation. (Remember that the brain operates in the millisecond-to-microsecond regime, not the nanosecond regime.)
Here's another article I stumbled upon just a few days ago which looks quite promising and brings us into the territory of more unsupervised/probabilistic algorithms for learning. http://www.nature.com/articles/ncomms12611
What do you think about the "chip in the loop" approach to training, which was popular in the 90s for hardware NN implementations?