Short-term Hebbian learning can implement transformer-like attention

journals.plos.org

47 points by tester89 2 years ago · 4 comments

light_hue_1 2 years ago

Overpromises and underdelivers.

> Hebbian learning, can implement a transformer-like attention computation if the synaptic weight changes are large and rapidly induced

Well this has no analog in the brain either.

In any case, how is this transformer-like attention? Not all attention mechanisms lead to a transformer; certainly the two are not synonymous.
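
For reference, the usual fast-weight reading of that claim goes something like this (a minimal numpy sketch with made-up toy dimensions, not necessarily the paper's exact construction): rapid, large Hebbian outer-product updates store key/value pairs in a weight matrix, and reading that matrix out with a query gives unnormalised linear attention over the stored pairs.

    import numpy as np

    rng = np.random.default_rng(0)
    d, T = 8, 5                                  # toy feature dimension and number of pairs (made up)
    keys = rng.normal(size=(T, d))
    values = rng.normal(size=(T, d))
    query = keys[2] + 0.1 * rng.normal(size=d)   # probe resembling the 3rd key

    # Rapid, large Hebbian updates: fast weights accumulate post * pre outer products
    W = np.zeros((d, d))
    for k, v in zip(keys, values):
        W += np.outer(v, k)

    # Read-out through the fast weights: W @ q = sum_i v_i (k_i . q),
    # i.e. unnormalised (linear) attention over the stored pairs
    hebbian_out = W @ query

    # Softmax attention over the same keys/values, for comparison
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    softmax_out = values.T @ (weights / weights.sum())

Whether that linear read-out counts as "transformer-like" is exactly the question, since the softmax normalisation is a large part of what makes it a transformer's attention.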

> While it is tempting to assume that cortical rhythms might play a role, we have found that match windows around 0.5–2 s are necessary for reducing noise in the spike train comparisons, a timescale much longer than the cycles of theta, beta or gamma rhythms found in the cortex.

Well, that's totally the wrong timescale: theta is roughly 4-8 Hz and gamma 30-100 Hz, so their cycles are an order of magnitude shorter than a 0.5-2 s match window.

There's a glut of these "take some random feature of the brain that already serves a different purpose, use it to implement something that clearly doesn't work but is vaguely reminiscent of something from machine learning, and call the two the same" papers. They contribute nothing.

The few somewhat worthwhile ones actually show a working network on a real task. The real breakthrough will be a paper that works for real, on real data, and can be implemented in the brain; we've got nothing like that. This isn't one of the good ones.

  • JPLeRouzic 2 years ago

    I guess this paper looks very interesting to someone in neuroscience, since we know so little about how the brain works. For a layman like me, it's funny that we went from very imperfect observations of wet neurons to variations on computerized neural networks culminating in transformers, and now the perspective flips again: trying to understand wet biology from computer science.

adamnemecek 2 years ago

This is not surprising considering the paper "Hopfield Networks is All You Need" https://arxiv.org/abs/2008.02217. Hopfield networks learn via Hebbian learning.
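
The update rule in that paper is what makes the link explicit. A rough sketch (toy sizes and beta chosen arbitrarily): the stored patterns act as both keys and values, and one update step is one pass of softmax attention.

    import numpy as np

    rng = np.random.default_rng(1)
    d, n = 16, 10                              # pattern dimension, number of stored patterns (arbitrary)
    X = rng.normal(size=(d, n))                # stored patterns as columns
    xi = X[:, 3] + 0.3 * rng.normal(size=d)    # noisy probe of pattern 3
    beta = 4.0                                 # inverse temperature

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Modern Hopfield update: xi <- X softmax(beta * X^T xi).
    # Read X^T xi as query-key scores and X as values: each step is softmax attention,
    # and iterating it retrieves the stored pattern nearest the probe.
    for _ in range(3):
        xi = X @ softmax(beta * (X.T @ xi))

    print(np.argmax(X.T @ xi))                 # recovers index 3

Classical Hopfield nets store patterns with the Hebbian rule W = sum_i x_i x_i^T; the modern exponential-energy version replaces that dot-product read-out with the softmax one above, which is where the attention connection comes from.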

  • chewxy 2 years ago

    The paper is about implementing transformer-like attention using simulated human neurones.
