Towards understanding multiple attention sinks in LLMs
This project reveals an interesting phenomenon: the LLM converts semantically non-informative tokens into attention sinks through the middle-layer MLPs.
The converted sinks are termed secondary attention sinks because they are weaker than the BOS attention sink.
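A simple way to surface both kinds of sinks is to look at the average attention mass each token receives. The sketch below is illustrative only (the function name, threshold, and toy attention matrix are my assumptions, not the project's code): it flags tokens whose received attention exceeds a cutoff, so a strong BOS sink and a weaker secondary sink both show up.

```python
import numpy as np

def find_attention_sinks(attn, threshold=0.3):
    # attn: (heads, seq, seq) attention weights; rows sum to 1.
    # Average over heads and query positions to get the attention
    # mass each key token receives, then flag high-mass tokens.
    # `threshold` is an illustrative cutoff, not from the paper.
    received = attn.mean(axis=0).mean(axis=0)  # shape (seq,)
    return np.where(received > threshold)[0].tolist()

# Toy attention pattern: position 0 (BOS) absorbs most mass,
# position 3 acts as a weaker, secondary sink.
seq = 6
attn = np.full((2, seq, seq), 0.02)
attn[:, :, 0] = 0.5    # primary (BOS) sink
attn[:, :, 3] = 0.35   # secondary sink
attn /= attn.sum(axis=-1, keepdims=True)  # renormalise rows

print(find_attention_sinks(attn))  # BOS sink and secondary sink indices
```

With this toy matrix the function returns `[0, 3]`: the BOS position and the hypothetical secondary sink.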
This might be related to layer specialisation in LLMs!
An up-to-date paper documenting and analysing the observation is now available on arXiv!