Attention Sinks in LLMs for endless fluency
Various experiments on the recent Window Attention with Attention Sinks / StreamingLLM approach indicate that it clearly improves the inference fluency of pretrained LLMs, while also reducing VRAM usage from growing linearly with sequence length to constant.
It can be applied to pretrained LLMs with little to no additional effort, and Hugging Face transformers is working on first-party support. Until then, the third-party module from the blog post already works well.
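For reference, a rough sketch of how that drop-in module is used. The class mirrors the transformers API; the attention_sink_size and attention_sink_window_size argument names are assumptions based on the package's README, so check the repo for the current signature:

    # Sketch of the drop-in `attention_sinks` usage (argument names assumed
    # from the package README; verify against the current release).
    from transformers import AutoTokenizer
    from attention_sinks import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        device_map="auto",
        attention_sink_size=4,            # number of initial "sink" tokens kept forever
        attention_sink_window_size=1020,  # sliding window of the most recent tokens
    )
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

Everything else (tokenization, generate loops) stays the same as with plain transformers.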
It appears this is an open-source implementation of the same "Efficient Streaming Language Models with Attention Sinks" paper from MIT that was linked here 7 days ago. The paper was published on Sept 29, 2023.
That is exactly correct
The paper published by Xiao et al. (2023)[0] states that "a surprisingly large amount of attention score is allocated to the initial tokens, irrespective of their relevance to the language modeling task" (p. 2). Does that mean that task prefixes used for LLM generation (e.g. "translate: [sentence]") are actually attention sinks? Or are they not? I don't really understand what they mean by "irrespective of their relevance to the language modeling task."
By "irrespective of their relevance to the language modeling task", the authors mean that the semantic meaning of the tokens is not important. These 4 tokens can be completely replaced by newlines (i.e. tokens with no semantic meaning), and the perplexity as measured on a book of 65k tokens is nearly unaffected.
The key point is that these tokens are simply used to "offload" attention scores; their semantic meaning is irrelevant.
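To make the cache policy concrete, here is a minimal sketch (not the paper's or the package's actual code) of the StreamingLLM-style eviction rule: keep the first few sink tokens plus a sliding window of the most recent tokens, and drop everything in between. The 4 + 1020 sizes are the commonly cited defaults:

    # Minimal sketch of a StreamingLLM-style KV-cache eviction rule
    # (illustrative only; sizes follow the commonly cited 4 + 1020 setup).
    def kept_indices(cache_len: int, sink_size: int = 4, window_size: int = 1020) -> list[int]:
        """Indices of KV entries to keep: the first `sink_size` attention-sink
        tokens plus the most recent `window_size` tokens."""
        if cache_len <= sink_size + window_size:
            return list(range(cache_len))
        return list(range(sink_size)) + list(range(cache_len - window_size, cache_len))

    # e.g. with 2000 cached tokens, positions 0-3 and 980-1999 are kept,
    # so memory stays constant no matter how long generation runs.
    assert len(kept_indices(2000)) == 4 + 1020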
So, llama.cpp already somewhat supports this: https://github.com/ggerganov/llama.cpp/issues/3440