Ask HN: Can tokenizers go from fixed length tokens to varying length?

3 points by jinen83 a year ago · 4 comments


I am going through a workshop on building an LLM from the ground up. While studying tokenizers like BPE, I was curious why we don't borrow ideas from encoding techniques like Huffman coding and build better-optimized encoders. As it stands, a token like '.' takes the same size as a token for the word 'Algorithm'. Couldn't exploiting their frequency of occurrence and their size save us some GPUs?

PaulHoule a year ago

The token number is the index for an embedding. If there are 50,000 distinct tokens then there are 50,000 different embeddings that could be presented to the input of the neural network. I suppose you could compress a list of tokens with Huffman or gzip or some similar algorithm, but as far as the neural network is concerned one token is one slot of input, so you wouldn't save anything once you got into the network.
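
To make that concrete, here is a minimal sketch (the vocabulary size, embedding dimension, and token ids below are all made up) of how a token id is just a row index into a single embedding table, so '.' and 'Algorithm' each occupy exactly one input slot regardless of how many characters they cover:

    import numpy as np

    vocab_size = 50_000   # number of distinct tokens (illustrative)
    d_model = 768         # embedding dimension (illustrative)

    # One embedding vector per token id; the id is just a row index.
    embedding_table = np.random.randn(vocab_size, d_model).astype(np.float32)

    # Hypothetical ids: '.' and 'Algorithm' could each be a single token.
    token_ids = [13, 42_512]             # made-up ids
    inputs = embedding_table[token_ids]  # shape (2, 768): one slot per token either way
    print(inputs.shape)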

pizza a year ago

In a way, some things like this are possible/have been explored.

In Huffman coding, the code length for a symbol x is the length of the path to x in the binary code tree, which is roughly log2(1/p(x)).

BPE works by combining the most frequent pair of adjacent tokens into a new token. So generally speaking, the most frequent tokens end up earliest in the vocab, with lower indices. The number of bits needed to encode an index is about log2(i(x)). In theory, if you could have variable-width int indices, then shorter indices would correspond to the more frequent tokens, so log2(i(x)) would correlate somewhat with log2(1/p(x)) and you would end up getting some compression.
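
A rough sketch of that idea, assuming the ids really are ordered so that smaller ids mean more frequent tokens (the tokens and ids below are made up): a variable-width integer code, here a simple LEB128-style varint chosen just for illustration, spends fewer bytes on small ids, so the encoded length tracks log2(i(x)) and hence roughly log2(1/p(x)).

    def varint_encode(i):
        # LEB128-style varint: 7 payload bits per byte, high bit marks continuation.
        out = bytearray()
        while True:
            byte = i & 0x7F
            i >>= 7
            if i:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)

    # Hypothetical ids, ordered so that frequent tokens have small indices.
    for token, idx in [("the", 5), ("algorithm", 9_021), ("tokenizer", 48_777)]:
        print(token, idx, "->", len(varint_encode(idx)), "byte(s)")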

Finally, it's also worth mentioning that somebody did try doing an LLM over entropy compressed tokens - there were some compute efficiency gains, but it didn't really help language modeling:

In this paper, we explore the idea of training large language models (LLMs) over highly compressed text. While standard subword tokenizers compress text by a small factor, neural text compressors can achieve much higher rates of compression. If it were possible to train LLMs directly over neurally compressed text, this would confer advantages in training and serving efficiency, as well as easier handling of long text spans. The main obstacle to this goal is that strong compression tends to produce opaque outputs that are not well-suited for learning. In particular, we find that text naïvely compressed via Arithmetic Coding is not readily learnable by LLMs. To overcome this, we propose Equal-Info Windows, a novel compression technique whereby text is segmented into blocks that each compress to the same bit length. Using this method, we demonstrate effective learning over neurally compressed text that improves with scale, and outperforms byte-level baselines by a wide margin on perplexity and inference speed benchmarks. While our method delivers worse perplexity than subword tokenizers for models trained with the same parameter count, it has the benefit of shorter sequence lengths. Shorter sequence lengths require fewer autoregressive generation steps, and reduce latency. Finally, we provide extensive analysis of the properties that contribute to learnability, and offer concrete suggestions for how to further improve the performance of high-compression tokenizers.

https://arxiv.org/abs/2404.03626

  • PaulHoule a year ago

    To tease out the issues: I see the BPE algorithm as similar to Huffman coding in the way it builds the token list: at each point it creates a token that is optimal for "compressing" the data, in that it represents a commonly occurring sequence.

    As BPE is going to create tokens for frequent words, it is going to gain over just representing the text with single-byte tokens like "a" or 0xb7; however, you take a loss in that "a" itself gets puffed up to two bytes or more.

    In an ordinary implementation of BPE I think the order of the token ids is not significant, so depending on how it was coded up I wouldn't count on "lower token # = more frequent". But you don't need to leave it to chance: you could just sort the tokens by frequency. Then, if you wanted to compress the token sequence, you could use something like

    https://en.wikipedia.org/wiki/Golomb_coding

    or the variable-length scheme used in UTF-8 encoding to spend fewer bits on the more common tokens. This doesn't affect the neural network part, but it would let you encode the token sequence in fewer bits than it would take otherwise. I'm not sure how it competes with more commonly used compression algorithms. On the other hand, if you want to use the whole neural network to compress, you can do way better:

    https://arxiv.org/abs/2306.04050

    Either way, whether you use the tokens directly or the LLM, you have the disadvantage that the decompressor needs a huge data structure that has been trained on a lot of text, but boy do the people in that last paper get a result way better than anything reported so far.
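
    A small sketch of the rank-then-recode step described above, under the assumption that we sort the vocabulary by corpus frequency ourselves rather than trusting the BPE ids (the token ids, counts, and Rice parameter below are made up for illustration): remap each id to its frequency rank, then apply a Golomb-Rice code so the common tokens cost fewer bits.

        from collections import Counter

        def rice_encode(n, k):
            # Golomb-Rice code with parameter M = 2**k:
            # unary-coded quotient, then k-bit remainder; returns a bit string.
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        # Toy token-id sequence (made-up ids).
        ids = [17, 3, 17, 17, 256, 3, 17, 1024, 3, 17]

        # Remap ids to frequency ranks: most common token -> rank 0.
        ranks = {tok: rank for rank, (tok, _) in enumerate(Counter(ids).most_common())}
        bits = "".join(rice_encode(ranks[t], k=1) for t in ids)

        print(f"{len(ids)} tokens -> {len(bits)} bits "
              f"(vs {len(ids) * 16} bits at a fixed 16 bits/token for a ~50k vocab)")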
