SlimPajama: A 627B token cleaned and deduplicated version of RedPajama

cerebras.net

60 points by andyk 3 years ago · 7 comments

ftxbro 3 years ago

They better not have removed the good stuff, like the full texts of the subreddit dedicated to counting to a million, the logs of so many hashed numbers from various cryptos, and the tables of datamined stats from like every console game.

  • Nextgrid 3 years ago

    > like the full texts of the subreddit dedicated to counting to a million

    This was the source of the "anomalous tokens" phenomenon, where the usernames of prolific counters were yielding weird and unexpected behavior in the OpenAI models.

    While definitely an interesting scientific curiosity, is there a reason you'd actually want this in a production model?

    • fjkdlsjflkds 3 years ago

      This is not entirely correct, from what I understand. The source of the "anomalous tokens" phenomenon is that those texts were included when training the tokenizer but not when training the models. It is not clear they would necessarily induce the same effect otherwise (i.e., if both the tokenizer and the LLMs were trained on those "counting" texts).

      EDIT: notice that the "tokens" that trigger the "glitch" are not the numbers themselves but the usernames of the people counting on that subreddit (which appear nowhere in the training dataset, due to a cleaning step that removed the "counting" texts).
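
A minimal sketch of the mechanism fjkdlsjflkds describes, using the HuggingFace `tokenizers` library (the toy corpus, vocabulary size, and the well-known "SolidGoldMagikarp" username are illustrative assumptions, not OpenAI's actual pipeline or data): a string that is frequent in the tokenizer-training corpus gets its own merge, but if the documents containing it are cleaned out before model training, the resulting token's embedding is never updated.

```python
# Toy illustration only -- not OpenAI's pipeline or data.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Corpus the tokenizer is trained on: includes counting-subreddit style lines.
tokenizer_corpus = (
    ["SolidGoldMagikarp 412000", "SolidGoldMagikarp 412001"] * 500
    + ["ordinary web text about cats and dogs"] * 500
)
# Corpus the model is trained on: a cleaning step removed the counting texts.
model_corpus = [d for d in tokenizer_corpus if "SolidGoldMagikarp" not in d]

tok = Tokenizer(models.BPE(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
tok.train_from_iterator(
    tokenizer_corpus,
    trainers.BpeTrainer(vocab_size=500, special_tokens=["[UNK]"]),
)

# The frequent username merges into one (or very few) tokens...
print(tok.encode("SolidGoldMagikarp").tokens)
# ...but the model never sees those tokens during training, so their
# embeddings stay at random initialization -- the "glitch token" setup.
print(any("SolidGoldMagikarp" in d for d in model_corpus))  # False
```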

    • orra 3 years ago

      > is there a reason you'd actually want this in a production model?

      I think GP agrees with you, and they were being sarcastic to be funny. It's not always easy to tell in a text-based medium.

wskish 3 years ago

Do they mention anywhere the definition of "low quality" data or the proportion of removed data that was low quality versus duplicate?

They mention "When upsampled, we expect SlimPajama to perform equal to or better than RedPajama-1T when training at trillion token scale." But I guess "upsampling" in this case is just explicit duplication of the training data. So the only potential gains would be from the removal of the low-quality data?

  • yokaze 3 years ago

    > After removing punctuation, space symbols, newlines and tabs, we filtered out documents with less than 200 characters. These documents typically contain only meta data and no useful information.

    > But i guess "upsampling" in this case is just explicit duplication of the training data.

    Possibly, but duplication amounts to weighting, and that is important for unbalanced training sets and improves results in practice (see the sketches below).
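
To make yokaze's point concrete, here is a toy sketch (the source names, sizes, and weights are made up, not SlimPajama's actual mixture): giving a source weight k in the sampling distribution is equivalent, in expectation, to duplicating each of its documents k times in a shuffled training stream.

```python
import random

# Hypothetical mixture: "books" is upsampled 2x relative to its natural share.
sources = {
    "web":   {"docs": [f"web_doc_{i}" for i in range(1000)], "weight": 1.0},
    "books": {"docs": [f"book_doc_{i}" for i in range(100)], "weight": 2.0},
}

def sample_stream(sources, n_steps, seed=0):
    """Yield documents, picking each source in proportion to weight * size."""
    rng = random.Random(seed)
    names = list(sources)
    probs = [sources[s]["weight"] * len(sources[s]["docs"]) for s in names]
    for _ in range(n_steps):
        src = rng.choices(names, weights=probs)[0]
        yield rng.choice(sources[src]["docs"])

stream = list(sample_stream(sources, n_steps=10_000))
frac_books = sum(d.startswith("book") for d in stream) / len(stream)
# Expect ~200/1200 = 0.167 with the 2x weight, vs ~100/1100 = 0.091 without it.
print(f"fraction of steps drawn from books: {frac_books:.3f}")
```

The equivalence only holds in expectation, of course; duplicated data tends to bring diminishing returns compared to genuinely new data, which is part of why the deduplication matters in the first place.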
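
As for the "low quality" definition quoted above, the length filter it describes could look roughly like the sketch below (the 200-character threshold is from the quote; the exact set of punctuation and space symbols stripped here is an assumption, not Cerebras's actual code).

```python
import string

# Characters treated as punctuation / space symbols / newlines / tabs.
_STRIP = str.maketrans("", "", string.punctuation + " \t\n\r")

def passes_length_filter(doc: str, min_chars: int = 200) -> bool:
    """Keep a document only if it has at least `min_chars` characters left
    after removing punctuation, spaces, newlines, and tabs."""
    return len(doc.translate(_STRIP)) >= min_chars

docs = [
    "{'url': 'http://example.com', 'lang': 'en'}",               # metadata-only: filtered out
    "A real article body with enough substantive text. " * 10,   # kept
]
print([passes_length_filter(d) for d in docs])  # [False, True]
```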

Tostino 3 years ago

I'm interested in seeing this scaled up to larger models (30B+ parameters), and the dataset expanded with more high-quality data (scientific papers, more books, more code, etc.).
