Ask HN: Are LLMs capable of generating their own source data?
LLMs will generate data that already fits the model's distribution; you can't create new information out of thin air. But you can use a larger model, or one trained on more data, to generate inputs for a smaller model.
For image-based networks, it's pretty common to increase the amount of training data by cropping, rotating, and adding noise to pictures and feeding them in multiple times. The larger the network, the less useful this is: it will quickly start to overfit and memorize inputs it sees repeatedly. I'm not aware of anyone doing something like that for large language models.
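A minimal sketch of that augmentation idea, assuming torchvision; the crop size, rotation range, and noise scale are arbitrary placeholders, and the noise step is hand-rolled since older torchvision releases lack a built-in Gaussian-noise transform:

    import torch
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomCrop(28, padding=4),   # random crop after padding
        transforms.RandomRotation(degrees=15),  # random rotation up to +/- 15 degrees
        transforms.ToTensor(),                  # PIL image -> float tensor in [0, 1]
        # Hand-rolled additive Gaussian noise (scale 0.05 is an arbitrary choice)
        transforms.Lambda(lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1)),
    ])

Applied in the data loader, the same source image yields a different view each epoch, which is what effectively multiplies the training data.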
> LLMs will generate data that already fits the model's distribution; you can't create new information out of thin air. But you can use a larger model, or one trained on more data, to generate inputs for a smaller model.
I suppose you could use an LLM that's too large and slow for production to generate training data for a smaller one. But even that seems risky, especially since LLM-generated data will almost certainly creep into training sets as time goes on, despite likely efforts to keep it out.
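A hedged sketch of that teacher-generates-data setup, using the Hugging Face pipeline API; the model name and prompts here are placeholders, and the student fine-tuning step is omitted:

    from transformers import pipeline

    # Stand-in for a model too large/slow to serve in production
    teacher = pipeline("text-generation", model="gpt2-xl")

    prompts = [
        "Explain photosynthesis:",
        "Summarize the French Revolution:",
    ]

    # Generate synthetic training examples from the teacher; each call
    # returns a list of dicts with a "generated_text" field
    synthetic = [teacher(p, max_new_tokens=100)[0]["generated_text"]
                 for p in prompts]

    # The synthetic corpus would then be tokenized and fed into a
    # smaller student model's fine-tuning loop (not shown).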
Is generated data commonly used in LLM training?