Ask HN: Why are we still calling LLMs LLMs?
I was just thinking: with them being trained on text, images, audio, video, and so on, shouldn't they be called LIMs, Large Information Models?

You could argue that language is still the point: they are not trained to generate, edit, or manipulate images, audio, and video themselves. They are trained to generate summaries, or to write prompts for other models that do generate, edit, and manipulate those media. (Unless, of course, I am horribly behind the times and all kinds of other tasks are being stuffed into the LLMs themselves.) And they all usually share the ability to answer via text, which is presumably why we still call them LLMs.

But by now they model information in all its forms, not only language. For me, saying "LLM" is like using the floppy disk as a save icon. Or maybe LMM, Large Multimodal Model?