GPT-4's Has Been Revealed
thealgorithmicbridge.substack.com

The title is missing a word; it should be: "GPT-4's Secret Has Been Revealed"
To summarize, the author took what George Hotz said (8x200b models) and spun it into an article that talks absolute nonsense for the entirety of its 1,207 words.
I almost stopped reading once I saw Hotz being treated as a serious source.
Glad I'm not the only one with this take. Assuming it's even true, orchestrating the interactions of 8x200b models seems like a fairly significant advancement in itself.
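Worth spelling out what "orchestrating" usually means here: in a mixture-of-experts setup it isn't eight separate chatbots passing messages around, but a small learned router inside the network that sends each token to the top-k experts and mixes their outputs. A toy PyTorch sketch of that gating idea; nothing about GPT-4's internals is confirmed, and the sizes and k=2 are purely illustrative:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyMoE(nn.Module):
        """Token-level top-k routing over n_experts feed-forward blocks."""
        def __init__(self, d_model=64, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)  # gating network
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x):  # x: (tokens, d_model)
            scores, idx = torch.topk(self.router(x), self.k, dim=-1)
            weights = F.softmax(scores, dim=-1)  # (tokens, k)
            out = torch.zeros_like(x)
            for slot in range(self.k):  # mix the k chosen experts per token
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    print(ToyMoE()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])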
My feeling is that the next generation of AI is going to be modular: connecting disparate models that serve different purposes into a processing pipeline that achieves a single goal. This, and the new function calling API, seem to indicate it's an area of interest.
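For anyone who hasn't tried it, the function calling flow is roughly: you describe your tools as JSON schemas, the model may return a structured call instead of prose, and your code dispatches it to the right component. A minimal sketch against the pre-1.0 openai Python SDK; the run_sql tool name and schema are made up for illustration:

    import json
    import openai  # pre-1.0 SDK assumed; openai.api_key set elsewhere

    # Hypothetical tool the model may choose to call.
    functions = [{
        "name": "run_sql",
        "description": "Run a read-only SQL query against the analytics DB",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

    response = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=[{"role": "user",
                   "content": "How many users signed up last week?"}],
        functions=functions,
        function_call="auto",  # let the model decide whether to call the tool
    )

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        args = json.loads(message["function_call"]["arguments"])
        # Dispatch to the matching component in the pipeline here.
        print("model wants:", message["function_call"]["name"], args)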
I'd like to understand what's different between the eight 200b models. What are their categories/specialties?
Has any work been done on classifying training data into groups, then classifying prompts against the same criteria and sending each prompt to the corresponding domain-specific model?
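The routing half of that idea is easy to prototype: score an incoming prompt against reference text for each domain and send it to whichever domain-specific model matches best. A toy scikit-learn sketch; the domain corpora and labels are placeholders, not anything from the article:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder corpora standing in for clustered training data per domain.
    domains = {
        "code": "def return class import compile stack trace exception",
        "medicine": "patient diagnosis symptom treatment dosage clinical trial",
        "law": "contract liability statute plaintiff court ruling appeal",
    }

    labels = list(domains)
    vectorizer = TfidfVectorizer()
    domain_matrix = vectorizer.fit_transform(domains.values())

    def route(prompt: str) -> str:
        """Pick the domain whose reference text best matches the prompt."""
        scores = cosine_similarity(vectorizer.transform([prompt]),
                                   domain_matrix)[0]
        return labels[scores.argmax()]

    # Each prompt would then go to that domain's specialist model.
    print(route("What is the standard dosage for ibuprofen?"))  # -> medicine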