AI Personalization Fuels Groupthink and Uniformity
markozivanovic.com

This really is completely independent of "AI". I remember seeing a famous presentation (maybe it was a TED talk?) about the "echo chamber" effect of personalization from the mid-00s - and IMO nearly all of the negative impacts discussed in that talk came to pass, and then some. AI just makes it worse.
Edit: I think this is what I was referring to, about "filter bubbles": https://youtu.be/B8ofWFx525s?si=rK1T-v5D0sAeiHJe. I was a tad mistaken; it was from 2011.
This seems fundamentally different. Filter bubbles show you more of the externally generated content you engage with. These personalizations are trying to predict the content you generate.
While it may serve as ballast against your personal voice drifting over time, the whole point is to learn you, not to feed you.
I worry about this. I happen to have done quite a lot of programming in Lisp dialects over the last decade or so, but since adopting GPT-4 I tend to just code in Python, because that is what the model understands best. It does seem like AI will enhance network effects by increasing the efficiency gap between technologies the AI knows and those it doesn't. Kind of depressing.
I wonder how much of this is syntactic familiarity (from training) and how much of this is needing to attend to balanced parentheses.
I don't use lisp often enough to have played with getting GPT to lisp with me, but I have played a bit with getting it to read and write Datalog (which I suspect is even more scarce in The Pile dumps). It's ok at recognition but misses details. I haven't seen it produce much of value yet. But it can write JavaScript for days, and has no problem balancing parentheses and brackets there, even without compiler/tree-sitter support.
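For anyone curious, the paren-balancing part at least is mechanical to verify after the fact. A toy sketch in Python (my own illustration, nothing to do with how tree-sitter actually works):

    # Toy stack-based balance checker for model-generated Lisp.
    # Illustration only: it ignores string literals and comments,
    # so parens inside "..." or ;; lines would trip it up.
    PAIRS = {")": "(", "]": "[", "}": "{"}

    def balanced(source: str) -> bool:
        stack = []
        for ch in source:
            if ch in "([{":
                stack.append(ch)
            elif ch in PAIRS:
                if not stack or stack.pop() != PAIRS[ch]:
                    return False
        return not stack  # anything left over is an unclosed opener

    print(balanced("(defun add (a b) (+ a b))"))  # True
    print(balanced("(defun add (a b) (+ a b)"))   # False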
If I had spare experimenting bandwidth, I would look into whether fine-tuning for Lisp format and conventions would show a significant boost in performance.
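For the record, what I have in mind is something like this: a minimal sketch assuming the OpenAI fine-tuning endpoint, where the training file, its contents, and the base model choice are all hypothetical placeholders:

    # Hypothetical sketch of fine-tuning on Lisp conventions via the
    # OpenAI fine-tuning API. File name, example content, and base
    # model are illustrative placeholders, not real data.
    import json
    from openai import OpenAI

    examples = [
        {"messages": [
            {"role": "system", "content": "You write idiomatic Common Lisp."},
            {"role": "user", "content": "Sum a list of numbers."},
            {"role": "assistant", "content": "(reduce #'+ numbers)"},
        ]},
        # ...hundreds more pairs covering formatting, naming, and macros
    ]

    with open("lisp_examples.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    training_file = client.files.create(
        file=open("lisp_examples.jsonl", "rb"), purpose="fine-tune"
    )
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id, model="gpt-3.5-turbo"
    )
    print(job.id)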
I find its Lisp generation to be not just syntactically wrong but conceptually inadequate. Its inability to generalize across languages is one of the big reasons I'm skeptical about language models' general intelligence.
This seems more of a concern for foundation models than for personalization.
Any pressure you feel to adopt Python is not because it has detected you enjoy Python; it's because the global training data skews toward Python.
It's a huge concern, but not this article's concern, I think.
What I'm even more interested in: what about the time when people don't open Google at all and just ask the AI for a product?
Some companies get a 10^7x boost, while 10^7 others get waved goodbye?