The I in LLM stands for intelligence (daniel.haxx.se)
I wonder if we'll eventually realize, similar to Solow's productivity paradox, that whatever efficiency "gains" we get from AI are just cancelled out by the increased need for fact-checking, some increased incidence of major errors being blindly trusted, and suboptimal outcomes (e.g. being bamboozled by good copy or fake reviews into paying for an inferior product). All this in addition to the opportunity cost of the brainpower and energy currently being poured into multiple largely-comparable models.
The beautiful thing about our late-stage world is that the appropriate resources will be spent on testing and validation for those who can afford it, and literally every other creature on the planet will have to bear the externalities.
Truly a perfect system.
Better yet: someday, rich people will be able to afford the high-quality LLMs that hallucinate somewhat less often, while plebs will have to make do with lower-quality LLMs that hallucinate worse than someone on an acid trip.
I'm pretty sure I've seen that in some sci-fi, except replace "LLM" with "mods".
> On Hackerone there is no explicit “ban the reporter from further communication with our project” functionality.
That seems to be something HackerOne needs to fix ASAP.
Edit: the author updated the article with a note saying it does exist; it just wasn't as easy to find.
Highly related (and regrettably currently higher on the front page than Daniel's blog link itself): https://news.ycombinator.com/item?id=38845878
Or, more aptly put, AI stands for Absent Intelligence.