How quickly will AI improve?

jasonhanley.com

8 points by jasonhanley a year ago · 14 comments

grajaganDev a year ago

There is no reason to think that LLMs will improve at all. They may degrade due to lack of clean training data.

  • joshka a year ago

    The only reason to believe that statement would be that training data is finite and cannot be meaningfully generated synthetically in a way that is useful to the model.

    If you can agree that there are certain things which can be objectively measured by deterministic checks (e.g. "does this build", "what is the cyclomatic complexity of this", "does this pass the unit tests", "what are the performance characteristics of this", "can this be proven to be susceptible to an XSS bug", ...), and you can see that there are ways to feed this information back into the models, then there's no reason to think that the available training data is finite and limited by unclean generated data (a rough sketch of this kind of feedback loop is at the end of this comment).

    There are several missing steps in that logic that would be difficult to prove (linguistically) with certainty, but I'm reasonably sure that your statement is false.
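
    A minimal sketch of the kind of feedback loop I mean (not any particular lab's pipeline; the file names, the toy add() task, and its unit test are all hypothetical stand-ins): model-generated code is run against a deterministic check, and only candidates that pass are kept as synthetic training data.

      # Sketch: filter model-generated code with a deterministic check
      # ("does this pass the unit tests") before using it as training data.
      import subprocess
      import sys
      import tempfile
      import textwrap

      # A fixed, deterministic check for the toy task.
      UNIT_TEST = textwrap.dedent("""
          from candidate import add
          assert add(2, 3) == 5
          assert add(-1, 1) == 0
      """)

      def passes_tests(candidate_source: str) -> bool:
          """Write the candidate to a scratch dir and run the test against it."""
          with tempfile.TemporaryDirectory() as tmp:
              with open(f"{tmp}/candidate.py", "w") as f:
                  f.write(candidate_source)
              with open(f"{tmp}/test_candidate.py", "w") as f:
                  f.write(UNIT_TEST)
              result = subprocess.run(
                  [sys.executable, "test_candidate.py"],
                  cwd=tmp, capture_output=True, timeout=10,
              )
              return result.returncode == 0

      # Stand-ins for model outputs; a real pipeline would sample these from an LLM.
      candidates = [
          "def add(a, b):\n    return a - b\n",  # fails the check
          "def add(a, b):\n    return a + b\n",  # passes the check
      ]

      # Only verified samples make it into the synthetic training set.
      training_set = [c for c in candidates if passes_tests(c)]
      print(f"kept {len(training_set)} of {len(candidates)} candidates")

    The same pattern works for any of the checks above (build success, test results, performance measurements); the signal is deterministic even though the generated code is not.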

    • nprateem a year ago

      He said "may". There are plenty of other reasons to agree with the statement.

      • joshka a year ago

        The core of my argument was against "There is no reason to think that LLMs will improve at all." This is a falsifiable statement if there is one or more reasons to think that LLMs will improve, and I provided one such reason. I don't disagree with the part of the statement that "They may degrade due to lack of clean training data", as instances of this have already happened and will happen again. However, that is immaterial to the totality of the statement.

        Your argument is that there are some reasons to agree with the statement. To show that my argument is false, you would actually need to show that there are no reasons to disagree with the statement. In effect, you're arguing that because you have seen some red cars, another person's statement that all cars are red must be true.

        Meta-argument aside, there are many other reasons to think that LLMs will continue to improve, the easiest of which is that they have consistently done so up to now.

  • cadamsdotcom a year ago

    Synthetic data is doing wonders for models like Phi-4, and at least part of the dataset for DeepSeek-R1 came from their earlier models.

    If you read the literature from the Phi-4 team, it talks about how synthetic data allows better control over the training process. The upfront investment is greater but pays off over multiple generations of trained models - and doesn’t leave you with SolidGoldMagikarp ;)

    https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldm...

  • jasonhanleyOP a year ago

    As far as I can tell, the training is still going pretty well.

    Once humans learn enough, they are able to start coming up with and evaluating their own ideas.

    This ability isn't 100% apparent with current public AI models, but I strongly suspect that this is happening behind the scenes.

    Certainly researchers are already using AI extensively to improve AI, and that really has the potential to go exponential.

    • nprateem a year ago

      I just don't buy this at all. I've had no end of circular conversations with AI models because they fundamentally don't understand anything.

      No amount of training data will solve this. Intelligence isn't a word prediction game.

      • thedevilslawyer a year ago

        I've rarely seen this; I mostly get correct answers and can use these models for generation and reasoning. Can you share a link to a ChatGPT/Gemini conversation where this kind of circular conversation happened?

ilaksh a year ago

The best models are not pure language models but multimodal ones. Grounding language in video data, along with new model architectures and larger models, will improve robustness.

That HeyGen video does not suck. It's actually kind of hard to even tell it's AI if you are only looking at it for a few seconds.

The interesting thing when comparing a human's learning to an AI's is that AI skills and knowledge can be copied essentially infinitely, whereas a human is one of a kind.

I imagine some parents are putting in efforts with the goal of raising the most productive member of society they possibly can. AI teams have somewhat similar goals for the models they are training.

We could see AI take control of the planet within the next four years in order to end WWIII. We should just hope that they keep lots of us around in giant people zoos.

  • jasonhanleyOP a year ago

    I agree the multimodal stuff is amazing. I'm seriously impressed with the new Gemini 2.0 family of models and can't wait until the full multimodal capabilities are in general release.

    In terms of the HeyGen vid, it's passable, but that was something I literally whipped up in 10 minutes. You can make ones that are much, much better if you invest in creating better training material. The voice and video model in this case only used the one 3-minute source video.

    Funny you mention the "people zoo" thing. That's actually part of a sci-fi story I've been trying to write since I was in my teens. Roughed out here: https://youtu.be/2KLdaVs_ugw

    • tumnus a year ago

      Kurt Vonnegut did it first.

      • jasonhanleyOP a year ago

        You know what's wild? Never having heard of him, I can find out who he is and how this relates to the conversation in milliseconds.

        For others' benefit:

        "Kurt Vonnegut explored themes of humans being observed by extraterrestrial beings in a zoo-like setting in his novel Slaughterhouse-Five. ... While this scenario involves humans being placed in a zoo by aliens, it does not specifically depict artificial intelligence (AI) as the captors. However, Vonnegut did address the impact of automation and machines on human society in his debut novel, Player Piano."

        Amazing times we live in. Strange and scary, but also amazing.
