Show HN: less than 650 LOC trainable GPT only using NumPy

github.com

90 points by joennlae 2 years ago · 18 comments

cuuupid 2 years ago

I think people are forgetting that transformer architectures are a much wider field than GPT and predate GPT-3 by 3+ years. Referring to transformer architectures by a branded commercial moniker (GPT) is just going to help cement OpenAI’s brand exposure and, soon, regulatory capture.

For comparison, this would be like referring to convnets as Inception architectures back during the CV boom (or as VGGnets before that).

  • tverbeure 2 years ago

    FWIW: the GitHub project description says “GPT-like”. It’s the title here that dropped the “like”.

  • Paul-Craft 2 years ago

    Mo Gawdat has famously said that GPT-4 was something like "4300 lines of code," and that he could have written that when he was a kid. He's clearly a smart man, so, I think we could extrapolate his comments to claim that a smart college student with some CS knowledge could have written it. These sorts of "GPT in $X LOC" demos pretty much confirm it.

  • __loam 2 years ago

    Regarding regulatory capture: I listened to an interview with Lina Khan, the current head of the FTC, and this exact thing came up as something regulators are worried about. I think regulators are aware of the danger of letting industry insiders regulate their own industry, so I'm hopeful for some sensible regulations that help promote rather than harm competition. The FTC also exists to prevent monopolies.

  • PartiallyTyped 2 years ago

    The most interesting thing in this whole saga is that decoder-only models (a.k.a. causal transformers, like GPT) are as effective as they are.

  • jimmyl02 2 years ago

    One small difference is that the GPT architecture is just the decoder stack of the original transformer, as opposed to the full encoder-decoder stack in the original paper (see the sketch below).

    I agree the branding play on GPTs in general is pretty smart and strong from OpenAI though.
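    A rough NumPy sketch of that distinction, purely illustrative (the names and shapes are made up, not the repo's API): a decoder-only block runs causal self-attention over its own input, with no cross-attention back to an encoder.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def causal_self_attention(x, Wq, Wk, Wv, Wo):
        """Decoder-only attention: each position attends only to itself and
        earlier positions (the causal mask); there is no encoder to cross-attend to."""
        T, d = x.shape
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d)
        mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future positions
        scores[mask] = -1e9                               # masked out before softmax
        return softmax(scores) @ v @ Wo

    # illustrative usage with random weights
    rng = np.random.default_rng(0)
    T, d = 8, 16
    x = rng.normal(size=(T, d))
    Wq, Wk, Wv, Wo = [0.02 * rng.normal(size=(d, d)) for _ in range(4)]
    print(causal_self_attention(x, Wq, Wk, Wv, Wo).shape)  # (8, 16)
    ```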

    • cchance 2 years ago

      Honestly, I feel like the fact that everyone is just calling LLMs "GPT" at this point doesn't really help OpenAI; "ChatGPT" would. Unlike "googling", which became synonymous with searching the internet, "GPT" isn't shorthand for OpenAI-ing something; it has simply become what people call LLMs lately. The fact that the term isn't the company's name, and that nobody says the full "ChatGPT-ing", sort of breaks that hold, I feel.

  • joennlaeOP 2 years ago

    The author here: I absolutely agree with you. I went for a slightly catchier title.

  • quickthrower2 2 years ago

    Is GPT subject to trademark? It stands for Generative Pre-trained Transformer.

gfaure 2 years ago

Nice! The README mentions `LayerNorm` is implemented here, but while it's in the equivalence tests with PyTorch, I don't see it in the implementation.

  • dauertewigkeit 2 years ago

    It's part of the TensorLi definition where all the magic happens.

    • joennlaeOP 2 years ago

      That is true. I went for a simple implementation of the layer norm and included it directly in the tensorli definition, but it would have been better to define it as a moduli for clarity.
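      For reference, a layer norm in plain NumPy is only a few lines, which is presumably why it fits comfortably inside the tensor class; this is a generic sketch, not the repo's actual code.

      ```python
      import numpy as np

      def layer_norm(x, gamma, beta, eps=1e-5):
          """Normalize over the last axis to zero mean and unit variance,
          then apply a learnable scale (gamma) and shift (beta)."""
          mean = x.mean(axis=-1, keepdims=True)
          var = x.var(axis=-1, keepdims=True)
          return gamma * (x - mean) / np.sqrt(var + eps) + beta

      x = np.random.randn(4, 16)
      out = layer_norm(x, np.ones(16), np.zeros(16))
      print(out.mean(axis=-1))  # ~0 for every row
      ```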

p1esk 2 years ago

I wonder how easy it would be to port this library from numpy to cupy.
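CuPy mirrors a large part of the NumPy API, so for a pure-NumPy codebase the port can be close to an array-module swap. A hypothetical sketch (whether it works as-is depends on which NumPy features the library actually relies on):

```python
# Hypothetical backend switch; `xp` stands in for whichever array module is available.
try:
    import cupy as xp   # GPU arrays with a NumPy-compatible API
except ImportError:
    import numpy as xp  # CPU fallback

def gelu(x):
    # the same expression runs unchanged on either backend
    return 0.5 * x * (1 + xp.tanh(xp.sqrt(2 / xp.pi) * (x + 0.044715 * x**3)))

print(gelu(xp.linspace(-3, 3, 7)))
```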
