Neural network from scratch

sirupsen.com

208 points by bretthopper 4 years ago · 42 comments

cercatrova 4 years ago

If you actually want to understand and implement neural nets from scratch, look into 3Blue1Brown's videos as well as Andrew Ng's course.

https://www.3blue1brown.com/topics/neural-networks

https://www.coursera.org/learn/machine-learning

  • LittlePeter 4 years ago

    I completed Andrew Ng's Coursera course and one thing it did not do is make me understand neural nets from scratch. Probably you and I have different interpretations of "from scratch".

  • MarkMc 4 years ago

    I actually tried to implement a neural network from scratch by following 3Blue1Brown's videos, and using the same handwritten-digit data set. But I got stumped when I realized I didn't have a clue how to choose the step size in gradient descent, and it's not covered in the videos. Despite that problem I'd say the 3B1B videos are excellent for learning the fundamentals of neural networks.
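In practice the step size (learning rate) is usually picked empirically and tuned. As a pure-Python illustration (not from the videos), here is gradient descent on the quadratic f(w) = (w - 3)^2, where the learning rate alone decides between convergence and divergence:

```python
# Minimal sketch: gradient descent on f(w) = (w - 3)^2, whose gradient is
# f'(w) = 2 * (w - 3). The step size (learning rate) decides whether the
# iterates converge smoothly, oscillate, or blow up.

def descend(learning_rate, steps=50, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)         # derivative of (w - 3)^2 at the current w
        w -= learning_rate * grad  # the gradient descent update
    return w

print(descend(0.1))  # small step: converges close to the minimum at w = 3
print(descend(1.1))  # too large: every step overshoots and w diverges
```

A common starting point is to try a few learning rates spaced by factors of 10 (0.1, 0.01, 0.001) and watch the loss curve.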

  • segal 4 years ago

    I found Andrew Ng’s Deep Learning Specialization much better for understanding neural networks than the machine learning course. https://www.coursera.org/specializations/deep-learning

alephxyz 4 years ago

"from scratch" but uses autograd and glosses over backpropagation.

  • Imnimo 4 years ago

    Yeah, the autograd choice struck me as odd. Given how simple the model is, it feels like it would have been easy to show how to compute gradients. The whole benefit of having this super simple toy problem is that we can reason about the meaning of individual weights - it's a perfect opportunity to build clear intuition about gradients and weight updates. Switching to torch is just substituting one black box for another - to a novice reader, the torch code is just magical incantations.

    • ogogmad 4 years ago

      This could be the start of a breadth-first approach, where you start with very little code and then dig deeper into things like autograd or "backprop" as you become interested in such details.

      It seems to me that trying to give explicit formulas for gradients is just swamping the beginner with unnecessary details that don't help to build intuition. I think the author made exactly the right choices.

      It used to be that some NN tutorials would swamp the beginner with backprop formulas, which beginners were forced by their professors to memorise. I don't think this succeeded at doing much; it only made the subject seem more complicated than it needed to be; and I think it should all be abstracted away into autograd.
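For readers who do want to look inside the abstraction, the core of scalar autograd fits in a few lines. A toy version (hypothetical, in the spirit of micrograd; only `+`, `*`, and a naive recursive `backward` are supported):

```python
# Toy scalar autograd value, sketching what torch-style libraries abstract
# away. Each Value remembers its parents and the local derivative w.r.t.
# each parent; backward() applies the chain rule recursively, accumulating
# upstream gradients into .grad. (Naive recursion is correct here but would
# be slow on large graphs with heavy sharing.)

class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents    # Values this one was computed from
        self._grad_fns = grad_fns  # local derivative w.r.t. each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda: 1.0, lambda: 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda: other.data, lambda: self.data))

    def backward(self, upstream=1.0):
        self.grad += upstream
        for parent, local in zip(self._parents, self._grad_fns):
            parent.backward(upstream * local())

x = Value(2.0)
w = Value(3.0)
y = x * w + x   # dy/dx = w + 1 = 4, dy/dw = x = 2
y.backward()
print(x.grad, w.grad)  # 4.0 2.0
```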

  • bigdict 4 years ago

        import torch as scratch
        from scratch import nn

  • charcircuit 4 years ago

    He doesn't implement matrix operations, floating point addition / multiplication either.

    • godelski 4 years ago

      The difference is that autograd isn't something you should already know if you're learning neural networks. Many "from scratch" tutorials implement backprop because this is a key part. I think your comment is a bit facetious and you're not acting in good faith.

      • charcircuit 4 years ago

        I could definitely see an argument that knowing gradient descent is a requisite just as much as knowing matrix multiplication.

        Both of these mathematical concepts are being abstracted over and are being used by the author's neural network implementation.

        Now whether you consider using other people's libraries for this as being "from scratch" is up to you.

    • weberer 4 years ago

      I bet he didn't even create the universe.

srvmshr 4 years ago

Interesting find. Just FYI, this repo has been the OG for several years when it comes to building NNs from scratch:

https://github.com/eriklindernoren/ML-From-Scratch

sgdjgkeirj 4 years ago

For autograd from scratch, see https://github.com/karpathy/micrograd and/or

https://windowsontheory.org/2020/11/03/yet-another-backpropa...

bullen 4 years ago

I think NNs are going to be a challenge as complexity grows.

I'm trying to make mobs behave autonomously in my 3D action MMO.

The memory (depth) I would need for that to succeed and the processing power to do it in real-time is making my head spin.

Let's hope the Raspberry Pi 5 has some hardware to help with this.

At this point I'm probably going to have some state machine AI (think mobs in Minecraft; basically check range / view then target and loop) but instead of deterministic or purely random I'm going to add some NN randomness to the behaviour so that it can be interesting without just adding quantity (more mobs).

So the inputs would be the map topography and entities (mobs and players) and the output whether to engage or not, the backpropagation would be success rate I guess? Or am I thinking about this the wrong way?

I wonder what adding a _how_ to the same network after the _what_ would look like, probably a direction as output instead of just an entity id?
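As a rough sketch of what such a decision network might look like (all feature names, sizes, and weights below are made up for illustration), a tiny feedforward net can map hand-picked map/entity features to an engage probability. Note that training it against a "success rate" is a reinforcement-learning setup rather than plain supervised backpropagation, since there's no per-decision correct answer to compare against:

```python
# Hypothetical mob "policy" sketch: features in, engage probability out.
# Weights are random here; in practice they would be trained against a
# reward signal (e.g. engagement success rate).

import math
import random

random.seed(0)
N_FEATURES = 4  # e.g. distance to player, height difference, ally count, health
N_HIDDEN = 8

w1 = [[random.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in range(N_HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]

def engage_probability(features):
    # one hidden layer with tanh, then a sigmoid over a single output logit
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features))) for row in w1]
    logit = sum(w * h for w, h in zip(w2, hidden))
    return 1 / (1 + math.exp(-logit))

p = engage_probability([0.5, -0.2, 3.0, 0.9])
print(p)  # a value in (0, 1); the mob engages with this probability
```

Adding the _how_ would mean extra outputs, e.g. a direction vector alongside the engage decision.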

  • YeGoblynQueenne 4 years ago

    Have you tried using something more efficient and precise, for example a flocking algorithm?

    https://www.oreilly.com/library/view/ai-for-game/0596005555/...

    Neural nets and machine learning in general are good for problems whose solutions are hard to hand-code. If you can hand-craft a solution there's no real need for machine learning and it might simply take up resources you need more elsewhere.
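For reference, a flocking update really is only a few lines. A minimal 2D boids-style sketch with the three classic rules (the rule weights and thresholds here are arbitrary illustrations):

```python
# One boids-style flocking step in 2D: cohesion, separation, alignment.
# positions and velocities are lists of (x, y) tuples; needs >= 2 boids.

def flock_step(positions, velocities, dt=0.1):
    n = len(positions)
    new_vel = []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        # cohesion: steer toward the centroid of the other boids
        cx = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        vx += 0.01 * (cx - px)
        vy += 0.01 * (cy - py)
        # separation: push away from very close neighbours
        for j, (qx, qy) in enumerate(positions):
            if j != i and abs(qx - px) + abs(qy - py) < 0.5:
                vx += 0.05 * (px - qx)
                vy += 0.05 * (py - qy)
        # alignment: nudge velocity toward the flock's average heading
        ax = sum(v[0] for v in velocities) / n
        ay = sum(v[1] for v in velocities) / n
        vx += 0.01 * (ax - vx)
        vy += 0.01 * (ay - vy)
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Each boid only needs local arithmetic per step, so this scales much more cheaply than a learned policy per mob.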

    • bullen 4 years ago

      This is actually something I see as complementary, so the AI on the server has to decide the general direction and then on the client these flocking simulations can be applied for scale if you have the processing power.

      I'm thinking each group of mobs has a server-directed leader, whose position is server-deterministic, and then to save bandwidth the PvE minions and their movements can be generated on each client, just tracking when they are killed.

      I'm not scared of desyncing in the details. As long as the PvP stuff is coherent.

  • bsenftner 4 years ago

    Having experience writing crowd simulations in both games and VFX in film, I suggest you take a look at Massive, and perhaps read some of their documentation to learn how this successful implementation handles crowd simulations: https://www.massivesoftware.com/

adamiscool8 4 years ago

I remember doing this in PHP(4? 5?) for my undergrad capstone project because I had a looming due date and it was the dev environment I had readily available. No helpful libraries in that decade. Great way to really grok the material, and really lets me appreciate how spoiled we are today in the ML space.

gbersac 4 years ago

Interesting read, but there are a few things I haven't understood. In the training [function](https://colab.research.google.com/drive/1YRp9k_ORH4wZMqXLNkc...):

1- In the instruction `hidden_layer.data[index] -= learning_rate * hidden_layer.grad.data[index]`, where was the `hidden_layer.grad` value updated?

2- From what I've understood, we update `hidden_layer` along the slope of the error function (because we want to minimize it). But how are `error.backward()` and `hidden_layer.grad` connected?
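In PyTorch the connection is exactly that call: `error.backward()` walks the computation graph from `error` back to every tensor created with `requires_grad=True` and writes d(error)/d(tensor) into its `.grad` field, which the update line then reads. A pure-Python analogue of one such training step, with the gradient made explicit rather than computed by autograd (the names here are illustrative, not the notebook's code):

```python
# One torch-style training step, with the work of error.backward() written
# out by hand for a one-weight "layer" and a squared error.

def forward(weight, x):
    return weight * x  # the model: a single multiplicative weight

def compute_grad(weight, x, target):
    # what backward() does implicitly: d(error)/d(weight) via the chain rule
    prediction = forward(weight, x)
    error = (prediction - target) ** 2
    d_error_d_pred = 2 * (prediction - target)
    d_pred_d_weight = x
    return error, d_error_d_pred * d_pred_d_weight

weight, x, target, lr = 0.0, 1.5, 3.0, 0.1
for _ in range(100):
    error, grad = compute_grad(weight, x, target)  # "backward()" fills grad
    weight -= lr * grad                            # the update from question 1
print(weight)  # approaches target / x = 2.0
```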

parasdahal 4 years ago

For those interested in simple neural networks to CNN and RNNs implemented with just Numpy (including backprop):

https://github.com/parasdahal/deepnet

rg111 4 years ago

If you are interested in learning what makes a Deep Learning library, and want to code one, for learning experience, you should check out- Minitorch [0].

[0]: https://github.com/minitorch/

perfopt 4 years ago

How important is it to learn DNN/NN from scratch? I have several years of experience working in the tech industry and I am learning DNN for applying it in my domain. Also for hobby side projects.

I did the ML Coursera course by Andrew Ng a few years ago. I liked the material but I felt the course focused a little too much on the details and not enough on actual application.

Would learning DNN from a book like 1. https://www.manning.com/books/deep-learning-with-pytorch or 2. https://www.manning.com/books/deep-learning-with-python-seco... be a better approach for someone looking to learn concepts and application rather than the detailed mathematics behind it?

If yes, which of these two books (or alternative) would you recommend ?

zbendefy 4 years ago

Nice! I made a gpu accelerated backpropagation lib a while ago to learn about NNs, if you are interested check it out here: https://github.com/zbendefy/machine.academy

iamricks 4 years ago

I was disappointed when I realized this isn’t Sentdex’s NNFS. He makes really good content.

lindwhi 4 years ago

You definitely can't start neural networks without learning other concepts first.

shimonabi 4 years ago

I did this for my AI class. You can watch the result here: https://www.youtube.com/watch?v=w2x2t03xj2A
