Show HN: Incremental learning of polynomials (and other LIP models) with IRMA

buschermoehle.org

24 points by MrBusch 5 years ago · 9 comments

MrBuschOP 5 years ago

I developed a new incremental learning approach (called IRMA) during my PhD in 2014 and haven't touched that research in a few years, but it has always been at the back of my mind as an approach worth following up on.

Now I've decided to make it a bit more approachable through an interactive tool that lets you play with a polynomial that learns incrementally from examples you provide. I've also included some background on how the method works.

Incremental learning (in contrast to batch learning) poses a unique set of problems, as the learning algorithm needs to adapt from just a single new example at a time. In contrast to the state of the art, IRMA does this by minimizing what it "forgets" about previously learned data while adapting to the new example. I chose polynomials as an example because they don't train well with the typically used gradient descent, but can be learned much more stably with IRMA.

The same approach has a closed-form solution for a variety of other models that are linear in the parameters (LIP), and I'd be interested to try applying it to more models (like neural networks) or other tasks (like classification) as well.
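(To make "closed-form incremental update for a LIP model" concrete, here is a minimal sketch using plain recursive least squares. Note this is standard RLS, not IRMA itself; IRMA's objective of minimizing forgetting differs, but the flavor of updating a polynomial from one example at a time in closed form, with no gradient descent, is the same.)

```python
# Sketch: recursive least squares (RLS) as an example of a closed-form,
# one-example-at-a-time update for a linear-in-parameters (LIP) model.
# NOTE: this is plain RLS, not IRMA -- shown only to illustrate the idea.
import numpy as np

def features(x, degree=2):
    """Polynomial feature vector [1, x, x^2, ...] -- a LIP model."""
    return np.array([x**d for d in range(degree + 1)])

class RLS:
    def __init__(self, dim, p0=1e6):
        self.w = np.zeros(dim)       # current parameter estimate
        self.P = np.eye(dim) * p0    # inverse-covariance-like matrix

    def update(self, phi, y):
        """Incorporate one new example (phi, y) in closed form."""
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)     # gain vector
        self.w += k * (y - phi @ self.w)  # correct by prediction error
        self.P -= np.outer(k, Pphi)       # shrink uncertainty

# Stream noiseless samples of y = 2 + 3x - x^2, one at a time.
model = RLS(dim=3)
for x in np.linspace(-1, 1, 50):
    model.update(features(x), 2 + 3*x - x**2)
```

After the stream, `model.w` recovers the coefficients (2, 3, -1) without ever revisiting past examples, which is the property that makes this family of updates attractive for incremental learning.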

I'm excited about any questions or feedback!

  • verdverm 5 years ago

    Interesting, I'll have to dig in some more. I have a similar story with Prioritized Grammar Enumeration (PGE) for Symbolic Regression. It's my PhD work that has been sidelined since 2015 and I've been thinking of resurrecting it.

    Nice work!

    • MrBuschOP 5 years ago

Nice, I have only limited experience with symbolic regression, but from what I gathered from the abstract of an ACM paper I found, I like the detour from the usual stochastic approach toward a deterministic directed search. Does that imply it could have problems with local optima, though?

      • verdverm 5 years ago

        There is a different sense of getting stuck, specific to PGE, but nothing like stochastic models. In PGE, the local search operators (those that expand equations, i.e. parse trees, by making small additions) can get into situations where:

        1. a pretty good equation has been found

        2. small modifications don't have much of an impact, so it stays good

        The solution I was thinking of is to do more bookkeeping and eliminate wasted work like this.

        In a sense, you don't really get stuck in the same way as GP / stochastic algos, because of the memoization. You always have to be trying new solutions (parse trees).

        Also wanted to explore DeepQ/RL for helping to guide the decisions of what to expand and where to expand it.

jjgreen 5 years ago

Reminiscent (in spirit) of [1], but there applied to the specific case of a histogram.

[1] https://ieeexplore.ieee.org/document/6971097

yorwba 5 years ago

I'm sure it's a nice visualization, but unfortunately I'm only seeing a white gap between the two sliders and tapping randomly doesn't seem to have any effect. I assume I should be seeing a polynomial graphed even before doing anything.

Tested on Android using both Firefox and Chrome.

  • MrBuschOP 5 years ago

    Sorry, I just tested it myself and it seems the link does not go to the correct https site. The full URL is https://www.buschermoehle.org/andreas/irma.htm

    Might also just be that the web server is not handling the traffic well. Some refreshes work, others fail to load the JavaScript.

    • MrBuschOP 5 years ago

      The problem should be fixed now; there was a race condition in the loading order of a JavaScript library.

saiojd 5 years ago

Nice work and great presentation.
