
Lecture Notes on Randomized Linear Algebra (2013)

arxiv.org

163 points by iamaaditya 9 years ago · 18 comments


rajasinghe 9 years ago

This stuff is incredibly useful when dealing with large matrices. The idea is that an n-by-n matrix often doesn't contain n^2 pieces of independent information, but can be written as a product of matrices of size at most n-by-r (for r << n). A famous example of this is the Netflix recommendation matrix. In such cases, you can often avoid O(n^2) complexity by dealing only with these low-rank approximations.
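A minimal numpy sketch of the idea (my own toy example, not from the notes): build a rank-r matrix, capture its range with a thin random test matrix, and recover the whole matrix from that small sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# An n-by-n matrix with rank r << n: it looks big, but it carries far
# fewer than n^2 independent numbers.
n, r = 500, 10
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Randomized range finder: multiply A by a thin Gaussian test matrix,
# orthonormalize the result, and project A onto that subspace.
k = 15  # slight oversampling beyond the (assumed known) rank
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
A_approx = Q @ (Q.T @ A)

# Since rank(A) <= k, the projection recovers A up to rounding error.
print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```

This is only the basic range finder; practical pipelines add power iterations when the singular values decay slowly.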

It should be noted that this overview dates from 2013 and that a lot of new results have appeared since then. The author gives some good references in the abstract.

  • apathy 9 years ago

    Sparse x low-rank: if you collected every cell in the matrix, you paid too much for your sensor :-)

    Emmanuel Candes' lectures on compressed sensing changed my life.

  • sytelus 9 years ago

    Is this a more formal treatment of the algorithm that Simon Funk gave?

    http://sifter.org/~simon/journal/20061211.html

    • jey 9 years ago

      That's more closely related to 'stochastic gradient descent' for 'matrix completion'. The key difference is that Simon Funk's algorithm doesn't treat missing entries as zeros, whereas applying linear-algebra techniques directly to the observed data matrix (formed by putting zeros in the unobserved entries) would predict the missing entries as exactly zero.

      Also related is the 'alternating least squares' algorithm.
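      A toy sketch of the Funk-style approach (my own, with made-up sizes, not Funk's actual code): factor the ratings matrix by SGD over only the observed entries, so the unobserved ones are never forced toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth low-rank "ratings" matrix, mostly unobserved.
n_users, n_items, r = 50, 40, 3
true = rng.standard_normal((n_users, r)) @ rng.standard_normal((r, n_items))
mask = rng.random((n_users, n_items)) < 0.3  # only 30% of entries observed

# SGD on the observed entries only: missing cells never enter the loss.
U = 0.1 * rng.standard_normal((n_users, r))
V = 0.1 * rng.standard_normal((n_items, r))
lr, reg = 0.02, 0.01
obs = list(zip(*np.nonzero(mask)))
for epoch in range(300):
    for i, j in obs:
        err = true[i, j] - U[i] @ V[j]
        U[i], V[j] = (U[i] + lr * (err * V[j] - reg * U[i]),
                      V[j] + lr * (err * U[i] - reg * V[j]))

pred = U @ V.T
rmse = np.sqrt(np.mean((pred[~mask] - true[~mask]) ** 2))
# Held-out RMSE vs. the predict-zero baseline the comment warns about:
print(rmse, np.sqrt(np.mean(true[~mask] ** 2)))
```

Alternating least squares replaces the inner SGD loop with closed-form solves for U and V in turn.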

  • dang 9 years ago

    Great comment! We put 2013 in the title above.

jey 9 years ago

Related but different:

Foundations of Data Science by Avrim Blum, John Hopcroft and Ravindran Kannan: https://www.cs.cornell.edu/jeh/book2016June9.pdf

  • putin 9 years ago

    Do you know what the prerequisites are for high-dimensional geometry? Any additional resources? From the looks of it, the subject seems to require knowledge of some measure theory and functional analysis, i.e., advanced undergrad/grad-level math. The threshold for entry here seems very steep (at least for high-dimensional geometry).

    • ianai 9 years ago

      I'd say it's about a minor in mathematics and one or two semesters of very directed study.

shas3 9 years ago

This is a very good and timely compilation of all the important topics!

I ask this earnest question because I have a deep interest in randomized linear algebra, random projections, 'sketching'/sampling, compressive sensing, etc.:

Do any of you use it in industry applications? If so, at a high level, how do you use it?

I know I'm asking an "I have a hammer and that is a nail"-type question, but I am interested in seeing "deployable" applications of these topics. I don't have any to report, other than academic ones.

  • sshekh 9 years ago

    Compressive Sensing is widely used in imaging.

    https://en.wikipedia.org/wiki/Compressed_sensing#Application...

    • lp251 9 years ago

      Not a great example. Compressive imaging is (was?) a hot research area but hasn't made the transition to industry. I can think of only a single commercial product that relies on compressive sensing.

      • JustFinishedBSG 9 years ago

        Don't most lossy codecs make use of compressed sensing?

        • lp251 9 years ago

          It's tricky. You need to define what "compressed sensing" is. Of course, for 10 years, anything with a hint of sparsity has had the "compressed sensing" buzzword added, even though these ideas are decades old.

          Many lossy codecs exploit the fact that the signals of interest (audio, video, images, whatever) are sparse when viewed in some transform domain, such as Fourier or wavelet. We apply the transformation and retain only the largest coefficients; these get quantized and transmitted, and then we use the inverse transformation to reconstruct our signal. The 'loss' comes from the thresholding/quantization procedure. It's certainly sparsity-driven, but I wouldn't call it "compressed sensing".
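          That codec pipeline in miniature (my own toy example, using scipy's DCT): transform, keep the largest coefficients, invert. Note that every sample of the signal is measured before any compression happens, which is exactly why it isn't compressed sensing.

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth 1-D "signal": sparse in the DCT domain, like the audio and
# images that lossy codecs target.
t = np.linspace(0, 1, 256)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# Codec-style compression: transform, keep the 10 biggest of 256
# coefficients, inverse-transform. The 'loss' is the thresholding.
c = dct(x, norm='ortho')
keep = np.argsort(np.abs(c))[-10:]
c_thresh = np.zeros_like(c)
c_thresh[keep] = c[keep]
x_rec = idct(c_thresh, norm='ortho')

# Small relative error despite discarding ~96% of the coefficients.
print(np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```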

          "Compressed sensing" should mean "stable reconstruction of my signal using data that is acquired at an optimal rate". The "optimal rate" is roughly proportional to the sparsity of the signal.

          Check out the first section of [1].

          [1] http://vhosts.eecs.umich.edu/ssp2012//bresler.pdf

        • nullc 9 years ago

          No.

          Do they make use of the fact that smooth signals are sparse in DCT domain? Sure. But this has been true long before compressed sensing was a thing.

          AFAIK the specific techniques of compressed sensing have not made their way into industry at all.

          Not that they couldn't be applied, I'm fond of "Spatial Sparsity-Induced Prediction for Images and Video: A Simple Way to Reject Structured Interference".

          (And, in my view it seems that compressed sensing almost completely diverted academic attention away from techniques that would be useful for signal compression in industry; maybe with a couple more orders of magnitude improvement in computing power the common techniques in that space will become more useful for compression.)

    • shas3 9 years ago

      I'm reasonably well plugged into that community. All the applications there are 'academic' applications. There are efforts to build hardware based on this (startups, etc.). However, the translation is still slow, and nowhere near as fast as what we saw with deep learning (comparing apples to apples, with 'apples' defined as step-change jumps in research).

      • lp251 9 years ago

        How many startups are operating in the CS imaging area? I only know of InView. Rambus likes to sell their lensless sensor as "compressive", but it doesn't really fit.

      • sjg007 9 years ago

        I mean you should be able to stream deep learning applications.
