Sparse Matrices (2019)

matteding.github.io

120 points by tremguy 5 years ago · 20 comments

Const-me 5 years ago

Another useful representation for them is B+ tree https://en.wikipedia.org/wiki/B%2B_tree

It can be a single tree for the whole matrix, where keys are (row, column) integer tuples. Or one tree per row, where each key is a single integer: the column index.

Unlike CSR, inserting elements is O(log n). RAM overhead is larger than CSR's but still reasonable, much smaller than hash maps or red-black trees would need. Similar to CSR, reading rows is mostly sequential RAM access, i.e. pretty fast.
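Python's standard library has no B+ tree, but the one-tree-per-row idea can be sketched with one sorted structure per row via `bisect` (an illustrative approximation: a real B+ tree gives O(log n) inserts, while a Python list insert costs O(nnz per row); lookups and in-order row scans behave similarly):

```python
import bisect

class SortedRowMatrix:
    """One ordered structure per row, approximating the one-tree-per-row idea."""

    def __init__(self, nrows):
        self.cols = [[] for _ in range(nrows)]   # sorted column indices per row
        self.vals = [[] for _ in range(nrows)]   # values aligned with self.cols

    def set(self, i, j, v):
        cols, vals = self.cols[i], self.vals[i]
        k = bisect.bisect_left(cols, j)
        if k < len(cols) and cols[k] == j:
            vals[k] = v                          # overwrite existing entry
        else:
            cols.insert(k, j)                    # keep column order sorted
            vals.insert(k, v)

    def row(self, i):
        # Sequential, ordered scan of one row's nonzeros.
        return list(zip(self.cols[i], self.vals[i]))

m = SortedRowMatrix(3)
m.set(1, 4, 2.0)
m.set(1, 0, 1.0)
print(m.row(1))  # [(0, 1.0), (4, 2.0)]
```

Since each row is an independent object, different threads can build different rows in parallel, as described above.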

  • enriquto 5 years ago

    But how often do you want to "insert" elements into a sparse matrix? It seems like a very strange thing to do; I cannot imagine a situation where I would need that (during the lifetime of a sparse matrix).

    • Const-me 5 years ago

      I’m not saying B+ tree is the best structure for them. Just another one to consider. Which one is the best depends on use case.

      I used these trees in the code that builds large sparse matrices from something else. The domain was CAM/CAE, it was finite elements. As a nice side effect, with one tree per row I was able to parallelize the construction of that matrix, as different CPU cores can update different rows of the matrix in parallel.

      • enriquto 5 years ago

        Sure, B+ trees are cool. I was just wondering about the use case for element insertion (except maybe for the construction of the matrix). Most of my time is spent in algebraic operations with immutable matrices.

        Regarding the construction of the matrix, instead of insertions very often you can do it with tricks using kronecker products and the like.

        • Const-me 5 years ago

          > except maybe for the construction of the matrix

          Yep, that’s precisely why I used that thing.

          > Most of my time is spent in algebraic operations with immutable matrices.

          CSR is probably the way to go then.

          Also, if you do that math on CPU and your matrix is not randomly sparse but made of small dense areas of non-zeroes, consider a less compressed, SIMD-friendly variant of CSR, where instead of individual elements you keep dense SIMD vectors.

          For example, if you have AVX and need fp64 precision, keep 4-element slices of rows there. This way you can use 256-bit loads and FMA instructions to do that math.

          Or if you spend most of your time multiplying matrices, as opposed to computing matrix*vector products, it might be reasonable to keep 2x2 dense blocks instead of 4x1 slices.
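For reference, a minimal plain-Python sketch of the CSR matrix-vector product discussed above (the block/SIMD variants store short dense slices in `data` instead of scalars, but the row traversal is the same):

```python
def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a matrix in CSR form (indptr, indices, data)."""
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        # Nonzeros of row i live in data[indptr[i]:indptr[i+1]],
        # so each row is a sequential scan over memory.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]]
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```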

    • tobmlt 5 years ago

      Finite element assembly is one I see daily. There are many times in computational mechanics, and also in mesh manipulation (graphics programming, shape design, etc.), when inserting into a sparse matrix is useful.

    • amelius 5 years ago

      How about the PageRank algorithm, where your crawler detects new links and you need to insert new weights into the matrix from which you then later compute page ranks?

      • im3w1l 5 years ago

        Wouldn't it be easier to first do your crawling and then construct the matrix in one go? But graphs more broadly do sound like an area where you might want to update the matrix, yeah.

        • amelius 5 years ago

          Well, one exception could be that some parts of the web need to be crawled more often than others.

          But tbh, I think incremental updating of a matrix only makes sense if the other parts of your pipeline are also incrementalized.

layoutIfNeeded 5 years ago

“If the ratio of Number of Non-Zero (NNZ) elements to the size is less than 0.5, the matrix is sparse. While this is the mathematical definition, I will be using the term sparse for matrices with only NNZ elements and dense for matrices with all elements.”

No, this is not the mathematical definition. An n-by-n matrix is usually considered sparse if the number of nonzero elements is O(n). Which means that the ratio of nonzero elements goes to zero as the matrix grows.
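A quick illustration of that definition: a tridiagonal n-by-n matrix has nnz = 3n - 2, which is O(n), so its density (3n - 2) / n² shrinks toward zero as the matrix grows:

```python
# Density of a tridiagonal n-by-n matrix: (3n - 2) / n^2 -> 0 as n grows.
for n in [10, 100, 1000]:
    nnz = 3 * n - 2
    print(n, nnz / (n * n))  # 0.28, then 0.0298, then 0.002998
```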

mihevc 5 years ago

Nice post! For completeness: Apache Arrow is also adding sparse tensors in c++ [1] and wrapping them with python [2] (although the documentation for Python might be a bit lacking at the moment).

[1] https://arrow.apache.org/docs/cpp/api/tensor.html?highlight=...

[2] https://github.com/apache/arrow/blob/master/python/pyarrow/t...

eggie5 5 years ago

I've gotten this question in ML eng interviews three times: implement a sparse vector and the corresponding dot product op. I always go for the handy DOK (dictionary of keys) method.
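A minimal sketch of the DOK answer, with each sparse vector as an {index: value} dict:

```python
def dok_dot(a, b):
    """Dot product of two sparse vectors stored as {index: value} dicts (DOK).

    Iterating over the smaller dict keeps the work proportional to
    min(nnz(a), nnz(b)) rather than the full vector length.
    """
    if len(a) > len(b):
        a, b = b, a
    return sum(v * b[i] for i, v in a.items() if i in b)

u = {0: 1.0, 3: 2.0, 7: 4.0}
v = {3: 5.0, 7: 1.0, 9: 2.0}
print(dok_dot(u, v))  # 2*5 + 4*1 = 14.0
```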

  • teacpde 5 years ago

    I got the same question in a non-ML interview (although the interviewer works on ML), and DOK is the solution I naturally thought of. But that wasn't enough: the interviewer was looking for LIL, for less overhead and type uniformity.
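One reading of the list-based answer (parallel sorted index/value lists per vector, so each list holds a single element type; the sorted order allows a linear two-pointer merge for the dot product):

```python
def list_dot(idx_a, val_a, idx_b, val_b):
    """Dot product of sparse vectors as parallel sorted (indices, values) lists.

    Two homogeneous arrays per vector avoid per-entry dict overhead; the
    sorted indices allow a linear two-pointer merge instead of hashing.
    """
    i = j = 0
    total = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:       # indices match: accumulate product
            total += val_a[i] * val_b[j]
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:      # advance whichever index lags behind
            i += 1
        else:
            j += 1
    return total

print(list_dot([0, 3, 7], [1.0, 2.0, 4.0], [3, 7, 9], [5.0, 1.0, 2.0]))  # 14.0
```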

  • rmelhem 5 years ago

    How could one prepare for such interviews? I know Google is the place to start, but if you could point to some sources I would really appreciate it!

    • im3w1l 5 years ago

      I think you are missing one of the two pieces the question builds on: linear algebra (do you know how a dot product is defined?) or data structures, including their complexity. You would prepare by reading up on whichever of these two you are missing.

      If you know how it's supposed to work in theory but struggle to bang it out, you could do e.g. LeetCode.

golovashkin 5 years ago

For a decent introduction to sparse matrix computations, see Tim Davis's lectures:

https://www.youtube.com/watch?v=1dGRTOwBkQs

mihevc 5 years ago

Yet another format is CSF (Compressed Sparse Fiber): http://glaros.dtc.umn.edu/gkhome/node/1177

It's a generalized CSR/CSC (to ndim > 2) format that uses a tree data structure where each path from root to leaf encodes a nonzero element.
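An illustrative sketch of that tree structure for a 3-dimensional tensor, using nested dicts (real CSF implementations use flat pointer/index arrays per level, analogous to CSR's indptr/indices, rather than dicts):

```python
def build_csf(coords_vals):
    """Build a CSF-like nested tree from ((i, j, k), value) entries.

    Each root-to-leaf path spells out one nonzero's coordinates; shared
    prefixes (a common i, or a common (i, j)) are stored once, which is
    what generalizes CSR/CSC-style compression to ndim > 2.
    """
    tree = {}
    for (i, j, k), v in coords_vals:
        tree.setdefault(i, {}).setdefault(j, {})[k] = v
    return tree

t = build_csf([((0, 1, 2), 5.0), ((0, 1, 3), 6.0), ((2, 0, 0), 7.0)])
print(t)  # {0: {1: {2: 5.0, 3: 6.0}}, 2: {0: {0: 7.0}}}
```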

mindv0rtex 5 years ago

Not mentioned in the article, the ELL format is very efficient for banded matrices.
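A small plain-Python sketch of an ELL matrix-vector product: every row is padded to the same nonzero count, and the resulting fixed row width is what makes the format regular (and GPU/SIMD friendly) for banded matrices. The padding sentinel `-1` here is an illustrative convention.

```python
def ell_matvec(indices, data, x, pad=-1):
    """y = A @ x in ELL format: every row padded to the same width.

    indices[i][s] / data[i][s] hold the s-th stored entry of row i;
    padding slots use column index `pad` with value 0.
    """
    y = []
    for row_idx, row_val in zip(indices, data):
        acc = 0.0
        for j, v in zip(row_idx, row_val):
            if j != pad:               # skip padding slots
                acc += v * x[j]
        y.append(acc)
    return y

# Tridiagonal 3x3 matrix [[2, 1, 0], [1, 2, 1], [0, 1, 2]], row width 3
indices = [[0, 1, -1], [0, 1, 2], [1, 2, -1]]
data = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [1.0, 2.0, 0.0]]
print(ell_matvec(indices, data, [1.0, 1.0, 1.0]))  # [3.0, 4.0, 3.0]
```

For a banded matrix the per-row nonzero counts are nearly equal, so almost no padding is wasted, which is why ELL works well there.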
