Ask HN: An idea for a video compression algorithm
Yesterday, over a nice bottle of wine, I had this idea for a video compression algo. I'm far from a media compression expert, so I'm hoping someone here can chime in.
It's well known that our brains make assumptions and predictions about what we see. The brain is said to have a kind of internal physics engine, capable of predicting an object's next position from its current position and motion (speed, acceleration, etc.), so it's kind of "predicting the future". This is all done to offload the brain from processing so many "useless" frames.
If our brain does that, why can't we create a similar video compression scheme? For instance, say we have 30 frames per second to encode: we could come up with a model where the encoder keeps only a few key frames, and the decoder predicts the skipped ones from some encoded physics metadata (velocities, accelerations, and so on).
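To make the idea concrete, here's a toy sketch (all names and the frame representation are made up for illustration): the encoder keeps every Nth frame plus per-object velocities as the "physics metadata", and the decoder linearly extrapolates the missing frames.

```python
def encode(frames, keep_every=5):
    """Keep every Nth frame; derive per-object velocity as metadata.

    Each frame is a dict mapping object id -> (x, y) position.
    """
    key_frames = frames[::keep_every]
    metadata = []
    for a, b in zip(key_frames, key_frames[1:]):
        # Velocity between consecutive key frames, in units of
        # position change per original frame.
        vel = {
            obj: ((b[obj][0] - a[obj][0]) / keep_every,
                  (b[obj][1] - a[obj][1]) / keep_every)
            for obj in a
        }
        metadata.append(vel)
    return key_frames, metadata, keep_every

def decode(key_frames, metadata, keep_every):
    """Rebuild the full sequence by extrapolating from each key frame."""
    frames = []
    for key, vel in zip(key_frames[:-1], metadata):
        for t in range(keep_every):
            frames.append({
                obj: (key[obj][0] + vel[obj][0] * t,
                      key[obj][1] + vel[obj][1] * t)
                for obj in key
            })
    frames.append(key_frames[-1])
    return frames
```

An object moving at constant velocity reconstructs exactly; anything that accelerates between key frames comes back wrong, which is presumably why a real codec would also transmit a residual to correct the prediction.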
Does that make any sense?
I did some research and found this paper (from 2002):
http://ieeexplore.ieee.org/document/1000006
So it seems likely there are already video encoders using this approach; as far as I can tell, motion-compensated prediction (I/P/B frames) in codecs like MPEG and H.264 is in the same spirit.