Fyuse
fyu.se

The tech seems cool, but I'm not seeing what extra value I get over a scrubbable video. How are you using the spatial information to add value to the images?
It seems like they are registering points of interest between frames/images so that they can create a consistently scaled set of output images even if you, for example, took a step further away from the filmed object, and then interpolating between these roughly normalized frames.
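If that's roughly what's happening, a registration pass might look like the sketch below (OpenCV, with made-up filenames and parameter values, so purely illustrative, not their actual pipeline): match points of interest between a reference frame and each later frame, estimate a similarity transform, and warp the frame so the subject keeps a constant scale even if the camera stepped back.

```python
# Hypothetical sketch of per-frame registration / scale normalization.
import cv2
import numpy as np

def normalize_to_reference(reference, frame):
    """Warp `frame` so its scale/position roughly matches `reference`."""
    orb = cv2.ORB_create(1000)
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_cur, des_cur = orb.detectAndCompute(gray_cur, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_cur, des_ref), key=lambda m: m.distance)[:200]

    src = np.float32([kp_cur[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # A similarity transform (scale + rotation + translation) absorbs the
    # "took a step further away" case by rescaling the frame.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = reference.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))

# Placeholder filenames; register every frame against the first one.
frames = [cv2.imread(f"frame_{i:03d}.jpg") for i in range(67)]
normalized = [frames[0]] + [normalize_to_reference(frames[0], f) for f in frames[1:]]
```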
Indeed, the Fyuse "image" seems to be "simple" JavaScript that swipes between 67 frames (JPG images). It claims to fill in the gaps between pictures, but it would be interesting to see how the pictures were taken. Using both axes at the same time would be something cool, but it seems to work along only one axis.
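For comparison, scrubbing a fixed set of frames by dragging takes only a few lines; a rough Python analogue of that JavaScript viewer (placeholder filenames and window name, and none of the claimed gap-filling) could be:

```python
# Rough analogue of a drag-to-scrub frame viewer (the real thing is JavaScript in the browser).
import cv2

frames = [cv2.imread(f"frame_{i:03d}.jpg") for i in range(67)]  # placeholder filenames
width = frames[0].shape[1]
state = {"idx": 0}

def on_mouse(event, x, y, flags, param):
    # While the left button is held, the horizontal position picks the frame.
    if flags & cv2.EVENT_FLAG_LBUTTON:
        state["idx"] = max(0, min(len(frames) - 1, x * len(frames) // width))

cv2.namedWindow("scrub")
cv2.setMouseCallback("scrub", on_mouse)
while True:
    cv2.imshow("scrub", frames[state["idx"]])
    if cv2.waitKey(16) & 0xFF == 27:  # Esc to quit
        break
cv2.destroyAllWindows()
```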
How is this different from taking a normal video and letting the user navigate forward and back by dragging?