Dyson 360 Eye robot vacuum
dyson360eye.com

I worked on this product back in 2002-2005. I developed the initial wheel system for this form factor of product, and early prototypes of their digital motor once we realised we'd need a very high power density compressor motor. Sadly, battery technology has taken a long time to get to the point where it can give a usable run time, even though the vacuum only consumes 100W. The prices of other components (high-power embedded CPUs, cameras and sensors) have also dropped dramatically since then. It uses an intelligent algorithm to make the most of the runtime, meaning it tries to eliminate running over the same patch of floor more than once. This is what it uses the 360° camera for, along with SLAM image processing that I still don't fully understand :) The chap with grey hair switching it off at the end of the teaser video, Mike Aldred, is the brains behind all the navigation and image processing software; a very clever guy.
Simultaneous localization and mapping (SLAM) refers to a family of algorithms that estimate egomotion (how much a robot has moved through its environment) from sensor data while building a map of that environment at the same time.
Essentially, at each time step the algorithm senses its environment and compares it to the previous time step: any new features it sees are added to the map, and the amount that the features correlated between the two time steps have moved is used to infer egomotion. This doesn't necessarily have to be done with cameras; it can also be done with laser rangefinders and other relatively accurate sensors.
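To make that concrete, here's a rough sketch of the frame-to-frame matching idea using OpenCV. This is just an illustration of the general technique, not Dyson's pipeline; the file names and the camera matrix K are invented for the example.

    # Sketch: estimate camera motion between two frames from matched features.
    # Illustrative only; file names and intrinsics K are made up.
    import cv2
    import numpy as np

    K = np.array([[500., 0., 320.],    # hypothetical camera intrinsics
                  [0., 500., 240.],
                  [0., 0., 1.]])

    prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe features in each frame.
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)

    # Correlate features between the two time steps.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative camera motion; recoverPose
    # extracts a rotation R and a translation direction t from it.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    # Note: with one camera, t is only a direction; the absolute scale is
    # unobservable, which is part of what makes monocular SLAM hard.
    print("rotation:\n", R, "\ntranslation direction:\n", t.ravel())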
Monocular SLAM (MonoSLAM, also the name of a well-known paper) is SLAM done with a single camera, which makes the problem harder than with two. With two cameras affixed to a rigid frame, with known characteristics, it's possible to determine the 3D position of any feature seen by both cameras at the same time. With a single camera, however, it's trickier: only the bearing (angle) to a given feature can be determined, not its 3D position, so an optimization step has to be done to determine the likeliest solution to the problem.
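To show the two-camera case concretely, here's a small numpy toy example (the baseline, intrinsics and point are all made up): with two calibrated views, a linear (DLT) triangulation recovers a feature's 3D position directly, whereas a single view only constrains the feature to lie somewhere along a ray.

    # Toy stereo triangulation; the matrices, baseline and point below
    # are all invented for the example.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                # homogeneous to Euclidean

    K = np.array([[500., 0., 320.],
                  [0., 500., 240.],
                  [0., 0., 1.]])
    # Camera 1 at the origin; camera 2 offset 0.1 m along x on a rigid frame.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

    X_true = np.array([0.3, -0.2, 2.0])    # a point 2 m in front
    h = np.append(X_true, 1.0)
    x1 = (P1 @ h)[:2] / (P1 @ h)[2]        # pixel coordinates in view 1
    x2 = (P2 @ h)[:2] / (P2 @ h)[2]        # pixel coordinates in view 2

    print(triangulate(P1, P2, x1, x2))     # recovers ~[0.3, -0.2, 2.0]
    # With x1 alone, every point on the ray through the camera centre and
    # the pixel fits equally well, hence the extra optimization step in
    # monocular SLAM.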
There's also more to read on the relevant Wikipedia article, at http://en.wikipedia.org/wiki/Simultaneous_localization_and_m...
OK so my understanding is pretty much the gist of that. See how you've moved by comparing features extracted from a series of images taken over time. I just don't understand the maths :)
The reason we went with a single camera is lack of space. As you can see from some of the imagery of the product, the camera stack takes up a huge proportion of the machine. Also, when the algorithms were being developed in the early 2000s, cameras were still expensive bits of kit. I seem to remember the first one being 1024x1024 resolution: pretty poor for photography, but good enough for feature mapping with SLAM.
There's a video? Can you post a link? The navigation on that page is horrible, and all I'm seeing is the same front-on image of the robot vacuum. Strangely, if I go to the Australian site https://www.dyson360eye.com/en-AU, I get a different image, but it still only shows the one.
I'd like to see the video and learn more.
Plenty here: https://www.youtube.com/user/dysonteam
I'm seeing the markup language, not the actual text... FF 32 on Win7. I.e., the whole site is filled with:
[en-gb|Vision_headline] [en-gb|Vision_subhead]
[en-gb|Vision_body]
Nice video showing how its system works: https://www.youtube.com/watch?feature=player_embedded&v=oguK...
Neat. I wonder how it will compare to the Roomba. I own a Roomba 770 and am pretty happy with it so far. I am assuming the filter is probably going to be better on the Dyson.
The 360 is supposed to come out in 2015, and only in Japan at first, so no regrets about purchasing the Roomba :)
Wow, I really want that Mac Pro, I mean vacuum (http://www.apple.com/mac-pro/)