Robot will beam live Moon pictures to Oculus users

bbc.co.uk

72 points by charlesmarshall 11 years ago · 25 comments

c0nc3rn3ng1n33r 11 years ago

I work on that robot. There has never been a serious attempt to use an Oculus to control it. Also, Daniel did not build any part of that robot and is trying to take credit for other people's work. The BBC refuses to contact the Robotics Institute at CMU or Astrobotic to verify any details in the story.

  • teamonkey 11 years ago

    Not being open to receiving complaints about inaccuracies is against the IPSO Editor's Code of Practice.

      Clause 2 Opportunity to reply
      
      A fair opportunity for reply to inaccuracies must be given when reasonably called for.
    
    You can file a complaint here: https://www.ipso.co.uk/
  • blktiger 11 years ago

    I was going to say that using an Oculus to move the robot on the moon would be a bad idea because of latency. Good to know that it's just shoddy reporting.

  • bsimpson 11 years ago

    Will the feeds be available to consume with a Google Cardboard?

    • c0nc3rn3ng1n33r 11 years ago

      Calculate the bandwidth required to send two high definition streams from the moon. Add the bandwidth required for telemetry from the rover and lander. See what communication speeds are available to and from the moon.

      Also, try using an Oculus with seconds of latency and see how pleasant it is.
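A rough back-of-envelope illustrates the point. The figures below (per-stream bitrate, telemetry rate, available downlink) are illustrative assumptions, not actual mission numbers:

```python
# Back-of-envelope: can a lunar downlink carry two live HD streams?
# All figures here are illustrative assumptions, not mission specs.
HD_STREAM_MBPS = 5.0    # ~1080p H.264, per eye
TELEMETRY_MBPS = 0.5    # rover + lander housekeeping data
DOWNLINK_MBPS = 10.0    # generous assumed lunar downlink rate

required = 2 * HD_STREAM_MBPS + TELEMETRY_MBPS
print(f"Required: {required} Mbps, available: {DOWNLINK_MBPS} Mbps")
print("Fits" if required <= DOWNLINK_MBPS else "Does not fit at these rates")
```

Even with a generous assumed downlink, two HD streams plus telemetry overrun the budget, which is the commenter's point.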

      The real point is that Daniel Shafrir borrowed that robot and a video camera and proceeded to take credit for years of work by a team of students and staff at CMU and somehow got a BBC reporter to publish his claims without verifying any information.

      • anigbrowl 11 years ago

        That seems perilously close to academic fraud, although there's also the possibility that an ignorant reporter mistook speculative commentary for specific intentions.

robin_reala 11 years ago

Presumably they’ve got a 360° camera module that’ll be proxied through an Earth-local server, rather than actually sending movement controls to a stereo camera on the lander? If nothing else, the multi-second latency would be nauseating.

  • raimondious 11 years ago

    I've thought about this a bit, but how can you achieve 360° stereo? If you have two 360° cameras side by side, as soon as you look 90° L/R, you no longer have depth and one eye would just see the side of the other camera. If you just have one 360° camera, you don't have any depth.

    • electrograv 11 years ago

      You could build a 360 degree 3D camera that works for any viewing configuration, but it would involve nontrivial image processing and wouldn't be totally free from small visual glitches/artifacts in some cases.

      1. Build an array of a few (let's say 8 or so) cameras pointing outward from the center.

      2. Use a stereo matching algorithm to extract a depth map from the perspective of each camera. Keeping track of the position and orientation of each camera in 3D space, these depths become a point cloud associated with each camera.

      3. Determine the 3D location and orientation of each "eye" you want to render, then render all point clouds in 3D space to reconstruct a "reprojected" version of the scene from any desired viewpoint. Of course, the farther the eyes deviate from actual camera locations the more stretched/warped the image will appear, but that won't matter much as long as you keep the eye coordinates within the physical space occupied by the camera.

      Honestly, I'd be kind of disappointed if CMU doesn't actually try this. It's discouraging to think that perhaps all this buzz about "hackathons" (as the article mentions) is encouraging -- even at major research universities -- quickly slapping together components to make something kind of work, as opposed to fundamental algorithm development and properly engineered solutions.
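Steps 2 and 3 above can be sketched in a few lines. This is a minimal illustration with a toy pinhole camera model; all rig parameters (intrinsics, image size, depth values) are made up:

```python
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project a depth map (H x W) into world-space 3D points.
    K: 3x3 intrinsics; R, t: camera-to-world rotation and translation."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix              # rays at unit depth
    pts_cam = rays * depth.reshape(1, -1)      # scale rays by depth
    return (R @ pts_cam + t.reshape(3, 1)).T   # into world coordinates

def project_to_eye(points, K, R, t):
    """Project world points into a virtual 'eye' camera; returns pixel coords."""
    pts_cam = R.T @ (points.T - t.reshape(3, 1))  # world -> eye camera frame
    pix = K @ pts_cam
    return (pix[:2] / pix[2]).T                   # perspective divide

# Toy example: one camera, flat depth, eye placed at the camera's own pose,
# so every point should reproject to its original pixel.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
depth = np.full((64, 64), 2.0)
world_pts = depth_to_points(depth, K, np.eye(3), np.zeros(3))
reproj = project_to_eye(world_pts, K, np.eye(3), np.zeros(3))
```

A real rig would render the merged point clouds with splatting or meshing to fill holes; this just shows the back-project/reproject round trip at the heart of steps 2 and 3.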

      • btown 11 years ago

        If you pair it with a laser scanner (which won't have any atmospheric interference to deal with on the moon!) you can get a near-perfect depth map without needing to do stereo correspondence (whose accuracy can be thrown off if there are identical features in nearby points - insert joke about all moon rocks looking the same). You could even throw a still DSLR with a telephoto lens on a perfectly calibrated, positionable mount, aim it at rocks in sequence, and get high-resolution textures. For bonus accuracy, have it so that the DSLR mount and the laser-scanner mount can swap positions, so you're getting the textures from the exact same position as the scanner.

        Send the depth map and textures back to Earth in non-real-time. Program the robot to autonomously explore regions the depth map hasn't covered. Then on the client side, just feed those textures into a gaming engine or render farm. Bam. That would be a scalable Moon Explorer.

      • acgourley 11 years ago

        That's what I was hoping to see as well. Doing it on the moon is oddly enough probably easier than many earth locations due to the lack of complex foreground objects.

    • mattnewport 11 years ago

      Current approaches are a bit of a hack. You have multiple stereo camera pairs and to handle intermediate directions you do some software interpolation between them. Despite being a hack it works well enough to be reasonably immersive. The Samsung Gear VR comes with a collection of 360 degree panoramic photos and some 3D videos using the same approach. John Carmack talks about it a bit in his Oculus Connect keynote.

      This approach is obviously wrong if you tilt your head but people don't generally do that enough to notice.
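The interpolation scheme described could look something like this hypothetical sketch; the pair count and spacing are assumptions for illustration, not the Gear VR's actual parameters:

```python
def blend_weights(yaw_deg, num_pairs=4):
    """For stereo camera pairs spaced evenly around 360 degrees, return
    the two nearest pairs and their blend weights for a view direction."""
    spacing = 360 / num_pairs
    idx = int(yaw_deg % 360 // spacing) % num_pairs
    frac = (yaw_deg % spacing) / spacing   # 0 = exactly on pair idx
    return (idx, 1 - frac), ((idx + 1) % num_pairs, frac)

# Looking exactly at pair 0, then halfway between pairs 0 and 1:
print(blend_weights(0))    # ((0, 1.0), (1, 0.0))
print(blend_weights(45))   # ((0, 0.5), (1, 0.5))
```

Cross-fading between the two nearest pairs is the "hack": it avoids any geometric reconstruction, at the cost of mild warping away from the captured directions.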

  • DanAndersen 11 years ago

    That's what I would think, but the image in the article looks more like it's two directional cameras on a tilt-and-rotate assembly. Additionally, the article mentions the need to stream two live camera feeds, suggesting it's not a single 360 camera. I'm also concerned about the latency causing nausea.

  • philo23 11 years ago

    Exactly this. It's a really great idea, but a delay of just a few hundred milliseconds is enough to make even someone who uses the Oculus Rift frequently feel motion sick. Viewing static images (for example, loading screens in games) is enough to make me personally feel nauseous, so I can't imagine how the delay between the Earth and the Moon (a second or two?) would make this feel.
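For reference, the physical floor on that delay is set by the speed of light; encoding, relay hops, and processing only add to it:

```python
# One-way light delay Earth -> Moon at the mean Earth-Moon distance.
MOON_DISTANCE_KM = 384_400
LIGHT_SPEED_KM_S = 299_792

one_way = MOON_DISTANCE_KM / LIGHT_SPEED_KM_S
round_trip = 2 * one_way
print(f"One-way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
```

So any scheme where head movement commands the remote camera would see at least ~2.6 s of motion-to-photon delay, squarely in nausea territory.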

  • psteynza 11 years ago

    Really guys? It's so easy. Stream to an Earth-local server with an x-minute delay, and let Oculus users interact with the local feed. No more latency. Really not difficult.

    • DanAndersen 11 years ago

      That works only if the system is always capturing 360 degree video, and Rift users are using their own head-tracking to select which portion of the spherical view they're seeing. If they're actually using the head-tracking to send movement commands to the lunar camera, then the latency is unavoidable.
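The local-viewport case can be sketched simply: the (delayed) 360-degree frame already sits on the Earth-side server, and only locally-tracked head orientation selects which pixels are shown. The function name and frame size below are illustrative assumptions:

```python
def view_center(yaw_deg, pitch_deg, frame_w=3840, frame_h=1920):
    """Map locally-tracked head yaw/pitch to the pixel at the center of
    the viewport in an equirectangular frame (yaw 0 = left edge of frame,
    pitch +90 = top). No network round trip is involved."""
    x = int((yaw_deg % 360) / 360 * frame_w) % frame_w
    y = int((90 - pitch_deg) / 180 * frame_h)
    return x, y

print(view_center(0, 0))    # straight ahead
print(view_center(90, 0))   # 90 degrees to the right
```

Head rotation stays latency-free under this scheme; only the scene content itself lags by the transmission delay, which is far less nauseating than lagged head tracking.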

    • albertsimpson 11 years ago

      simple smart solution ;-)

sixdimensional 11 years ago

It looks like getting real-time HD video from the Moon may be more in reach than one might think using laser-based communication technology: http://www.nasa.gov/topics/technology/features/laser-comm.ht...

erikpukinskis 11 years ago

We really need voxel cameras. I think the Rift will act as a forcing function for their development.

zobzu 11 years ago

An Oculus is just a dual-display device (i.e. two camera video feeds need to be transmitted).

There is no "beam" or magic. It's two cameras, two video feeds. That's great and exciting if, as the general public, we have access to those video feeds.
