3D Novel View Synthesis with Diffusion Models

3d-diffusion.github.io

106 points by dougabug 3 years ago · 13 comments

dougabug (OP) 3 years ago

This approach is interesting in that it applies image-to-image diffusion modeling to autoregressively generate 3D-consistent novel views, starting from even a single 2D reference image. Unlike some other approaches, a NeRF is not needed as an intermediate representation.
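
To make "autoregressively" concrete: the sampler grows a set of views one frame at a time, and each denoising step conditions on a randomly chosen view from the set generated so far (the paper calls this stochastic conditioning). A minimal sketch in JAX, the paper's framework; denoise_step, num_steps, and the pose handling here are placeholders, not the released API:

    import jax

    def sample_novel_view(key, denoise_step, views, poses, target_pose,
                          num_steps=256, shape=(64, 64, 3)):
        """Generate one novel view from pure noise. At every denoising step,
        condition on a random clean view, so the result stays consistent
        with all of them on average. Hypothetical sketch, not the paper's code."""
        key, init_key = jax.random.split(key)
        x = jax.random.normal(init_key, shape)
        for t in reversed(range(num_steps)):
            key, pick_key, step_key = jax.random.split(key, 3)
            i = int(jax.random.randint(pick_key, (), 0, len(views)))
            x = denoise_step(step_key, x, t, views[i], poses[i], target_pose)
        return x

    def generate_trajectory(key, denoise_step, ref_view, ref_pose, target_poses):
        """Autoregressive loop: every new frame conditions on the reference
        image plus all frames generated so far."""
        views, poses = [ref_view], [ref_pose]
        for pose in target_poses:
            key, sub = jax.random.split(key)
            views.append(sample_novel_view(sub, denoise_step, views, poses, pose))
            poses.append(pose)
        return views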

oifjsidjf 3 years ago

>> In order to maximize the reproducibility of our results, we provide code in JAX (Bradbury et al., 2018) for our proposed X-UNet neural architecture from Section 2.3

Nice.

OpenAI shitting their pants even more.
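
For anyone wondering what X-UNet refers to: per the paper, the clean conditioning frame and the noisy frame being denoised both go through a UNet with shared weights, and cross-attention lets the noisy frame's features query the clean frame's. A rough flax sketch of just that cross-attention piece, with assumed names (the real architecture is in their released code):

    import flax.linen as nn

    class CrossAttendViews(nn.Module):
        """Features of the noisy target frame attend to features of the
        clean conditioning frame at one UNet resolution. Hypothetical sketch."""
        heads: int = 4  # assumes the channel count is divisible by this

        @nn.compact
        def __call__(self, target_feats, cond_feats):
            # Flatten (H, W, C) feature maps into (H*W, C) token sequences.
            h, w, c = target_feats.shape
            q = target_feats.reshape(h * w, c)
            kv = cond_feats.reshape(-1, c)
            attended = nn.MultiHeadDotProductAttention(num_heads=self.heads)(q, kv)
            return target_feats + attended.reshape(h, w, c)  # residual add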

  • astrange 3 years ago

    Oh, OpenAI does more or less release that much. People don't have issues implementing the models from their papers.

    What they don't do is release the actual models and datasets, and it's very expensive to retrain those.

rasz 3 years ago

This is one of the building blocks absolutely required for Full Self-Driving to ever work.

Btw, I like how it hallucinated a bumper-carrier-mounted spare wheel based on the size of the tires, the heavy-duty roof rack, and the bull bars, while the ground-truth render was in a much less likely configuration: stock undercarriage frame hanger, no spare.

mlajtos 3 years ago

Ok, NeRFs were a distraction then.

  • dougabug (OP) 3 years ago

    No, NeRFs are more interpretable because they directly model field densities which absorb and emit light. In this respect they are something akin to a neural version of photogrammetry. They don’t need to train on a large corpus of images, because they can reconstruct directly from a collection of posed images.

    On the other hand, diffusion models can learn fairly arbitrary distributions of signals, so by exploiting this learned prior together with view consistency, they can be much more sample efficient than ordinary NeRFs. Without learning such a prior, 3D reconstruction from a single image is extremely ill-posed (much like monocular depth estimation).
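
    To put the "densities which absorb and emit light" part in code: a NeRF renders a pixel by marching along the camera ray, asking the field for density and color at each sample, and weighting each emitted color by how much light survives the density accumulated in front of it. A minimal quadrature sketch in JAX; the field function and the uniform sampling are placeholders:

        import jax.numpy as jnp

        def render_ray(field, origin, direction,
                       t_near=0.1, t_far=6.0, n_samples=64):
            """Volume-render one ray. `field(points) -> (density, rgb)` stands
            in for the trained MLP. Illustrative sketch only."""
            delta = (t_far - t_near) / n_samples        # uniform sample spacing
            ts = t_near + delta * jnp.arange(n_samples)
            points = origin + ts[:, None] * direction   # (n_samples, 3)
            sigma, rgb = field(points)                  # density, emitted color

            alpha = 1.0 - jnp.exp(-sigma * delta)       # absorbed per segment
            accum = jnp.cumsum(sigma * delta)           # density in front
            trans = jnp.exp(-jnp.concatenate([jnp.zeros(1), accum[:-1]]))
            weights = alpha * trans                     # emission * survival
            return jnp.sum(weights[:, None] * rgb, axis=0)

    vmap over rays gives the whole image, and training is just a photometric loss between these rendered colors and the posed photos, which is why no prior learned from a big dataset is required.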

dicknuckle 3 years ago

I'm entirely unfamiliar with this, but is there a future where we can take a few pictures of something physical and have AI generate a 3D model that we can then modify and 3D print?

Asking as someone who's dreadfully slow at 3D modeling.

dr_dshiv 3 years ago

It seems like this could be used to create multiple views from a single image, for fine-tuning Stable Diffusion (textual inversion).

muschellij2 3 years ago

Soon to be the Face Back APP!
