Genie: Generative Interactive Environments

sites.google.com

82 points by kuter 2 years ago · 16 comments

jasonjmcghee 2 years ago

> Genie is capable of converting a variety of different prompts into interactive, playable environments that can be easily created, stepped into, and explored

If these generate fully interactive environments, why are all the clips ~1 second long?

Based on the first sentence in your paper, I would have expected a playable example as a demo. Or 20.

But reading a bit further into the paper, it sounds like the model needs to be actively running inference and will generate the next frame on the fly as actions are taken- is that correct?
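The loop the commenter is describing can be sketched in a few lines. This is purely illustrative, assuming the setup the paper implies (the model runs inference each step and generates the next frame conditioned on the player's action); the names `WorldModel` and `predict_next_frame` are hypothetical, not Genie's actual API.

```python
# Hypothetical action-conditioned inference loop: each frame is
# generated on the fly as actions are taken, rather than rendering
# a whole video up front.

class WorldModel:
    def predict_next_frame(self, frames, action):
        # A real model would run a learned dynamics network here;
        # this stub just echoes the last frame.
        return frames[-1]

def play(model, first_frame, actions):
    frames = [first_frame]
    for action in actions:
        # The next frame depends on the history so far plus the new action.
        frames.append(model.predict_next_frame(frames, action))
    return frames

frames = play(WorldModel(), "frame0", ["left", "jump", "right"])
print(len(frames))  # 4: the prompt frame plus one frame per action
```

The point of the sketch is that "playable" here means interactive inference, so clip length is bounded by how long the model stays coherent, not by a fixed video budget.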

polygamous_bat 2 years ago

Firstly, do these models learn a good physics grounding for nonsense actions, like pressing down even when you are already on the ground? Or will they phase you through the ground?

Secondly, why are all the videos only about half a second long? I thought video generation had come much further than this. My guess would be that the world models unravel at any greater length, which is (and has always been) the problem with models like these. Video generation aside, we already had pretty good world models for games; see the Dreamer line of work: https://danijar.com/project/dreamerv3/

  • jparkerholder 2 years ago

    Author here :) Re: 1) typically no, but of course it can hallucinate just like LLMs. 2) Agreed but the key point missing is Dreamer is trained from an RL environment with action labels. Genie is trained exclusively from videos and learns an action space. This is the first version of something that is now possible and will only improve with scale.
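The distinction the author draws (Dreamer sees action labels, Genie infers an action space from raw video) can be illustrated with a toy example. This is not Genie's actual method, just a minimal sketch of the idea: assign each frame-to-frame transition to the nearest entry in a small discrete codebook, so transitions cluster into "actions" without any labels. The codebook and 1-D "frames" here are made up for illustration.

```python
# Toy latent-action assignment: discretise unlabeled frame transitions
# into a small codebook of learned "actions".

def nearest_code(delta, codebook):
    # Pick the codebook id whose vector is closest (squared distance)
    # to this observed transition.
    return min(codebook,
               key=lambda k: sum((d - c) ** 2 for d, c in zip(delta, codebook[k])))

# Hypothetical 1-D "frames"; real frames would be image tensors.
frames = [0.0, 1.0, 2.0, 1.0]
deltas = [(frames[i + 1] - frames[i],) for i in range(len(frames) - 1)]

# Two latent actions; in practice the codebook itself is learned.
codebook = {0: (1.0,), 1: (-1.0,)}
latent_actions = [nearest_code(d, codebook) for d in deltas]
print(latent_actions)  # [0, 0, 1], e.g. "right", "right", "left"
```

A controllable policy then only needs to pick codebook ids at play time, which is why no action-labeled training data is required.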

    • polygamous_bat 2 years ago

      Thanks for braving the crowd here, you will unfortunately only find hard questions.

      Anyway, about my second question: why are the videos only about half a second long? Does the model unravel after that?

      Also

      > This is the first version of something that is now possible and will only improve with scale.

      11B params is already pretty large compared with Stable Diffusion and typical LLM scales. How much higher do we need to scale before we get something useful beyond simple setups?

      • jparkerholder 2 years ago

        The bigger issue is a failure to generate novel content rather than a total "unravel". We focus on OOD images because our motivation is generating diverse environments, but these are much harder to play for longer vs. images closer to the training videos. It is interesting because one of the things you gain when going from 1B->10B is the OOD images working at all. Note it is not even trivial to detect the character, given our model does not train with any labels or have any inductive biases to do so.

        Point of clarification -- we don't expect bigger models to be the only way to improve this and are working on innovations on the modeling side, however we don't want to overlook the significance of scaling either :)

        • YeGoblynQueenne 2 years ago

          >> Note it is not even trivial to detect the character given our model does not train with any labels or have any inductive biases to do so.

          Why not add inductive biases then and make your life easier? What's with this choice to try and do everything the hard way, presumably to make a point? In the end the point made is so specific that it translates to nothing that is usable in real problems.

          See MuZero for example: sure, you can learn without being given the rules explicitly, just from the win/loss signal, but then that only works in board games and Atari games, without a snowball's chance in hell of working in the real world. We're dazzled by the technical prowess, but real utility? Where is that?

nycdatasci 2 years ago

The results seem quite bad. Compare the static image and "game" in this one example: Static Image: https://lh3.googleusercontent.com/c0GV4hG0Xg0eqpsUS1z62v6aJ2... "Game": https://lh5.googleusercontent.com/L_WsAa1saPmj29DSKda_fzk15y...

In the video, the character becomes a pixelated mess. In the static image, the character is clearly on rocks in the foreground, but in the "game" we see the character magically jumping from the foreground rocks to the background structure which also contains significant distortions.

The extremely short demo videos make it slightly harder to catch these obvious issues.

  • polygamous_bat 2 years ago

    What is the video resolution, 64x64? And even then it becomes blurry. Seems like another Google flag-plant-y paper filled with hot air that we will never see the source code or model for because it will expose how poor its capabilities are relative to competitors.

    The internal politics at these places must be exhausting. Industry research was supposed to be free from the publish or perish mindset, but it seems like it just got replaced by a different kind of need for posturing.

    • jparkerholder 2 years ago

      Hey author here :) First, tough crowd, love it, always great to get feedback because we are actively working on improving the model. We are very happy to admit it is not perfect, but given not many people thought this was possible a year ago, I am quite excited to see the next step of improvement. This is like the GPT1 of foundation world models, and we have a fair few ideas in the works to speed up progress.

      The resolution is 90p but we use an upsampler to make it 360p for examples on the website.
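      The upsampling stage the author mentions can be sketched with the simplest possible scheme. Genie's actual upsampler is presumably a learned model; this is just nearest-neighbour 4x scaling (the factor that takes 90p to 360p) to show what the stage does dimensionally.

```python
# Nearest-neighbour 4x upsample: each pixel is repeated into a
# factor-by-factor block, quadrupling both height and width.

def upsample(frame, factor=4):
    # frame is a 2-D list of pixel values (rows of pixels).
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

frame = [[1, 2], [3, 4]]          # stand-in for a low-res frame
big = upsample(frame)
print(len(big), len(big[0]))      # 8 8: a 2x2 frame becomes 8x8
```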

      • nullptr_deref 2 years ago

        How can I get started with this kind of research? Is it even possible without a PhD? Thanks.

        • jparkerholder 2 years ago

          If we did a good job then the paper should be written in a way that is digestible. When you don't understand things, follow the references to learn more (and there's probably videos covering most of the components we use).

          In the Appendix we have a case study that should be possible to re-implement and run with a single GPU/TPU. We are hoping the community can build from that and innovate. If you take these steps and get stuck, feel free to get in touch!

sqreept 2 years ago

I've read the announcement twice and I can't tell what this is good for. Can you please dumb it down for me?

snide 2 years ago

I'm old and immediately assumed this would link to a historical retrospective of GEnie:

https://en.wikipedia.org/wiki/GEnie

mdrzn 2 years ago

Seems very interesting, but as soon as I see "Google Research" or "DeepMind" now it's an instant turn-off. Too much PR, not enough substance. Not targeting you guys directly with this research, but the company you work for.

joloooo 2 years ago

Looking forward to following your progress. I've been wanting to see how we might replace polygons for gaming long term, and this seems like a step in the right direction.
