RTFM: Pulling Real-Time World Models Into The Now


Worldlabs' RTFM is a flexible take on realtime prompts

The crew at Worldlabs just published RTFM, a riff on how realtime agents can be re-scripted on the fly without breaking the conversation. Their approach treats instructions less like a fixed brief and more like a mutable field guide, letting designers remix agent behaviour mid-session without losing the thread. Paired with their broader world-building manifesto in Bigger, Better Worlds, it shows the studio pushing toward adaptive sandboxes where narrative, tooling, and community edits collide.

Why their interaction model matters

  • Context tuning without restarts. They show how prompt fragments can be toggled, stacked, or retired during a session so that the agent keeps responding in rhythm. That is a huge unlock for filmic pacing where you want to sharpen or soften the dread beat-by-beat.
  • Design-first ergonomics. By emphasising interaction scaffolds over raw API calls, RTFM encourages creative teams to think in story arcs instead of parameter fiddling. The editor-as-notebook vibe keeps the focus on narrative outcomes rather than infrastructure.

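To make the toggle/stack/retire idea above concrete, here is a minimal sketch of the pattern in Python. Everything in it — the `InstructionSet` class and its method names — is our own illustration of the concept, not Worldlabs' actual API.

```python
# Hypothetical sketch: a mutable set of instruction fragments that can be
# stacked, toggled, or retired mid-session, then compiled into the agent's
# current instruction string. Names are illustrative, not Worldlabs' API.

class InstructionSet:
    def __init__(self):
        self._fragments = {}  # name -> (text, enabled); dicts keep insertion order

    def stack(self, name, text):
        """Add or overwrite a fragment; new fragments start enabled."""
        self._fragments[name] = (text, True)

    def toggle(self, name, enabled):
        """Flip a fragment on or off without losing its text."""
        text, _ = self._fragments[name]
        self._fragments[name] = (text, enabled)

    def retire(self, name):
        """Drop a fragment from the session entirely."""
        self._fragments.pop(name, None)

    def compile(self):
        """Assemble the active fragments into one instruction string."""
        return "\n".join(text for text, on in self._fragments.values() if on)


session = InstructionSet()
session.stack("tone", "Keep the narration hushed and slow.")
session.stack("dread", "Escalate the tension with every reply.")
session.toggle("dread", False)   # soften the dread beat without retiring it
print(session.compile())          # only the active fragments remain
```

The point of the sketch is the session-long mutability: the compiled instructions change between turns, so the agent keeps responding in rhythm instead of being restarted with a fresh prompt.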
Worldlabs RTFM Launch FAQ

What is Worldlabs RTFM actually launching?

RTFM is their new interaction layer that lets teams edit agent behaviour live, mixing reusable prompt fragments, tools, and profiling variables inside a single workspace. Instead of hard-coding prompts per scenario, you can dynamically assemble instruction sets that react to the conversation in real time, all from a designer-friendly UI.

How does the launch experience feel day one?

Worldlabs ships the launch with a canvas that behaves like a collaborative script board. You can drag instruction blocks, enable or disable behaviours, and watch the agent respond instantly. The canvas feels lightweight, and that responsiveness keeps creative teams in the flow instead of bouncing out to a code editor.

Which teams should care about RTFM right now?

Anyone orchestrating AI companions for storytelling, support, or adaptive gaming can benefit. The launch positioning highlights narrative designers and product prototypers who need to tweak character tone, pacing, or guardrails without redeploying infrastructure each time.

Where do we see the roadmap heading?

They hint at deeper analytics for instruction performance, multi-agent choreography, and tighter version control. If the launch momentum holds, expect RTFM to collect more composable behaviours so designers can share libraries without writing new glue code every sprint.

Launch takeaways for builders

  • Reusable instruction blocks mean faster iteration. Teams can store the behaviours that work, duplicate them across sessions, and keep shipping new experiences without resetting the entire stack.
  • UI-first orchestration keeps designers in control. Rather than pushing patches through a dev-only workflow, the launch encourages cross-functional teams to co-own the agent voice and reactions.
  • Realtime evaluation closes the feedback loop. Because prompts update on the fly, you can measure which directives land, prune the ones that do not, and keep the experience coherent for the audience.
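The evaluation loop in the last takeaway can be sketched in a few lines. This is a hedged illustration only: the scoring rule (mean audience feedback per directive) and the pruning threshold are our own assumptions, not anything Worldlabs has published.

```python
# Illustrative only: score each directive from simple thumbs-up/down
# feedback (1 = landed, 0 = fell flat), then prune the ones whose mean
# score falls below a threshold. Rule and threshold are assumptions.

def prune_directives(scores, threshold=0.5):
    """Keep directives whose mean feedback meets the threshold."""
    kept = {}
    for directive, ratings in scores.items():
        if ratings and sum(ratings) / len(ratings) >= threshold:
            kept[directive] = ratings
    return kept


feedback = {
    "whisper-narration": [1, 1, 0, 1],    # mostly landing
    "jump-scare-every-line": [0, 0, 1],   # wearing thin on the audience
}
survivors = prune_directives(feedback)
# "jump-scare-every-line" averages about 0.33 and gets pruned
```

Because prompts update on the fly, a loop like this can run between beats rather than between deployments, which is what keeps the experience coherent for the audience.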

How to get started with Worldlabs RTFM

Worldlabs is inviting early adopters to explore RTFM through their launch page, with sign-ups funnelling into a guided onboarding. They emphasise sharing prompt kits with the community, so expect regular drops of pre-built behaviours and live office hours that walk through cinematic and support-driven examples. If you are experimenting with AI-led narratives, the launch is a good moment to request access and benchmark how flexible instructions might reshape your prototyping loop.

What it means for ScaryStories

We have been shaping ScaryStories Live around cinematic control panels that let directors choreograph tension like a live mix. RTFM reinforces that we should keep lowering the barrier for creative teams to jam with AI companions in the moment, not just pre-plan sequences. Expect to see us borrow some of that interaction flexibility as we keep evolving the multiplayer room tools.

📡 We are launching ScaryStories Live on Product Hunt.
Dive into the full rundown in our launch briefing: Product Hunt Launch.

Our friendly rivalry

Worldlabs shows that agent orchestration can be impressively flexible, even before a production-grade system is in place. Their RTFM launch focuses on modular text-based interactions, while our ScaryStories Live launch emphasises rich interactive video, where every camera movement and sound cue updates in real time. You could say their system is flexible but not as feature-rich, while ours is feature-rich but not as flexible. That creative tension keeps us exchanging ideas and pushing each other forward, so the next wave of AI horror storytelling can offer the best of both worlds.