What Works in VR: Lessons from $70+ Billion


VR has been “five years away” for thirty years. In the 90s, virtual reality would revolutionize everything: gaming, communication, entertainment, work. In the 2010s, the Oculus Rift changed the game, for real this time. In the 2020s, the metaverse was coming because Zuckerberg bet the company on it, which made it inevitable.

After billions invested, multiple hype cycles, and genuine technological breakthroughs, VR remains largely where it’s always been: a promising technology that most people don’t actually use. The headsets got better, the ecosystem grew, but mainstream adoption hasn’t materialized.

Why?

The standard answer focuses on hardware friction: headsets are expensive, uncomfortable, and isolating. That’s true, but there’s more to it. Smartphones were clunky and expensive at first, but early adopters tolerated friction because the value proposition was clear.

VR’s challenge runs deeper. It promised presence but often gave emptiness. It promised new worlds but gave us virtual spaces without much to do. The industry kept asking “how do we make the technology better?” instead of “why would anyone actually want this?”

Without answering that second question, technical improvements only go so far.

I’ve been building in VR since 2013, when the first Oculus development kits shipped. That’s over a decade of developing experiences and watching people encounter VR, often for the first time, at live events and demos. To me, it seems VR’s challenges aren’t primarily technical. The technology has been “capable enough” for a while. The challenge is recognizing which experiences actually resonate with people and then building toward those insights.

VR headsets have improved dramatically. Modern devices are lighter, more affordable, and more capable than ever. Inside-out tracking eliminated external sensors. Pancake lenses improved visual clarity. Standalone headsets removed PC requirements. Yet adoption remains modest: the best industry estimates put active VR headsets globally around 25-30 million, compared to over 6 billion smartphones.

The go-to explanation focuses on friction: headsets are still somewhat heavy, expensive, and isolating. That’s true, but it’s not the full story. Early smartphones were expensive, had terrible battery life, and dropped calls. People tolerated that friction because the value proposition was clear: the internet in your pocket.

Even among headset owners who’ve overcome the initial friction, usage often drops significantly after the first few weeks. VR apps struggle with retention rates below 5% at day 30, roughly on par with mobile games. But unlike mobile apps, VR users had to invest hundreds of dollars and navigate setup friction just to try the app. If hardware were the main barrier, we’d see high engagement among those who overcome it. Instead, they abandon apps at abysmal mobile-game rates.

As hardware improved, the software library expanded, but not always in compelling directions. The Quest store reveals a pattern: AAA ports that sell on brand recognition rather than being better in VR, flat-screen games or legacy mechanics awkwardly adapted for the medium, promising indie experiments buried by platform algorithms, and utilities valuable for niche use cases but not mainstream adoption.

Something else is missing. The technology creates convincing presence in virtual spaces, but presence alone isn’t enough. Without compelling reasons to be there, perfect immersion feels hollow.

Gaming found an audience: Beat Saber sold millions, Half-Life: Alyx generated $40 million in its launch month, and flight sim communities remain dedicated. Immersion genuinely enhances certain gameplay. But gaming alone hasn’t driven mainstream adoption.

Beyond gaming, most use cases struggle to answer: “Why VR instead of the easier alternative?” Virtual meetings compete with Zoom. Virtual monitors compete with actual monitors. Shopping in virtual stores competes with websites that work fine.

VRChat succeeded by offering something harder to replicate: creative expression through avatars, embodied identity exploration, and the formation of genuine community. It works because it enables experiences difficult to achieve elsewhere, not because it simulates existing activities more conveniently or accurately.

For each proposed use case, the fundamental question is: why would someone put on a headset for this?

In October 2021, Facebook rebranded to Meta. Mark Zuckerberg bet the company’s future on the metaverse: a persistent, shared virtual world where people would work, play, socialize, and conduct commerce. Meta’s Reality Labs has accumulated over $73 billion in operating losses since 2020 on this vision.

The result was Horizon Worlds, which peaked at around 300,000 monthly active users (Instagram has 2+ billion). By mid-2023, an independent investigation found fewer than 1,000 daily active users in the English-speaking version. The challenge was building infrastructure (virtual worlds, avatar systems, virtual real estate) without a clear vision of what people would do there.

John Carmack, former Oculus CTO, warned in 2022: “Setting out to build the metaverse is not actually the best way to wind up with the metaverse … the metaverse is a honeypot trap for architecture astronauts … we need to concentrate on actual products rather than technology, architecture, or initiatives.”

While Meta invested billions in infrastructure, VRChat and Gorilla Tag built experiences people wanted. In January 2026, over 110,000 people put on headsets simultaneously to witness a live comet event in Gorilla Tag. Meta created a platform assuming people would fill it with meaning, but VRChat succeeded because users created meaning organically.

The marketing failures were equally revealing. Meta’s 2022 Super Bowl ad featured animatronic characters escaping their washed-up reality through VR, and was widely panned for depicting the metaverse as escapism for the lonely.

Meanwhile, corporate experiments showcased what went wrong. Wendy’s launched “The Wendyverse” in April 2022: a virtual restaurant with a basketball arena where you shot hoops with a “virtual Baconator” basketball. Critics called it “pathetically bad,” looking like “a Wii game from 2006.” The world stayed online long after the campaign ended, becoming a literal ghost town.

Meta had the resources to frame the metaverse for mainstream audiences. Instead, through marketing missteps and action without clear purpose, they taught the public that the metaverse was empty, derivative, and disconnected from human needs. For millions who never tried VR, Meta’s metaverse became their reference point.

In 2026, Meta is making a strategic correction.

In the first two weeks of 2026, Meta laid off approximately 1,500 employees (around 10% of Reality Labs’ workforce), closed three acquired VR game studios, and announced that Supernatural would no longer receive new content. They’re shuttering Quest for Business and Horizon Workrooms while reportedly refocusing their next Quest headset on gaming.

These cuts followed a year of reorientation. Throughout 2025, Meta shifted strategic focus from the metaverse to AI, investing billions in AI infrastructure. The January 2026 cuts were just the culmination of a year-long shift. Look at what’s being cut: Quest for Business (an enterprise play that never gained traction), Horizon Workrooms (a solution to a problem few had), and expensive AAA exclusives that cost more than they’d ever return.

The narrative that emerged: Meta is abandoning VR. But Palmer Luckey pushed back on this. Meta still employs more people working on VR than any other company “by about an order of magnitude.” The Quest 3 is their best headset yet, reasonably priced around $500. They’re not leaving; they’re changing approach.

Luckey argues the studio closures could benefit the industry long-term. Meta funding massive AAA exclusives “crowded out” third-party developers who couldn’t compete with blockbusters that cost more than they’d ever return. The shift away from first-party AAA content could create space for indie developers.

Meta is still producing hardware and still running the Quest platform. If this forces them to build better hardware, improve the platform experience, and let the ecosystem develop organically, instead of willing the metaverse into existence, that could be the best outcome.

In early 2025, Meta’s VP of Reality Labs called it a “make or break” year for VR. What followed proved it: strategic retreat with layoffs, budget cuts, and a company-wide pivot toward AI. Christmas 2025 saw Quest shipments decline 16% compared to 2024. Whether 2025 was genuinely make-or-break or provided convenient cover for an already-decided shift is unclear. Either way, the era of Meta funding massive first-party content is over.

Meta’s strategic shift raises questions about the industry’s direction, but they’re not the only major player.

Apple is two years into the Vision Pro. The technology is impressive (best-in-class passthrough, eye tracking, build quality) but their positioning reveals uncertainty about its purpose. They marketed it for productivity and passive entertainment: work hard with lots of virtual monitors, then relax by consuming content alone. Neither justified $3,500 or the friction of wearing a headset when laptops and TVs already serve these needs well.

Apple’s strategy of building premium hardware first and letting developers discover use cases worked with the Apple Watch, but the Watch had minimal friction and piggybacked on the iPhone’s billion-user install base.

Vision Pro requires active intent to use every time, isolates users from their environment, and doesn’t meaningfully enhance existing Apple devices. The Watch discovered its purpose quickly and iterated with cheaper models. Vision Pro is two years in at the same price point, with use cases still centered on “big screen for movies” and “virtual monitors.”

Apple also kept the developer ecosystem relatively insular, courting existing Apple developers rather than engaging the community that spent a decade learning what works in immersive spaces. They encouraged porting iPad apps, and so the platform launched with many flat screens floating in space.

Most recently, Google and Samsung launched Android XR, their latest attempt at an open platform for mixed reality. Google is partnering with device manufacturers like Samsung rather than building hardware themselves. Whether this succeeds where their previous attempts didn’t remains to be seen.

Meanwhile, VRChat and Gorilla Tag continue drawing over 100,000 concurrent users for special events. Beat Saber continues selling years after release. Flight sim and racing communities remain active. The technology works, specific applications have engaged audiences, hardware quality has improved, yet the three biggest tech companies pursuing VR are taking very different approaches.

Amidst this corporate uncertainty, the clearest signal comes from what’s already working. The patterns are there if you know where to look.

VR can create meaningful experiences when built with clear purpose. The successes are instructive.

VR therapy works because it enables experiences impossible or impractical otherwise. Exposure therapy for PTSD, phobia treatment, social anxiety practice: all have clinical validation. It combines physical presence in triggering scenarios with the safety of a controlled environment; patients confront fears through embodied experience that creates real therapeutic outcomes.

Training simulations work because they combine physical practice with high-stakes scenarios impossible to recreate safely. Surgeons rehearse procedures, feeling the pressure and precision required. Pilots experience engine failures and weather emergencies, adrenaline flooding their system as they work through protocols. The embodiment creates muscle memory and confidence that translates to the real world.

Social VR communities like VRChat show that meaningful connection is possible with the right conditions. VRChat hosts millions who’ve formed genuine friendships and creative communities. On New Year’s Eve 2025, nearly 150,000 people simultaneously spent the holiday in VRChat worlds (most in headset). People plan weekends around these destinations. It works because it prioritizes creative expression and authentic interaction.

Immersive games work when they leverage what only VR can do. Half-Life: Alyx created environmental puzzles requiring spatial reasoning and physical interaction: you don’t just click on objects, you manipulate them in 3D space, feeling the weight and mechanics. Flight and racing sims offer mastery and escapism through full-body presence in cockpits. These aren’t just games ported to VR; they’re experiences reimagined around embodiment.

The most revealing success is Beat Saber.

Beat Saber became VR’s first genuine mainstream hit. Meta acquired the studio in 2019. By 2021, the game had sold over 4 million copies across platforms. By 2022, it had generated over $255 million in revenue. Meta fell into a rhythm (pun intended) of releasing DLC packs, but never seemed to understand what made Beat Saber work, or build on those lessons at scale.

The game remains successful years later, not because of Meta’s stewardship, but in spite of it. The core experience was strong enough to sustain engagement even without meaningful evolution. Meta’s failure wasn’t that Beat Saber stopped working; it was that they had VR’s biggest success story but failed to institutionalize its lessons.

Beat Saber works because it’s fundamentally embodied music.

You’re not passively listening or watching. You’re moving in rhythm, your body synchronized to the beat, slicing blocks in perfect time. The experience is proprioceptive; you feel it in your muscles, your breath, and your heart rate. This is what VR does distinctively well: full-body presence. You’re not controlling an avatar. You’re there, moving through space, physically engaged. Your body becomes the interface. Or rather: the abstraction of the interface disappears.

Combined with music’s ability to entrain rhythm and create flow states, Beat Saber generates something genuinely new: an experience impossible on a screen or in physical reality. It’s not simulating something else. It’s legitimately novel and can only be done in VR.

The game works as both a solo flow state and shared community challenge. It doesn’t force social interaction; you can enjoy it alone. But it creates conditions for community:

  • Leaderboards enable asynchronous competition worldwide

  • Custom song communities formed around creating and sharing content

  • Players share videos of mastering difficult tracks

  • Shared challenges create collective experiences

  • Multiplayer modes add direct competition and cooperation

The experience is meaningful enough that connection emerges naturally. People bond over achievements. They share strategies and celebrate progress. The modding community is robust; people love adding their own music and visual customizations. For many, Beat Saber answered VR’s fundamental question: “Why put on a headset?”

Beat Saber hints at something bigger. It didn’t try to simulate going to a concert or replicate playing a real instrument. It asked: what if music could be a full-body experience, physicalizing rhythm in impossible ways, in impossible spaces?

This is VR’s actual promise: not better simulations of reality, but entirely new experiences that reality can’t provide. Not “what does the real world do that we can recreate?” but “what’s possible when we’re freed from constraints of the real world?”

VR works best when it combines three elements:

1. Clear embodied activity - Physical presence and movement that engages your body meaningfully, not just “being in 3D space”

2. Emotional hook - Connection to something people genuinely care about (music they love, challenges that matter, relationships, mastery, fear, joy)

3. Meaningful experience - Creates unique value that justifies the friction: either something impossible elsewhere, or something significantly better when embodied

What makes an experience “meaningful” in VR? It provides lasting value through replayability, social bonds, skill development, or real-world outcomes. It answers “why put on a headset for this?” with something compelling.

This framework isn’t a panacea; exceptions exist.

Quality ports like Resident Evil 4 VR can succeed through beloved IP and careful adaptation that genuinely enhances the experience through embodiment. Meta’s AAA investments (Asgard’s Wrath, Batman: Arkham Shadow, Deadpool VR) were often well-reviewed and high-quality; they just couldn’t carry the platform on their own. Those titles are valuable content for headset owners, not reasons to buy a headset.

But as a lens for evaluating VR concepts, this framework clarifies why most experiences struggle: they’re missing one or more elements. The more of these elements you have, and the better you execute on them, the more likely your experience is to resonate.

Put simply: VR succeeds when it creates value that justifies putting on a headset, by leveraging embodied presence in ways flatscreens can’t compete with.

Test your VR concept against the three elements: embodied activity, emotional hook, and meaningful experience. Then ask: what makes this better in VR than the easier alternative?

The opportunity isn’t in doing existing activities “but in VR.” Painting apps that are just painting in 3D. Meditation apps that are just sitting in nature. Virtual concerts that just recreate standing in a crowded venue. These are only first steps, and they fail to leverage the medium’s potential.

The question is: what becomes possible when you’re freed from physical constraints AND you design for embodied presence?

What if learning happened through proprioceptive memory instead of memorization? What if performance spaces let audiences move through and shape art rather than viewing it? What if social rituals were reimagined from scratch for impossible environments and scenarios, rather than recreating real-world gatherings?

Most VR creative tools hand their users a massive toolbox and leave them to “figure it out,” hoping the user can learn complex interfaces AND adapt to spatial thinking simultaneously. But what if the tool itself was the experience, with context and meaning built in? What if creative activity was guided, purposeful, collaborative?

The domains exist. The question is whether we’re willing to truly reimagine them for the medium, rather than only recreating them in VR.

VR’s lessons to date are validated by millions of users and billions of hours of usage. The technology is capable, the hardware is accessible. A few factors make the next phase different from what’s come before:

Generative AI enables experiences that weren’t possible in VR’s first decade. Procedurally generated worlds that evolve in real-time. AI-driven characters that respond naturally. Real-time asset generation that makes infinite content feasible. This opens creative possibilities that simply didn’t exist before.

Artists, musicians, designers, and storytellers who’ve built audiences on 2D platforms now have accessible tools to create immersive experiences. The infrastructure exists, the distribution is there, and most importantly: there’s an audience ready to engage. Individual creators can now build for VR without needing a full studio behind them.

Gen Z and Gen Alpha are growing up with VR as a known entity, not a sci-fi promise. They don’t carry the baggage of failed hype cycles. They judge experiences on whether they’re compelling, they create content natively for the medium, and they see it as simply another canvas. In their prime creative years, they’ll build in VR the way millennials built for mobile.

Meta remains VR’s dominant platform holder. Google and Apple are building their own ecosystems. The infrastructure they’ve created enables development while also constraining it: store policies, platform cuts, visibility algorithms. That’s the reality creators must navigate. But the post-hype clarity is liberating: no one needs to build “the metaverse” anymore. The failed vision creates space for a simpler target: does this experience justify putting on a headset?

VR’s biggest limitation isn’t technical; it’s imaginative. Most VR development is informed by experience from game development or film, bringing assumptions from those mediums. We need people from disparate fields who understand the human experience from different angles.

Look at what worked: Beat Saber succeeds because someone understood kinesiology and rhythm entrainment. VRChat succeeds because someone understood identity, creative expression, and community formation. VR therapy succeeds because someone understood psychology and embodiment.

The developers who’ll define VR’s next decade won’t just know Unity or Unreal. They’ll be informed about human psychology, kinesiology, sociology, and spatial design. They’ll look to immersive theater for presence and participation, LARP for embodied roleplay, escape rooms for collaborative problem-solving, and theme park rides for visceral experience.

VR isn’t game development in virtual space. It isn’t cinema in virtual space. It’s a new medium that requires new thinking.

Meta still employs more people working on VR than any other company. They’re still producing hardware, still running the Quest platform. Their strategic shift away from massive first-party AAA content creates space for independent developers to build for specific communities rather than chasing mass adoption.

This is VR’s actual opportunity: not the metaverse, not replacing smartphones, but creating experiences meaningful enough to justify the friction. The technology works. The distribution exists. The lessons are clear. What’s lagging is the imagination to build something genuinely new rather than porting the old.

VR’s first chapter was about technology. The second was about infrastructure and expensive lessons in what doesn’t work. The third will be about what gets built with those lessons by people who understand both the medium’s capabilities and human experience. We need to pull from diverse disciplines and ask not “what can we bring into VR?” but “what can VR make possible?”

The question was never whether VR would succeed. The question was always what form that success would take. After $70 billion spent searching for what to build, we’ve finally found why.

The music industry faces parallel challenges.

In a previous piece, I examined how streaming optimized for attention instead of presence, delivering infinite access while making music passive background noise. The pattern is identical: infrastructure lacking purpose, convenience without meaning, tech that doesn’t answer why people should care.

In the next post, I’ll examine the intersection of these industries: virtual concerts, 360 music videos, rhythm games, music-centric creative tools, virtual DJ apps, music visualizers, and everything else that lives where music meets immersive technology.

Understanding the evolution of these formats reveals the path to marrying presence, embodiment, and music in new ways that were never before possible.

I’m Tyler Doshier. I’ve worked at Apple and TuneIn, and I’m a DJ and music producer. I’ve also been building in VR for over a decade. Now, I’m building something new at the intersection of music and immersive tech. Liner Notes is where I explore the future of music, technology, and how we connect through shared experience.

