3D support for X11 guests (virtualbox.org)
VirtualBox 3D progress has unfortunately been slow for a long time. That was somewhat anticipated after Oracle's acquisition, since their laser focus on legal engagements and consulting pushes a lot of technical projects to the back burner.
I'd love to see a bit more alignment between VirtualBox and Qemu. Even though KVM and Qemu's virtualized graphics acceleration is still a WIP, having a shared code base could accelerate progress on things like SPICE (https://www.spice-space.org/download.html) and Virgl (https://virgil3d.github.io/). Unfortunately, I think that stack uses a lot of Linux-specific technologies (e.g. KVM, Gallium), so the likelihood of sharing at that level seems pretty low. Although, supposedly Gallium isn't locked to Linux.
Theoretically, if we could agree on a common host-guest interface for a virtual graphics adapter, the guest implementations could be shared and host implementations added as needed, and the whole thing could be reused across multiple projects. But this always seems to be the kind of tech that never gets sufficient cross-project collaboration, and given the many differences between vendor hardware, a portable interface has proven difficult. Maybe Vulkan will provide enough low-level functionality to ease the abstraction?
Without 3D support, basically no one can use VirtualBox, so is the dilemma whether or not to kill the project?
(This is a serious question--what can possibly be more important to average virtualbox users than having a working desktop environment? Virtualbox has always occupied the desktop virtualization niche, so I'm trying to figure out what has changed...)
I use Virtualbox daily without ever using a desktop environment in the VM. It's very useful in headless form for development machines when working locally.
I find this baffling, since KVM is much more amenable to being scripted from the command line.
KVM isn't an option on OS X or Windows - in those cases, VirtualBox is the most mature free VM software available.
HAXM module and QEMU works fine for me on Windows.
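For anyone curious, the invocation is short. A minimal sketch, assuming a QEMU build with HAXM support (the disk image name and memory size are placeholders):

  # boot an existing disk image with Intel HAXM acceleration on Windows
  qemu-system-x86_64 -accel hax -m 2048 -smp 2 -hda win-dev.qcow2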
Windows and OS X both come with native hypervisors now. Those are what, e.g., Docker "Native" runs on.
At least on Windows I find that Virtual box gives a better interactive desktop experience when running Linux as a guest.
Exactly. A few days using HyperV for my linux desktop guest was enough to convince me that I didn't need Docker for Windows THAT bad. I'm back to Virtualbox.
Portability of VMs shouldn't be overlooked; it's why VMware is still quite relevant, and why, when that's out of the question, VirtualBox is very popular.
If my VMs can't easily be run on any host from backup they tend to be quite useless at the most critical of times.
Do you know of a virtualbox-like manager for xhyve?
While I do love KVM, it takes a lot more to get running properly than installing Virtualbox and Vagrant and being up and running with a whole ecosystem of every type of prebuilt box you could imagine.
Yep, the last time I tried to get a KVM machine going I had to hack XML files to get the correct SCSI controller just so I could mount an ISO. It's 3 clicks in VirtualBox.
That's not KVM, that's libvirt, which uses the XML files. KVM can be configured entirely from command line parameters.
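For example, booting an installer ISO against a fresh disk needs no XML at all. A rough sketch (paths and sizes are placeholders):

  # create a disk and boot the ISO directly under KVM
  qemu-img create -f qcow2 disk.qcow2 20G
  qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=disk.qcow2,if=virtio \
    -cdrom debian.iso -boot d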
virt-manager makes it much easier
Have you ever used VBoxManage?
I've tried to use VBoxManage, but mostly to no avail. I have some full-system backups with not-exported virtualbox vm's in them, and usually have to hand edit a bunch of undocumented xml to make them start. I guess vagrant makes this sort of thing better, somehow, but I'm moving everything I possibly can to docker, and then I'll deal with the remaining desktops using some other technology (maybe vagrant, maybe not).
As a light user of virtualbox, I get the impression using anything but the gui is really hard compared to other virtualization tools.
For instance, auto-starting raw-disk vms at boot on windows is a pain, and hyper-v is free. Its only real downside is lack of gui support for linux guests.
Similarly, you need to stand up separate backup infrastructures for your desktop and virtualbox, or you will be told "you're doing it wrong" when you try to restore.
I guess I could try to write a script to get a crash-consistent export of running virtualbox vms, but the host file system (btrfs, zfs, etc) already does an adequate job of that, so this is just useless administrative overhead and disk space waste from my point of view.
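For what it's worth, if anyone does want to script it, something along these lines should work. A sketch only; the VM name, snapshot name, and backup path are placeholders, and the clone costs extra disk space:

  # take an online snapshot, then clone that snapshot into a backup folder
  VBoxManage snapshot "devbox" take nightly --live
  VBoxManage clonevm "devbox" --snapshot nightly --name devbox-backup \
    --basefolder /backups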
Anyway, when win8 came out, virtualbox couldn't handle the new start menu, and windows guest vdi was my only remaining use case for it at the time.
It is a shame, VirtualBox was my go-to vm solution for years, specifically for its debian/ubuntu host support (unlike vmware) and good desktop guest support (unlike all the other options).
I doubt that I'd even try to pull VMs out of a full-system backup without restoring it first. Over my head, for sure.
I've used VBoxManage to create, manage and control VMs on remote headless hosts. And work with VRDC desktops. Easy.
Backup/restore of the host system should have nothing to do with VirtualBox. It's just files. But then, I use Debian hosts. For backups, I tend to use LUKS-encrypted SSDs via USB and just copy the full VM folder. Works perfectly.
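For reference, the headless workflow is only a handful of commands. A minimal sketch; the VM name, OS type, and ISO path are placeholders:

  VBoxManage createvm --name devbox --ostype Debian_64 --register
  VBoxManage modifyvm devbox --memory 2048 --cpus 2 --nic1 nat
  VBoxManage storagectl devbox --name SATA --add sata
  VBoxManage storageattach devbox --storagectl SATA --port 0 --device 0 \
    --type dvddrive --medium debian.iso
  VBoxManage startvm devbox --type headless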
Vagrant. Many of us still have and use Vagrant.
vagrant is completely modular though... i use vagrant with libvirt/qemu at work for nested VMs
How do you set up a two-way synced directory in Vagrant without virtualbox?
IME, you can't.
I'm not a vagrant user, but your question intrigued me. A quick skim of the documentation for the vagrant-libvirt module shows a synced folders section[0], which appears to support configs for NFS and 9pfs for two-way sync. Do those not work? I use vanilla libvirt, and I've had great luck with exposing my development directories to VMs using the 9pfs resource sharing. If the vagrant plugin isn't doing that, then it may just be a small bug in the libvirt XML file they're generating. Or am I totally off-base?
[0] https://github.com/vagrant-libvirt/vagrant-libvirt#synced-fo...
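For what it's worth, the underlying 9p sharing is easy to try with plain libvirt too. A sketch, assuming a libvirt guest with 9p/virtio support; the VM name, paths, and share tag are placeholders:

  # host: add a shared directory to the domain definition
  cat > share.xml <<'EOF'
  <filesystem type='mount' accessmode='mapped'>
    <source dir='/home/me/code'/>
    <target dir='hostshare'/>
  </filesystem>
  EOF
  virsh attach-device devbox share.xml --config

  # guest: mount the share read-write
  sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/code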
Thanks, I'll look into it more closely. I tried the libvirt module and I remember having issues with it. I don't remember if there were no compatible debian boxes for it, or it didn't work properly on windows... something was there.
I see this as a vagrant issue though. It really falls short of its promise of being a dev environment where you "never say 'it works on my machine' again".
"can't" is a strong word.
Why can't you do it in user space within those VMs, communicating over a network adapter? If you don't care about putting on a raincoat, going outside to the Internet, and launching it up into the cloud, you can just install Dropbox and get a synced directory without additional work. Obviously this is extra, unnecessary overhead, twice over, but for your use case you may not care.
You can alternatively roll your own solution, though it might take you an hour or two. Think rsync over a local network adapter.
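A sketch of the rsync route, assuming the guest's SSH port is NAT-forwarded to 2222 on the host (paths, port, and user are placeholders):

  # push host changes into the guest; swap source and destination to pull
  rsync -az --delete -e "ssh -p 2222" ./project/ user@127.0.0.1:/home/user/project/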
I realize these are hackish solutions, but if it's stupid and it works, it's not stupid.
On the plus side, you will control that traffic. Directory syncing that breaks the VM abstraction opens leaks due to potential oversights in how it's coded.
If I were in your predicament, I would work around it.
Bidirectional syncing's primary use case is development. Requiring a bunch of hacks and overhead is a terrible idea, and controlling the traffic is not a useful feature for that use case.
I understand that, but it's not reasonable to ask that the VM intrusively modify the filesystem without the guest operating system knowing about it: therefore the "correct" solution is some guest userspace utilities for the user to install, which perform these operations from inside the VM. (If a 5 in a file changes to a 6, then rather this change happening from outside the VM, to the total surprise of the guest OS, as though you pulled the hard drive, mounted it in another computer, modified that one value, and remounted it in the original computer, all without the guest OS even being aware that the hard drive had been unmounted or modified, instead, it should be performed by a utility from inside the VM.)
The difference between this proper approach, and what you call "hacks" is minimal, and basically a question of packaging. To be clear, I agree that the VM developer should write and package these utilities for every major guest operating system it supports.
You don't even have to rule out DEs wholesale. Lightweight ones like XFCE run quite well with no 3D support.
> Without 3d support, basically no-one can use virtualbox
What? I can only laugh at such a statement
Don't use hyped up desktop managers that need 3D to work.
My personal laptop (1 GHz Celeron) doesn't even have graphics drivers compiled in and just uses the EFI framebuffer. It's really great! It runs cool and performs well; software graphics are underrated. Just don't run ridiculous stuff like GNOME that's full of crazy animations every time you do something simple.
Hmmm. I have a 2016-vintage Celeron 3050 (think MacBook, but $189 with half as many cores) and even on bare metal, web browsing is unusably slow without hardware acceleration. Sites like HN load instantly, but Amazon product pages take tens of seconds after ~2 tabs.
This is with firefox (they also broke 3d acceleration in linux); chrome is OK, and so is firefox if I force enable video acceleration and ignore the severe visual artifacts.
FWIW, Chrome is still a bit sluggish on a 12 core xeon with no GPU, but it is usable (unlike Firefox on the same VM).
Anyway, I used to get away with no gpu acceleration on my laptop, but it became untenable in ~ dec 2016. It still sorta works on the server-side web browser.
Amazon pages are sluggish even on my Core i7 machine with hardware acceleration that I use for gaming. I really don't think that's a good benchmark.
Try using 16-bit color depth instead of 32-bit
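If anyone wants to try that, a drop-in X.Org config fragment in the guest is enough. A sketch; the file name is arbitrary, this needs root, and the Identifier may need to match an existing Screen section on your system:

  cat > /etc/X11/xorg.conf.d/10-depth.conf <<'EOF'
  Section "Screen"
      Identifier "Default Screen"
      DefaultDepth 16
  EndSection
  EOF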
It should be an option. 3D hardware is the best way we have to accelerate 2d these days.
Both XFCE and GNOME 2 run perfectly fine in VirtualBox.
Likely everything except GNOME 3.
Gnome 3 too, in X11 mode.
In Wayland mode, only if you are very patient. I'm not. Virtualbox support for Wayland is not there.
Why do you need 3D acceleration for using a system? GUIs work perfectly without any 3D acceleration. Are you planning on playing games? If so, it is not a good idea in any VM.
I use a minimalist window manager, but I think I am in the minority, even among software engineers that use linux as their primary development environment. This almost works, except web browsers are extremely GPU heavy these days, so JavaScript laden sites (jenkins, amazon, news) often become unusable with more than ~10 tabs.
Anyway, most engineers at work use unity because it is the default, and it is a non-starter without hardware acceleration. We site-licensed vmware, and it seems to work OK.
The last time I checked, the second most common choice is fvwm. It sorta works out of the box and isn't unity. Also, it can run on the big server VM's, unlike unity (there is no 3d acceleration in the server kvm instances...)
Another common choice is to do everything via console, productivity be damned. (There are some highly productive console users, but they tend to put linux on their laptops on day one anyway)
Wow, Unity on a VM? I haven't seen that, actually I haven't seen any Linux user in my current and previous employers that would start any X apps on a VM.
Why start a browser there? Is it for testing the web pages in different environments? If not, wouldn't it be more reasonable to start it on bare metal (the machine you connect to the VM from)?
But it might be my bias, I'm one of those that install Linux on a company laptop on day one :) so I'm most efficient in the console.
I have fine desktop environments without 3D. Maybe I'm just totally old-school, but I've never even felt the need for 3D on physical machines. But then, I use VMs mainly with VPN chains and Tor, for isolation and compartmentalization. If I really needed a 3D desktop environment, I'd use a dedicated box, and route through whatever anonymization path needed, using another box as multi-VM router.
Same. For the first time, because of VB, I have a Windows machine that I didn't wipe Windows from. I spend almost all my time in VBox. And I get to play with other distros risk-free.
But then, I spend most of my time in vim and localhost web pages.
I hope they get the help they're asking for; I wish I could do it myself. I think it's a great program and it would be great if it stuck around for a while.
It depends on the window manager. From my experience recently, lightdm without acceleration is snappier than cinnamon with acceleration.
It is free (compared to VMware). It is easy to start with (compared to, say, Docker). It allows you to 'simulate' an architecture with multiple machines; doing it all on one machine introduces dependencies you didn't expect (e.g. shared env, everything on localhost). It allows you to run a server application without cluttering your dev machine.
I've used a default Ubuntu desktop install in VirtualBox and it seems to work (though a little laggy). I haven't tried playing video games in it.
I don't understand how this is a "dilemma." It's like saying, "on one hand, I really need to mow my lawn. On the other hand, I really don't want to. It's a real dilemma!"
I think it's more akin to a free lawn mowing service that's having trouble cutting new types of grass.
It's like saying "On one hand, we really want to cut your grass for free. On the other hand, cutting new types of grass takes a significant amount of time that we don't have."
I get that it's really nice to have your cake and eat it too, but unless you're paying for said cake, I'm not sure that your critique is very accurate.
Don't get me wrong; I'm not demanding they fix this bug. They don't have the resources to prioritize it, and hey, that's their prerogative. It's a free product, after all. I just think calling it a dilemma is weird and makes it sound like there's some counter-argument to fixing the problem beyond "we don't have time."
They mow the grass on a park that's open to us all to use, but they don't have time to mow that bit over there. They're just asking if anyone else has time to do it instead.
Trolling people in an attempt to get their lawn mowed by others.
Obvious question, why not just run an X11 server on the host?
Perhaps Oracle and Microsoft (WSL) can collaborate on supporting an X11 server (ones already exist) for Windows 10.
That would work, but perform poorly. There are a large number of effects that need shared-memory tricks to work efficiently in X, so any "modern" desktop is going to be iffy running that way.
For windows, try mobaxterm. I tried a half dozen x11 servers a few years ago, and it was the best by far.
It works great, modulo 3d acceleration. If I remember right, cut and paste work well, so that (plus a file share for Downloads) gets rid of the need to run a web browser in Linux.
Similarly, it can use the windows wm to manage the x11 windows, so you automatically bypass the linux compositor.
[edit: Also, Hyper-V is extremely fast in this type of setup, because they focus on server performance, and this workload looks like any other network/io intensive server]
Can't you already do that with the DISPLAY variable? Although there could be less overhead in running a local (guest) X server and streaming only the GL commands.
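As a sketch, both variants look roughly like this (the addresses assume VirtualBox's default host-only network and are placeholders):

  # run a guest app on the host's X server via SSH X11 forwarding
  ssh -X user@192.168.56.101 firefox

  # or point the guest's DISPLAY at an X server on the host listening on TCP
  export DISPLAY=192.168.56.1:0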
Many Linux beginners choose VirtualBox to learn the system. If it can't offer a good user experience with graphics hardware acceleration, we may lose a lot of potential open source community supporters.
I finally gave up on virtualbox and bought vmware instead for my mac. It works much better.
Never worked well and we turn it off in our Vagrant boxes.
If you need 3D support in a VM on your workstation, use VMware Workstation 12... it's awesome (but Vagrant support sucks).
We switched to LXC for Vagrant; it is amazing for GPU support, speed of startup, and low memory usage. I would never go back to running full VMs via VB or VMware.
You could wait years for some 3D support in VB, or you could just side step the problem and go with lxc.
Could you talk about how you set this up (and how you use it)?
I'm assuming that this is Linux-only.
Same. These days, I usually use my Linux system in a Windows host with raw-disk passthrough and VirtualBox. I've tinkered with the 3D acceleration but all it's done is break things for me.
My experiments with VMWare show that while it does well on some things that VBox struggles with, it has its own areas of weakness/slowness, essentially making it a wash.
When I redo my system in the next few months, I plan on doing a Xen setup with GPU passthrough to Windows and then using Linux in parallel instead of hosted out of VBox.
For the Vagrant VMWare support, are you using the official (paid) Vagrant provider or something else? I've been interested in hearing about experiences of using it.
I tried running Arch Linux on both VirtualBox and VMware Player on my Windows machine at work and Compton's performance is terrible. It's strange because it runs so well on VMware Fusion on my MacBook. Can someone recommend a better way to virtualize Linux desktop with composition in Windows?
Use VMWare workstation?
Nah, VMware Player and VMware Workstation use the exact same engine.
If you are having performance problems with VMware Player and are using either AVG or Avast antivirus, then the problem might be the antivirus. See also [0]
I was very curious so I read through that thread.
One person found that turning Avast's Behavior Shield off helped.
But with AVG... wow. AVG ships with a disabled-by-default "Use HW assisted virtualization" option buried in its settings. As in, install AVG and it turns that off systemwide until you turn it back on. Yay!
VirtualBox is so far past dead to me: terrible networking issues, dreadfully poor I/O performance, and Oracle to top it off.
Switch from vb to lxc/docker and you get great GPU driver support, even CUDA.
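For the container route, a quick sanity check that the GPU is visible inside a container looks roughly like this. A sketch, assuming a recent Docker with the NVIDIA container toolkit installed (the image tag is a placeholder; older setups used the nvidia-docker wrapper instead of --gpus):

  # run nvidia-smi inside a throwaway CUDA container
  docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi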
... As long as you're running the same OS. Consider OS X and Windows hosts.
Completely different thing. Vb is VMs, lxc is containers
Really?
I was at one point trying to get GPU passthrough working with LXC and I could never get programs to actually run. Ubuntu 16.04, CUDA 7.5 or maybe 8.0, and a cheap GTX 730.
Maybe I'll have to try again. I was also trying this so that I could run my desktop environment in a container too.
There is no passthrough in LXC, because it is containerization, not virtualization. You are running the same kernel and kernel modules for both systems.
Yes, I had installed the NVIDIA drivers on both the host system and the container. The CUDA programs ran on the host, but didn't inside the container, even after creating the proper device nodes in /dev in the container.
My guess would be that the user space driver uses an additional mechanism to communicate with the kernel module. I don't think that it hauls large data buffers via device files; I would try strace to see where it fails.
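For what it's worth, the usual recipe for a privileged LXC container is to whitelist the NVIDIA character devices and bind-mount the nodes, roughly like this. A sketch only; the container name is a placeholder, the nvidia-uvm major number varies by system, and the userspace driver version inside the container has to match the host's kernel module exactly:

  cat >> /var/lib/lxc/gpu-box/config <<'EOF'
  # allow the NVIDIA character devices (195 = nvidia*; check ls -l /dev/nvidia-uvm for its major)
  lxc.cgroup.devices.allow = c 195:* rwm
  lxc.cgroup.devices.allow = c 243:* rwm
  # bind the device nodes into the container
  lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
  lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
  lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
  EOF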