PulseAudio has been removed from dports
lists.dragonflybsd.org
2000: Audio on Linux sucks. If you are lucky you can have two applications generating sound at the same time. You can't hotplug audio devices. You can't have per-application volume. You can't move audio streams to different output devices.
2016: Pulseaudio sucks because.. reasons.
It's an easy target. It sucks far less than all the alternatives, but it's an easy punching bag for people who want something to hate.
I lived through the days of horrible sound support on Linux - I think it was my main desktop until my first Mac around 2007. Playing video or games was a crapshoot. Hot-plugging audio wasn't even in the realm of possibility.
Pretty much. You can even send the audio from different applications to different audio devices on the fly, which is something that Windows 7 can't do by itself (you have to change the default audio output before starting each application).
Pulseaudio was forced down people's throats too early by making it the default in Ubuntu and Fedora. People are still bitter about this. That's your "reasons". If these people were still using some godawful combination of ALSA, eSound, OSS and jackd to do things sound-related nowadays they would be complaining just as loudly about that audio stack.
I have a love-hate relationship with PulseAudio, and it seems others do too.
It makes sense to have some user-space audio layer. Software mixing, sample rate conversion, and a few other things just make sense in user space, and we can expose only bona fide hardware capabilities through the device interface. Of course, most desktop audio chipsets have hardware mixing, which means that most people think that you can just use OSS (or ALSA on Linux), and it will work. It just doesn't work on everybody's hardware.
In short, there's a hardware jungle out there and software layers like PulseAudio smooth over the differences. It's a shame that it doesn't always work well, and most people think PulseAudio is the source of the problem because most people have hardware mixers.
By comparison: if you just use OSS (or ALSA on Linux) on my computer, you'll find that incredibly basic features like volume control or playing audio from multiple apps don't even work without PulseAudio.
PA is pretty painful, but credit where it's due: now that Linux's OSS support is dead and buried, it's pretty much the only way to do cross-platform sound on the unixes. Even if it is a baroque, overly complex mess (it is), it has helped to increase portability across systems.
Not that it matters, as Wayland and systemd slowly eradicate all cross-compatibility in software complex enough to care.
Which is kind of ironic, when you think about it...
Vast amount of revisionism in this thread.
Lots of comments saying ALSA doesn't do mixing. This is false. ALSA has always supported mixing through dmix, and it's enabled by default.
Lots of comments saying ALSA doesn't do hotplugging. This is also false - hotplugging works like any other hardware on Linux. Plug it in and it shows up. To configure that card as the default requires changing a configuration file. You can make this happen automatically with udev.
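That udev automation can be sketched as a one-line rule (the helper script path here is hypothetical; you'd supply your own script that rewrites the ALSA default):

```
# /etc/udev/rules.d/99-usb-audio.rules (sketch)
# When a USB sound card appears, run a helper that makes it the
# ALSA default by rewriting /etc/asound.conf.
ACTION=="add", SUBSYSTEM=="sound", KERNEL=="card*", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/set-default-card.sh %k"
```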
Lots of comments saying ALSA doesn't work with bluetooth headsets. Google "bluez-alsa" folks. FFS.
Now, that's not to say that all of this worked seamlessly. Configuring everything through /etc/asound.conf or ~/.asoundrc was a pain, in large part because there were no GUI tools to do so. And because applications read .asoundrc on startup, there was no way of switching a playing stream to a different card, live. THAT is the use case that a userspace daemon solves.
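For reference, a minimal example of the kind of thing that had to be hand-written (the card index is hypothetical; `aplay -l` shows the real ones):

```
# ~/.asoundrc -- make card 1 (say, a USB headset) the default.
# Playback then goes through that card's dmix device on setups
# where software mixing is needed.
defaults.pcm.card 1
defaults.ctl.card 1
```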
The upsetting thing is we already had two userspace daemons, aRts and ESD [KDE and GNOME respectively], that more or less worked fine. Instead we suffered with years of broken audio.
Understaffed BSD fork can't maintain notoriously complicated and finicky user-space audio daemon, decides to drop it instead. News at 11.
Honestly, I'm surprised DfBSD even supported PulseAudio (or vice-versa) in the first place.
If it works on FreeBSD it will probably work on DragonFly. They're still pretty close in terms of userland, and (I don't know this for certain) they probably take many FreeBSD changes and vice versa.
Look at sndio if you want to see how sound should be done. Excusing the horrible mess pulseaudio is with the horrible state of Linux Audio doesn't cut it, when OpenBSD has been offering a superior and simpler alternative for years!
It doesn't reinvent the wheel, and for anything more complex you can always use JACK, e.g. if you really want to go low-latency.
> Excusing the horrible mess pulseaudio is with the horrible state of Linux Audio doesn't cut it,
The problem is that most people complaining about Pulseaudio don't offer realistic alternatives. Saying Pulseaudio sucks and that everyone should use ALSA is a joke. Pulseaudio does a plethora of things that ALSA does not handle. Not to mention that ALSA is a low-level system, and PA actually sits on top of ALSA.
In all of the vitriol that people spew about Pulseaudio, this is the first time that I've seen anyone point to sndio. I think that says something about the "pulseaudio complainers" crowd.
That said, does sndio provide the following features?
- Support for Bluetooth audio devices
- Support for streaming audio over a network
- Support for user-land mixing of audio sources (i.e. no root needed)
- Mixing of multiple audio streams at the same time (e.g. can your system play an alert sound without interrupting your music?)
- Per-application volume settings
- Per-application input/output source settings
I realize I'm replying to a two-week-old comment, so nobody will ever read this, but from your list sndio supports: streaming audio over a network, user-land mixing of audio sources, mixing of multiple audio streams at the same time, and per-application volume settings.
> Per application input/output source settings
No, but the input/output device is selectable per application via the AUDIODEVICE environment variable.
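For example (device names assumed; sndio addresses follow the type[@host]/unit scheme):

```
# Send one program to the second local sndio device:
AUDIODEVICE=snd/1 mpv music.ogg

# Or to a sndiod listening on another machine:
AUDIODEVICE=snd@192.168.1.10/0 mpv music.ogg
```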
> bluetooth audio devices
OpenBSD has no bluetooth support, so no. I'm also wondering why the kernel wouldn't create audio devices from these that the userland daemon can then just transparently use? Does an audio daemon need special support for bluetooth audio devices?
The sndio daemon has more features. Give the man page a read if you're interested: http://man.openbsd.org/OpenBSD-current/man8/sndiod.8
Honest question: why would one need another layer like PA on Linux if ALSA does mixing just fine? I've heard PA is useful for Bluetooth devices. Any reason why it's needed besides that?
Removal is the best fix, always. Who needs features anyway.
Pulseaudio did not work on DragonflyBSD. At all. Hence the removal. A feature that doesn't work isn't really a feature, is it?
>Who needs features anyway.
Can you name a single feature or use case addressed by PulseAudio that isn't addressed by ALSA?
Unplugging my headphones and letting other people hear what I'm listening to? I mean, with ALSA I can shut down whatever program I'm using, switch from headphones to speakers, then start the program up again, but that's a bit cumbersome. Maybe it is possible with ALSA somehow, but not with the programs and distros I'm familiar with. When PulseAudio came out, Linux suddenly went from being less convenient than Windows for switching between headphones and speakers to distinctly more convenient.
I never had a laptop where plugging in headphones didn't disable the speaker (and nothing I could do about that in software).
If that really is the case you could use dmix and have audio played on both the headphone and speaker output. You would still have to mute the speakers when plugging in the headphones though.
> If that really is the case you could use dmix and have audio played on both the headphone and speaker output. You would still have to mute the speakers when plugging in the headphones though.
You're stating that Pulseaudio isn't necessary because someone could do the same thing with a combination of dmix and manual intervention? Have you considered the possibility that someone might like the convenience of not needing to do manual intervention?
Huh? I never stated that Pulseaudio isn't necessary.
I just pointed out a possible solution/workaround with ALSA and dmix for redirecting the audio output to two devices/pcm outputs.
After years of using just ALSA and messing around with its config on different computers I am now using Pulseaudio too. Once it's running it's a lot easier to use and more flexible, especially when it comes to hot-plugging and switching between inputs and outputs.
I assume they're either using USB headphones or some other case of having multiple audio output devices, like my desktop with onboard sound and a separate PCI sound card.
That does not apply to non-3.5mm headphones, and even 3.5mm jacks are software-driven now in many notebooks.
PulseAudio features I have used in the past that I don't think ALSA has:
* playing audio from/to another Bluetooth device (with bluez)
* playing audio on a PulseAudio server running somewhere on the network
* transparent encoding of audio output as DTS (for surround support over SPDIF)
I also seem to remember plain ALSA having trouble with multiple applications playing sound at the same time, which is just pathetic.
I'm sure some of these could in theory be built on top of ALSA, but fact is that today's Linux distributions use PulseAudio, and for me it always worked well.
I'm not sure about ALSA's capabilities in this area, but one thing PulseAudio does well is moving playing audio streams between sound devices. (For example, moving it from my monitor's speakers to my USB headphones.) Being able to control mixers on a per-application basis is also quite nice.
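With stock `pactl` that stream move looks something like this (the stream index and sink name are made up; the list command shows the real ones on your system):

```
# List playing streams with their indexes:
pactl list short sink-inputs

# Move stream #42 to the USB headset's sink:
pactl move-sink-input 42 alsa_output.usb-Headset-00.analog-stereo
```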
How do I make my bluetooth headset (which supports both A2DP and Handset profile) work with ALSA so it switches to headset profile when mic is active, reverts to A2DP when it's not and switches back to laptop audio when BT is off?
How's the situation for hotplugging sound devices in ALSA nowadays? The top Google results for "alsa hotplug" all point to editing configuration files or udev rules or shell scripts.
On the flip side, if everything can be done by ALSA, and PA is seemingly objectively worse, for what reason are people building their software against PA? Why is it being included anywhere at all?
Except ALSA cannot do everything that PA does. ALSA is low-level Linux sound API that PA builds on. Read the post above, it explains the difference.
Basically, if you want multiple sounds playing at the same time (e.g. music + desktop notifications) and you don't have a hardware mixer (many common sound chips don't, and rely on Windows drivers/sound systems to provide the functionality), you are out of luck (no, the ancient dmix plugin is not a solution!). Bluetooth audio (headsets) doesn't really work without PA.
Configuring apps to use different devices (music should go to speakers and video conferencing to a headset ...) is a pain without PA - most applications don't let you select the input/output devices.
And many other things. ALSA is good, but without PA the sound support on a modern Linux desktop would be stuck right in the late 90s. Is it necessary? Not strictly, but it is one heck of a convenience that most don't even realize they have.
Here is a good post from 2008 explaining many of the issues PulseAudio solves and addressing some of the old FUD:
http://0pointer.de/blog/projects/jeffrey-stedfast.html
Unfortunately, there are tons of people with strong opinions about both PA and systemd but very little actual knowledge about what these components do and what issues they address in a modern Linux/Unix system. But that doesn't prevent them from spreading BS FUD and conspiracies about this or that group trying to dominate the market or take over the competing Linux distros.
>no, the ancient dmix plugin is not a solution!
Given that that's exactly what it's for, why isn't it?
Because many devs are on Linux distros that include PulseAudio by default. Support for *BSD ports is often a bit weak for software like that.
The power of defaults.
Why write for ALSA when the distro ships with PA, and so all of its configuration tools are PA-focused?
Also pulseaudio allows them to sorta support OSS and whatever else pulseaudio will output to for free.
Basically it's a RedHat conspiracy to make open-source programs not work on non-Linux.
I don't see how building your program against ALSA (Advanced Linux Sound Architecture) would make it more compatible with non-Linux OSs.
Maybe @lmm wishes that Red Hat had thrown support behind OSS4? Is that supported under the *BSDs?
We (dragonfly users) want pulseaudio, but no one wants it enough to debug and fix it. It's a small community removing broken stuff, it would be brought back if someone fixed it.
This is a dumb comment, akin to predicting the iPhone would fail because it didn't have Flash support.
Pulseaudio is a very finicky troublesome bit of open source software. The alternative (ALSA) is much more friendly to work with. Any seasoned sysadmin probably welcomes the death of Pulseaudio.
Just so you and others know, the L in ALSA stands for Linux. It doesn't run on DragonFlyBSD. I think DragonFlyBSD uses a re-implementation of the OSS API that they inherited from FreeBSD. The other BSDs use /dev/audio, which was originally a Sun API.
Let's be fair: PulseAudio is the way to go if you have to have a partitioned audio system because you are using a multiheaded multiuser system where each seat has a monitor and keyboard and mouse and mic and speakers, and each user needs to be able to control their own volume mixing without root privileges.
For every other case I've seen, ALSA is better and causes fewer problems.
I have already encountered the need for a multiheaded system. I wanted to run a second X server on my third screen in order to have a separate mouse. I never found a website explaining this kind of setup, and I never succeeded: the mice were active on both X servers.
You need to have a separate xorg.conf for your separate X instance. (Edit) and you need to hardcode each device, not allow autodiscovery.
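A sketch of the relevant xorg.conf pieces for the second server, assuming a hypothetical input device path (an untested outline, not a complete config):

```
# Disable input hotplug so this server only grabs devices it is
# explicitly given, instead of every mouse on the system.
Section "ServerFlags"
    Option "AutoAddDevices" "false"
EndSection

Section "InputDevice"
    Identifier "SecondMouse"
    Driver     "evdev"
    Option     "Device" "/dev/input/by-id/usb-Example_Mouse-event-mouse"
EndSection

Section "ServerLayout"
    Identifier  "SecondSeat"
    InputDevice "SecondMouse" "CorePointer"
EndSection
```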
ALSA is not an alternative to Pulseaudio.
If your audio chipset doesn't have a hardware mixer, and about half of laptop chipsets don't, then you can't play more than one stream at a time. If you have an array microphone, the only way it'll work at all is with Pulseaudio. If you want Bluetooth audio, only Pulseaudio bothers to support it.
Pulseaudio uses ALSA anyway (and on DFBSD it uses OSS instead); it's not a replacement, it is the only Unix sound server other than Apple's (and maybe OpenBSD's) which actually works. ALSA is an audio device driver ABI; it can only expose what your chipset supports in hardware.
The problem of Pulseaudio spinning, at least when it used to happen on Linux, is due to issues in the individual audio device drivers. The reason this is a big problem is that Pulseaudio needs to run at a low `nice` value in the scheduler, or you will get buffer underruns (which sound like pops and clicks). Unfortunately this means that if Pulseaudio goes into a hot loop, it will consume all of the available CPU resources until the loop breaks. This problem can be mitigated by running pulseaudio with a control group to limit CPU time (as is done with systemd on many Linux distros). Unfortunately, DragonFlyBSD doesn't have the same thing configured, assuming their kernel supports it, so you get 100% CPU spinning.
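On a systemd-based distro that mitigation can be a one-liner (the 20% figure is arbitrary; pick a quota that still leaves headroom for normal playback):

```
# Cap the per-user pulseaudio service at 20% of one CPU:
systemctl --user set-property pulseaudio.service CPUQuota=20%
```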
Furthermore, DragonFlyBSD doesn't have ALSA; they have OSS.
> If your audio chipset doesn't have a hardware mixer, and about of half of laptop chipsets don't, then you can't play more than one stream at a time.
Not true. ALSA with dmix can do software mixing, and dmix is enabled by default.
I also really miss ESD, which also enabled software mixing (this was before dmix) but was far simpler and wasn't this over-complicated mess like PulseAudio.
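This is easy to check on a dmix-backed default device (file names are placeholders):

```
# Start two players at once; with software mixing both are audible:
aplay one.wav & aplay two.wav
```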
dmix is terribly broken.
JACK is a (better) alternative to PulseAudio, works in BSD, works with ALSA and OSS, and is superior in every way.
The reader is encouraged to download a KXStudio LiveUSB image, and see all the cool things JACK allows that PulseAudio cannot/will not catch up to.
JACK doesn't have Bluetooth support, nor array microphone support. It is also terrible on battery, and doesn't perform correctly on a non-realtime kernel.
That's not to say JACK isn't useful; but it isn't a solution to the desktop audio problem, it's a solution to the professional audio problem.
JACK also doesn't have the semantics and auto-configuration code which makes Pulseaudio work out of the box, and it is considerably more difficult to make JACK control clients. These are more reasons why even though JACK existed before Pulseaudio, it was not integrated with common user applications.
> JACK is a (better) alternative to PulseAudio
For some things, but not for others. They are really not even competing to be in the same space. Such blanket statements do not help.
I've played with JACK a bit, and I've always found it pretty obtuse, but that was a while ago.
The obvious question, though, is: is there a emulation layer for JACK that supports the PulseAudio API?
There is some limited interoperability by nesting PA within JACK: http://www.jackaudio.org/faq/pulseaudio_and_jack.html
JACK existed pre-Pulseaudio and it wasn't used by very much software. Most things only supported ALSA / OSS / eSound / whatever KDE's sound system was.
I wouldn't really consider ALSA as an alternative to PA, considering that PA generally sits on top of ALSA instead of replacing it. aRts and ESD of the olden days would be more comparable.
PulseAudio is not a replacement for ALSA. At the moment ALSA is the only kernel level API for talking with the hardware drivers that's fully supported by upstream (= the kernel devs).
You _can_ install OSS4 if you want to, at the expense of losing proper power management support; not a big issue on desktops, but it can reduce a laptop's battery runtime significantly.
Either way, PulseAudio does not know how to talk to audio hardware by itself (well, it sort of does as far as Bluetooth-attached devices are concerned, but the whole Linux Bluetooth stack "BlueZ" is quite finicky on its own). It needs, just like JACK, esd, aRts, Xaudio* and all the other sound servers out there (* = dead and buried), some way to talk to the hardware. And that, at the moment, usually is ALSA.
When ALSA was first developed, the original plan was to implement all the must-haves (mixing, resampling) through the userspace module support. Unfortunately it ended up an unholy mess and never worked satisfactorily. To anyone who's spent a considerable amount of time developing audio software that's, in hindsight, not a big surprise. Audio imposes very hard and tight deadlines. The way ALSA dmix implements mixing through IPC mechanisms is extremely prone to buffer underruns, because some participating process may not get enough CPU cycles in time to write out the next chunk of audio.
PA does plaster over the biggest cracks in the foundations of the Linux low-level audio infrastructure. And given the circumstances it does an admirable job there, when it doesn't break things even more. But it's only treating the symptoms, not curing the underlying issues.
Here's the laundry list of what's required to fix the major bumps in the road (which also PA has to navigate):
- major simplification and cleanup of the kernel side driver model. Too many things are left optional for the drivers to implement
- tight requirements on how kernel drivers must behave (there's a lot of variation in the runtime behavior of certain drivers: some return immediately from read/write functions, others have significant delay; reported timestamps may increment coarsely on a per-buffer-length basis, while other drivers deliver smooth playhead/recordhead position updates)
- mixing / fanning _must_ happen in or close to the kernel (the reason for that is explained in the next point). The biggest issue here is that resampling may be required. As long as it's upsampling it's rather unproblematic (the only DSP challenge is not creating artifact frequencies). Downsampling is a much harder problem, because it involves lowpass filtering, i.e. discarding information, and the challenge is to affect only the content above the new Nyquist frequency.
- the audio infrastructure must be able to reschedule processes based on the deadlines and the time left until new samples are required. Audio is probably the most demanding of the soft realtime applications. A single missed sample is easily noticeable even to the most untrained pair of ears: if your buffers underrun you get pops or crackles. However, on an interactively used system, acceptable audio timing mismatches are on the order of 5 ms before even an untrained brain picks them up. Raising audio applications to realtime priority alleviates the issues, but the better solution is to keep statistics on the audio frame lengths read/written by a process and on how much time the process takes to prepare the next frame, and use those to augment the scheduling. However, this raises the issue of being able to circumvent priority privileges through the audio system; a mitigation would be to quantize frame sizes to a minimum frame size that corresponds to the amount of CPU time the process would be allotted in regular scheduling (that's a tough one and I've been struggling with it for some time).
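To put numbers on how tight those deadlines are, here's a quick back-of-the-envelope sketch (the buffer sizes are just typical examples):

```python
# Time budget a client has to refill an audio buffer before it
# underruns: deadline = frames_per_buffer / sample_rate.

def refill_deadline_ms(frames: int, rate_hz: int) -> float:
    """Milliseconds until a buffer of `frames` frames drains at `rate_hz`."""
    return frames / rate_hz * 1000.0

# A 256-frame buffer at 48 kHz leaves barely over 5 ms -- right at the
# threshold the comment above cites as humanly noticeable.
print(round(refill_deadline_ms(256, 48000), 2))   # 5.33
print(round(refill_deadline_ms(1024, 44100), 2))  # 23.22
```

Miss that window once and you get an audible click, which is why the scheduler has to know about these deadlines rather than treating the audio daemon as just another process.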
When it comes to OSes with a monolithic kernel, a very good question is whether the last two parts absolutely have to happen in the kernel, or whether it's possible to have them in userspace. If we want to implement it in userspace then, because of the rescheduling issues, this will require the addition of userspace-based rescheduling capabilities and the syscalls required for them.
Of course, with a proper audio infrastructure present through standard kernel and/or driver interfaces, the need for a system like PulseAudio vanishes, at least as far as mixing and resampling are concerned. It's still desirable to have an audio manager process that knows how to decide which device certain audio should play on (telephone calls go to the headset, music goes to the speakers, and so on).