Opus 1.5 released: Opus gets a machine learning upgrade

opus-codec.org

387 points by summm 2 years ago · 150 comments

yalok 2 years ago

The main limitation for such codecs is CPU/battery life - and I like how they sparsely applied ML here and there, combining it with the classic (non-ML) approach to achieve a better tradeoff of CPU vs. quality. E.g., for better low-bitrate support (LACE): "we went for a different approach: start with the tried-and-true postfilter idea and sprinkle just enough DNN magic on top of it." The key was not to feed raw audio samples to the NN: "The audio itself never goes through the DNN. The result is a small and very-low-complexity model (by DNN standards) that can run even on older phones."

Looks like the right direction for embedded algos, and it seems to be a pretty unexplored one compared to the current fashion of doing ML end-to-end.
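
To make the pattern concrete, here's a minimal conceptual sketch in C of such a DNN-assisted postfilter. This is not the actual LACE architecture - every size and name here is hypothetical - but it shows the division of labor described above: a tiny network maps decoder-side features to filter parameters, and classic DSP applies them, so the samples never pass through the network.

    /* Hypothetical sketch of a LACE-style split: the "DNN" sees only
     * features, the audio sees only cheap conventional DSP. */
    #include <math.h>

    #define NUM_FEATURES 20
    #define NUM_BANDS    8

    /* Tiny one-layer "network": decoder features in, band gains out. */
    static void predict_band_gains(const float feat[NUM_FEATURES],
                                   const float W[NUM_BANDS][NUM_FEATURES],
                                   const float bias[NUM_BANDS],
                                   float gains[NUM_BANDS])
    {
        for (int k = 0; k < NUM_BANDS; k++) {
            float acc = bias[k];
            for (int i = 0; i < NUM_FEATURES; i++)
                acc += W[k][i] * feat[i];
            gains[k] = 1.0f / (1.0f + expf(-acc)); /* sigmoid: gain in (0,1) */
        }
    }

    /* Classic postfilter: apply one predicted gain per frequency band. */
    static void apply_postfilter(float *spectrum, int n,
                                 const float gains[NUM_BANDS])
    {
        int band_size = n / NUM_BANDS;
        for (int k = 0; k < NUM_BANDS; k++)
            for (int j = 0; j < band_size; j++)
                spectrum[k * band_size + j] *= gains[k];
    }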

  • indolering 2 years ago

    It's a really smart application of ML: helping around the edges and not letting the ML algo invent phonemes or even whole words by accident. ML transcription has a similar trade-off of performing better on some benchmarks but also hallucinating results.

    • kolinko 2 years ago

      A nice story about Xerox discovering this issue in 2013, when their copiers began slightly changing random numbers in copied documents:

      https://www.theverge.com/2013/8/6/4594482/xerox-copiers-rand...

      • cedilla 2 years ago

        I don't think machine learning was involved there at all. As I understand it, it was an issue of a specifically implemented feature (reusing a single picture of a glyph for all other instances to save space) being turned on in archive-grade settings, despite the manual stating otherwise.

      • h4x0rr 2 years ago

        Interesting yet funny CCC video about this: https://youtu.be/zXXmhxbQ-hk

    • yalok 2 years ago

      fwiw, in the ASR/speech transcription world, it looks like the reverse to me - in the past, there was lots of custom non-ML code & separate ML models for acoustic modeling and language modeling - but current SOTA ASRs are all e2e, and that's what's used even in mobile applications, iiuc.

      I still think the pendulum will swing back there again, to get better battery life and allow larger models on mobile.

spacechild1 2 years ago

I'm using Opus as one of the main codecs in my peer-to-peer audio streaming library (https://git.iem.at/cm/aoo/ - still alpha), so this is very exciting news!

I'll definitely play around with these new ML features!

  • RossBencina 2 years ago

    > peer-to-peer audio streaming library

    Interesting :)

    • spacechild1 2 years ago

      Ha, now that is what I'd call a surprise! The "group" concept has obviously been influenced by oscgroups. And of course I'm using oscpack :)

      AOO has already been used successfully in a few art projects. It's also used under the hood by SonoBus. The Pd objects are already stable; I hope to finish the C/C++ API and add some code examples soon.

      Any feedback from your side would of course be very appreciated.

Dwedit 2 years ago

I just want to mention that getting such good speech quality at 9kbps by using NoLACE is absolutely insane.

  • qingcharles 2 years ago

    I was the lead dev for a major music streaming startup in 1999. I was working from home as they didn't have offices yet. My cable connection got cut, and my only remaining Internet access was 9600 bps through my Nokia 9000 serial port. I had to re-encode the whole music catalog at 8 kbps WMA so I could stream it and continue testing all the production code.

    The quality left a little to be desired...!

  • kristopolous 2 years ago

    I wanted to see what it would sound like in comparison to a really early streaming audio codec, RealAudio 1.0:

        $ ffmpeg -i female_ref.wav -acodec real_144 female_ref.ra
    
    And in case you can't play that, I converted it back to wav and posted it: http://9ol.es/female_ref-ra.wav

    This was marketed as "14.4" audio, for 14.4 kb/s dialup in the mid-90s. The quality increase over those nearly 30 years, for what is actually fewer bytes, is really impressive.

    • anthk 2 years ago

      I used to listen to avant-garde music in Opus from https://dir.xiph.org at 16 kb/s over a 2G connection, and it was usable once mplayer/mpv had cached the stream for nearly a minute.

    • recursive 2 years ago

      I don't know any of the details, but maybe the CPUs of the time would have struggled to decode the stream in real time.

rhdunn 2 years ago

I find the interplay between audio codecs, speech synthesis, and speech recognition fascinating. Advancements in one usually result in advancements in the others.

luplex 2 years ago

I wonder: did they address common ML ethics questions? Specifically: Are the ML algorithms better/worse on male than on female speech? How about different languages or dialects? Are they specifically tuned for speech at all, or do they also work well for music or birdsong?

That said, the examples are impressive and I can't wait for this level of understandability to become standard in my calls.

  • jmvalin 2 years ago

    Quoting from our paper, training was done using "205 hours of 16-kHz speech from a combination of TTS datasets including more than 900 speakers in 34 languages and dialects". Mostly tested with English, but part of the idea of releasing early (none of that is standardized) is for people to try it out and report any issues.

    There are about equal numbers of male and female speakers, though codecs always have slight perceptual quality biases (in either direction) that depend on the pitch. Oh, and everything here is speech only.

  • radarsat1 2 years ago

    This is an important question. However, I'd like to point out that similar biases can easily exist for non-ML, hand-tuned algorithms. Even in the latter case, test sets - and often even "training" and "validation" sets - are used for finding good parameters. Any of these can be a source of bias, as can the ears of the evaluators making these decisions.

    It's true that bias questions often come up in an ML context because these algorithms fundamentally do not work without data, but _all_ algorithms are designed by people, and _many_ can involve data in setting their parameters - both of which can be sources of bias. ML is more known for it, I believe, because the _inductive_ biases are weaker than in traditional algorithms, so the models are more prone to adopt biases present in the dataset.

    • MauranKilom 2 years ago

      As a notable example, the MP3 format was hand-tuned to vocals based on "Tom's Diner" (i.e. a female voice). It has been accused of being biased towards female vocals as a result.

    • thomastjeffery 2 years ago

      Usually regular algorithms aren't generating data that pretends to be raw data. That's the significant difference here.

      • shwaj 2 years ago

        Can you precisely define what you mean by "generating" and "pretends", in such a way that this neural network does both these things, but a conventional modern audio codec doesn't?

        "Pretends" is a problematic choice of words, because it anthropomorphizes the algorithm. It would be more accurate and less misleading to replace "pretends to be" with "approximates". But then it wouldn't serve your goal of (seeming to) establish a categorical difference between this approach and "regular algorithms", because that's what a regular algorithm does too.

        I apologize, because the above might sound rude. It's not intended to be.

        • thomastjeffery 2 years ago

          I was avoiding the word "approximate", because that implies a connection to the original raw data.

          A generative model guesses what data should be filled in, based on what is present in its own model. This process is totally ignorant of the original (missing) data.

          To contrast, a lossy codec works directly with the original data. It chooses what to throw out based on what the algorithm itself can best reproduce during playback. This is why you should never transcode from one lossy codec to another: the holes will no longer line up with the algorithm's hole-filling expectations.

      • samus 2 years ago

        Not really. Any lossy codec is generating data that pretends to be close to the raw data.

        • thomastjeffery 2 years ago

          Yes, but the holes were intentionally constructed such that the end result is predictable.

          There is a difference between pretending to be the original raw data, and pretending to be whatever data will most likely fit.

          • samus 2 years ago

            And that's why Packet Loss Concealment is only used to fill in the occasional lost packet. That way, the occasional vowel or the end of a syllable can be bridged over. Other improvements, which are much less in make-samples-up territory, exist to prevent packet loss in the first place.

            • thomastjeffery 2 years ago

              Yes, I agree that this use case is entirely reasonable. I also agree that it's reasonable to be concerned in the first place.

  • unixhero 2 years ago

    Why is the ethics question important? It is a new feature for an audio codec, not new material to teach in your kids' curriculum.

    • nextaccountic 2 years ago

      Because this gets deployed in the real world, affecting real people. Ethics doesn't exist only in kids' curricula.

    • unethical_ban 2 years ago

      I get your point, but the questioner wasn't being rude or angry, only curious. I think it's a valid question, too. While it isn't as important to be neutral in this instance as, say, a crime prediction model or a hiring model, it should be boilerplate to consider ML inputs for identity neutrality.

    • gcr 2 years ago

      This is a great question! Here's a related failure case that I think illustrates the issue.

      In my country, public restroom facilities replaced all the buttons and levers on faucets, towel dispensers, etc. with sensors that detect your hand under the faucet. Black people tell me they aren't able to easily use these restrooms. I was surprised when I heard this, but if you google this, it's apparently a thing.

      Why does this happen? After all, the companies that made these products aren't obviously biased against black people (outwardly, anyway). So this sort of mistake must be easy to fall into, even for smart teams in good companies.

      The answer ultimately boils down to ignorance. When we make hand detector sensors for faucets, we typically calibrate them with white people in mind. Of course different skin tones have different albedo and different reflectance properties, so sensors are less likely to fire. Some black folks have a workaround where they hold a (white) napkin in their hand to get the faucet to work.

      How do we prevent this particular case from happening in the products we build? One approach is to ensure that the development teams for skin sensors have a wide variety of skin types. If the product development team had a black guy for example, he could say "hey, this doesn't work with my skin, we need to tune the threshold." Another approach is to ensure that different skin types are reflected in the data used to fit the skin statistical models we use. Today's push for "ethics in ML" is borne out of this second path as a direct desire to avoid these sorts of problems.

      I like this handwashing example because it's immediately apparent to everyone. You don't have to "prioritize DEI programs" to understand the importance of making sure your skin detector works for all skin types. But, teams that already prioritize accessibility, user diversity, etc. are less likely to fall into these traps when conducting their ordinary business.

      For this audio codec, I could imagine that voices outside the "standard English dialect" (e.g. thick accents, different voices) might take more bytes to encode the same signal. That would raise bandwidth requirements, worsen latency, and increase data costs for these users. If the codec is designed for a standard American audience, that's less of an issue, but codecs work best when they fit reasonably well for all kinds of human physiology.

      • cma 2 years ago

        What if it is a Pareto improvement: a bigger improvement for some dialects, but no worse than the earlier version for anyone? Should it be shelved, or tuned down so that all dialects see gains by exactly equal percentages?

        • viraptor 2 years ago

          Here's a question that should have the same/similar answer: increasingly, job interviews are being handled over the internet. All other things being equal, people are likely to have a more positive response to candidates with more pleasant voices. So if new ML-enhanced codecs become more common, we may find that some group X gets a just slightly worse quality score than others. Over enough samples, that would translate to a lower interview success rate for them.

          Do you think we should keep using that codec, because overall we get a better sound quality across all groups? Do you feel the same as a member of group X?

          • shwaj 2 years ago

            I don't think it's a given that we shouldn't keep using that codec. For example, maybe the improvement is due to an open source hacker working in their spare time to make the world a better place. Do we tell them their contribution isn't welcome until it meets the community's benchmark for equity?

            Your same argument can also be used to degrade the performance for all other groups, so that group X isn't unfairly disadvantaged. Or, it can even be used to argue that the performance for other groups should be degraded to be even worse than group X, to compensate for other factors that disadvantage group X.

            This is reductio ad absurdum, but it goes to show that the issue isn't as black and white as you seem to think it is.

            • viraptor 2 years ago

              A person creating a codec doesn't choose whether it's globally adopted. System implementors (like, for example, Slack) do. You don't have to tell the open source dev anything. You don't owe them inclusion of their implementation.

              And if their contribution was to the final system, sure, it's the owner's choice what the threshold for acceptable contribution is. In the same way they can set any other benchmark.

              > Your same argument can also be used to degrade the performance for all other groups,

              The context here was Pareto improvement. You're bringing in a different situation.

              • shwaj 2 years ago

                The grandparent provided an argument why we might not want to use an algorithm, even if it provided a Pareto improvement.

                I suggested that the same argument could be used to say that we should actively degrade performance of the algorithm, in the name of equity. This is absurd, and illustrates that the GP argument is maybe not as strong as it appears.

                • viraptor 2 years ago

                  The argument doesn't make sense in practice. We could discuss it as a philosophy exercise, but realistically if the current result is better overall and biased against some group, you can just rebalance it and still get an overall better result compared to status quo.

                  Changing codecs in practice takes years/decades, so you always have time to stop, think and tweak things.

            • gcr 2 years ago

              One thing the small mom-and-pop hacker types can do is disclose where bias can enter the system or evaluate it on standard benchmarks so folks can get an idea where it works and where it fails. That was the intent behind the top-level comment asking about bias, I think.

              If improving the codec is a matter of training on dataset A vs dataset B, that’s an easier change.

        • samus 2 years ago

          I would be very surprised if there were no improvement from biasing the codec towards particular dialects or other distinctive subsets of the data. And we could certainly be fine with some kinds of bias. Speech codecs are intended to transmit human speech, after all - not that of dogs, bats, or hypothetical extraterrestrials. On the other hand, a wider dataset might reduce overfitting and force the model to learn better.

          If the codec is intended to work best for human voice in general, then it is simply not possible to define sensible subsets of the user base to optimize for. Curating an appropriate training set therefore has a technical impact on the performance of the codec. Realistically, I admit that the percentages of speech samples per language in such a dataset would be set according to the relative numbers of speakers. This is of course a very fuzzy number with many sources of systematic error (like what counts as one language, do non-native speakers count, which level of proficiency is considered relevant, etc.), and ultimately English is a bit more important since it is the de-facto international lingua franca of this era.

          In short, a good training set is important unless one opines that certain subsets of humanity will never ever use the codec, which is equivalent to being blind to the reality that more and more parts of the world are getting access to the internet.

      • TeMPOraL 2 years ago

        I get your point, but in the example used - and I can think of a couple of others that start with "X replaced all controls with touch/voice/ML" - the much bigger ethical question is why they did it in the first place. The new solution may or may not be biased differently than the old one, but it's usually inferior to the previous ones or to simpler alternatives.

    • The_Colonel 2 years ago

      Imagine you release a codec which optimizes for cis white male voice, every other kind of voice has perceptibly lower fidelity (at low bitrates). That would not go well...

      • panzi 2 years ago

        Yeah, imagine a low bitrate situation where only English speaking men can still communicate. That would create quite a power imbalance.

      • overstay8930 2 years ago

        Meanwhile G.711 makes all dudes sound like disgruntled middle aged union workers

      • numpad0 2 years ago

        No offense meant, but Codec2 seems to be affected a bit by this problem.

    • samus 2 years ago

      This is actually a very technical question, since it means the audio codec might simply not work as well in practice as it could and should.

  • kolinko 2 years ago

    As a person with a different language/accent who has to deal with this on a regular basis: having assistants like Siri not understand what I want to say, even though native speakers don't have such problems... Or, before the advent of UTF-8, websites and apps ignoring the special characters used in my language.

    I wouldn't consider this a matter of ethics so much as one of technology limitations or ignorance.

frumiousirc 2 years ago

How about adding a text "subtitle" stream to the mix? The encoder may use ML to perform speech-to-text. The decoder may then use the text, along with the audio surrounding the dropouts, to feed a conditional text-to-speech DNN. This way the network does not have to learn the harder problem of blindly interpolating across the dropouts from the audio alone. The text stream is low bitrate, so it may carry substantial redundancy in order to increase the likelihood that any given (text) message is received.

  • jmvalin 2 years ago

    Actually, what we're doing with DRED isn't that far from what you're suggesting. The difference is that we keep more information about the voice/intonation, and we don't need the latency that would otherwise be added by an ASR. In the end, the output is still synthesized from higher-level, efficiently compressed information.

travisporter 2 years ago

Very cool. Seems like they addressed the problem of hallucination. Would be interesting to see an example of it hallucinating without redundancy and corrected with redundancy.

  • CharlesW 2 years ago

    Isn't packet loss concealment (PLC) a form of hallucination? Not saying it's bad, just that it's still Making Shit Up™ in a statistically-credible way.

    • jmvalin 2 years ago

      Well, there are different ways to make things up. We decided against using a pure generative model to avoid making up phonemes or words. Instead, we predict the expected acoustic features (using a regression loss), which means the model is able to continue a vowel. If unsure, it'll just pick the "middle point", which won't be something recognizable as a new word. That's in line with how traditional PLCs work; it just sounds better. The only generative part is the vocoder that reconstructs the waveform, but it's constrained to match the predicted spectrum, so it can't hallucinate either.
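
      (For the curious, a minimal sketch of how an application drives PLC through the standard libopus decode API, error handling trimmed: passing a NULL payload asks the decoder to conceal the lost frame. As I understand it, the neural PLC slots in behind this same call when libopus is built with it enabled.)

          #include <opus.h>

          #define FRAME_SIZE 960  /* 20 ms at 48 kHz */

          /* Decode one packet, or conceal it if the payload was lost. */
          int decode_or_conceal(OpusDecoder *dec,
                                const unsigned char *payload, int len,
                                opus_int16 *pcm)
          {
              if (payload == NULL)
                  return opus_decode(dec, NULL, 0, pcm, FRAME_SIZE, 0);
              return opus_decode(dec, payload, len, pcm, FRAME_SIZE, 0);
          }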

    • derf_ 2 years ago

      The PLC intentionally fades off after around 100 ms so as not to cause misleading hallucinations. It is really just about filling small gaps.

    • skybrian 2 years ago

      In a broader context, though, this happens all the time. You’d be surprised what people mishear in noisy conditions. (Or if they’re hard of hearing.) The only thing for it is to ask them to repeat back what they heard, when it matters.

      It might be an interesting test to compare what people mishear with and without this kind of compensation.

      • jmvalin 2 years ago

        As part of the packet loss challenge, there was an ASR word accuracy evaluation to see how PLC impacted intelligibility. See https://www.microsoft.com/en-us/research/academic-program/au...

        The good news is that we were able to improve intelligibility slightly compared with filling with zeros (it's also a lot less annoying to listen to). The bad news is that you can only do so much with PLC, which is why we then pursued the Deep Redundancy (DRED) idea.
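
        (For reference, a minimal sketch of enabling the classic LBRR redundancy - the pre-DRED mechanism - via standard libopus encoder ctls; the newer DRED interface is still experimental and not shown here:)

            #include <opus.h>

            OpusEncoder *make_voip_encoder(void)
            {
                int err;
                OpusEncoder *enc = opus_encoder_create(48000, 1,
                                                       OPUS_APPLICATION_VOIP,
                                                       &err);
                if (err != OPUS_OK)
                    return NULL;
                /* Embed a low-bitrate copy of each frame in the next
                 * packet, and say how much loss to plan for. */
                opus_encoder_ctl(enc, OPUS_SET_INBAND_FEC(1));
                opus_encoder_ctl(enc, OPUS_SET_PACKET_LOSS_PERC(20));
                return enc;
            }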

      • tialaramex 2 years ago

        Right, this is why proper radio calls in a lot of systems have mandatory read-back steps, so that we're sure two humans have achieved a shared understanding regardless of how sure they are of what they heard. It not only matters whether you heard correctly, it also matters whether you understood correctly.

        e.g. train driver asks for an "Up Fast" block. His train is sat on Down Fast, the Up Fast is adjacent, so then he can walk on the (now safe) railway track and inspect his train at track level, which is exactly what he, knowing the fault he's investigating, was taught to do.

        Signaller hears "Up Fast" but thinks duh, stupid train driver forgot he's on Down Fast. He doesn't need a block, the signalling system knows the train is in the way and won't let the signaller route trains on that section. So the Up Fast line isn't made safe.

        If they leave the call here, both think they've achieved understanding but actually there is no shared understanding and that's a safety critical mistake.

        If they follow a read-back procedure they discover the mistake. "So I have my Up Fast block?" "You're stopped on Down Fast, you don't need an Up Fast block". "I know that, I need Up Fast. I want to walk along the track!" "Oh! I see now, I am filling out the paperwork for you to take Up Fast". Both humans now understand what's going on correctly.

    • a_wild_dandan 2 years ago

      To borrow from Joscha Bach: if you like the output, it's called creativity. If you don't, it's called a hallucination.

      • Aachen 2 years ago

        That sounds funny, but is it true? Certainly there's a bias that goes towards what you're quoting, but would you otherwise genuinely call the computer creative? Is that a positive aspect of a speech codec or of an information source?

        Creative is when you ask a neural net to create a poem, or something else from "scratch" (meant to be unique). Hallucination is when you didn't ask it to make its answer up but to recite or rephrase things it has directly observed

        That's my layman's understanding anyway, let me know if you agree

        • skybrian 2 years ago

          That's almost the same. You could say it's being creative by not following directions.

          Creativity isn't well-defined. If you generate things at random, they are all unique. If you then filter them to remove all the bad output, the result could be just as "creative" as anything someone could write. (In principle. In practice, it's not that easy.)

          And that's how evolution works. Many organisms have very "creative" designs. Filtering at scale, over a long enough period of time, is very powerful.

          Generative models are sort of like that in that they often use a random number generator as input, and they could generate thousands of possible outputs. So it's not clear why this couldn't be just as creative as anything else, in principle.

          The filtering step is often not that good, though. Sometimes it's done manually, and we call that cherry-picking.

      • Dylan16807 2 years ago

        Doesn't the context affect things much more than whether you like the particular results?

        Either way, "creativity" in the playback of my voice call is just as bad.

      • CharlesW 2 years ago

        I love that, what's it from? (My Google-fu failed.) Unexpected responses are often a joy when using AI in a creative context. https://www.cell.com/trends/neurosciences/abstract/S0166-223...

    • Sonic656 2 years ago

      There's something darkly funny about the idea of Opus acting psychotic because it glitched out or was fed something really complex. But you could argue that transparent lossy compression at 80-320 kbps is a controlled, deliriant-like hallucination, given how only a rare few can tell it apart from lossless.

h4x0rr 2 years ago

Does this new Opus version close the gap to xHE-AAC, which is (was?) superior at lower bitrates?

Sonic656 2 years ago

Love how Opus 1.5 is now actually transparent at 16 kbps for voice, and 96 kbps still beats 192 kbps MP3. Meanwhile, xHE-AAC still feels like it was farted out, since its 96-256 kbps range is legitimately worse than AAC-LC (Apple, FDK) at ~160 kbps.

brnt 2 years ago

What if there were a profile or setting that helped re-encode existing lossy formats without introducing too many more artifacts? Any sizeable collection runs into this issue if it doesn't have (easily accessible) lossless masters.

I'd be very interested if I could move a variety of MP3s, AACs, and Vorbis files to Opus, if I knew the additional quality loss was minimal.

cedilla 2 years ago

The quality at 80% packet loss is incredible. It's a strain to listen to, but still understandable.

nimish 2 years ago

That 90% loss demo is bonkers. Completely comprehensible after maybe a second.

out_of_protocol 2 years ago

Why the hell is Opus still not in Bluetooth? Well, I know - sweet, sweet license fees.

(aKKtually, there IS an Opus codec supported by Pixel phones - Google made it for VR/AR stuff. No one uses it; there's ~1 headphone with Opus support.)

  • lxgr 2 years ago

    As you already mention, it's already possible to use it. As for why hardware manufacturers don't actually use it, you can thank beautiful initiatives such as this: https://www.opuspool.com/ (previous HN discussion: https://news.ycombinator.com/item?id=33158475).

  • giantrobot 2 years ago

    The BT SIG moves kind of slow and there's a really long tail of devices. Until there's a chip with native Opus support (that's as cheap as ones with AAC etc) you wouldn't get Opus support even if it was in the spec.

    Realistically for most headphones people actually buy AAC (LC and HE) is more than good enough encoding quality for the audio the headphones can produce. Even if Opus was in the spec and Opus-supporting chips were common there would still be a hojillion Bluetooth devices in the wild that wouldn't support it.

    It would be cool to have Opus in A2DP but it would take a BT SIG member that was really in love with it to get it in the profile.

    • out_of_protocol 2 years ago

      They chose to make a totally new, inferior LC3 codec though.

      Also, on my system (Android phone + BTR5/BTR15 Bluetooth DAC + Sennheiser H600), all options sound really crappy compared to plain old USB, everything else being the same. LDAC 990kbps is less crappy, by sheer brute force. I suspect it's not only the codec but other co-factors as well (like mandatory DSP on the phone side).

      • maep 2 years ago

        "Inferior" is relative. The main focus of LC3 was, as the name suggests, complexity.

        This is hearsay: Bluetooth SIG considered Opus but rejected it because it was computationally too expensive. This came out of the hearing aid group, where battery life and complexity are a major restriction.

        So when you compare codecs in this space, the metric you want to look at is quality vs. CPU cycles. In that regard LC3 outperforms many contemporary codecs.

        Regarding sound quality, it's simply a matter of setting the appropriate bitrate. So if Opus is transparent at 150 kbps and LC3 at 250 kbps, that's totally acceptable if it gives you more battery life.

        • out_of_protocol 2 years ago

          Regarding complexity, do you have any hard numbers? Can't find anything more than handwaving

          • maep 2 years ago

            I remember seeing published numbers based on instrumented code, but could not find them.

            I did a quick test with the Google implementation (https://github.com/google/liblc3), which is about 2x faster than Opus. To be honest, I expected a bigger difference, though it's just a superficial test.

            A few things that also might be of relevance why they picked one over the other:

              - suitability for DSPs
              - vendor buy-in
              - robustness
              - protocol/framing constraints
              - control
            • out_of_protocol 2 years ago

              Thanks for checking it, appreciated

              - Well, 2x is nothing to write home about.

              - DSP compatibility was probably considered but never surfaced as a reason, so it's hard to guess the investigation results. Plus the pricing and availability of said DSP modules.

              - Robustness - well, that's one of the primary features of Opus, battle-tested by WebRTC, WhatsApp, etc. (including packet loss concealment (PLC) and low bit-rate redundancy (LBRR) frames)

              - Algorithmic delay for opus is low, much lower than older BT codecs, so that definitely wasn't a deal breaker

              - The ability to make money off a standard is definitely an important thing to have

              • maep 2 years ago

                If used in a small device like a hearing aid, a 2x factor can have a significant impact on battery life.

                VoIP in general experiences full packet loss, meaning if a single bit flips, the entire packet is dropped. For radio links like Bluetooth, it's possible to deal with some bit flips without throwing the entire packet away.

                Until 1.5, Opus's PLC was in my opinion its biggest weakness compared to other speech codecs like G.711 or G.722. A high compression ratio causes bit flips to be much more destructive.

                As for making money, Bluetooth codecs have no license fees.

                • derf_ 2 years ago

                  > For radio links like Bluetooth it's possible to deal with some bit flips without throwing the entire packat away.

                  Opus was intentionally designed so that the most important bits are in the front of the packet, which can be better protected by your modulation scheme (or simple FEC on the first few bits). See slide 46 of https://people.xiph.org/~tterribe/pubs/lca2009/celt.pdf#page... for some early results on the position-dependence of quality loss due to bit errors.

                  It is obviously never going to be as robust as G.711, but it is not hopeless, either.

          • giantrobot 2 years ago

            You can check out Google's version which I assume is bundled in Android: https://github.com/google/liblc3

      • giantrobot 2 years ago

        I've got AirPods and a Beats headset so they both support AAC and to my ear sound great. Keep in mind I went to a lot of concerts in my 20s without earplugs so my hearing isn't necessarily the greatest anymore.

        AFAIK Android's AAC quality isn't that great so aptX and LDAC are the only real high quality options for Android and headphones. It's a shame as a lot of streaming is actually AAC bitstreams and can be passed directly through to headphones with no intermediate lossy re-encode.

        Like I said though, to get Opus support in A2DP a BT SIG member would really have to be in love with it. Qualcomm and Sony have put forward aptX and LDAC respectively in order to get licensing money on decoders. Since no one is going to get Opus royalties there's not much incentive for anyone to push for its inclusion in A2DP.

  • dogma1138 2 years ago

    Opus isn’t patent free, and what’s worse, it’s not particularly clear who owns what. The biggest patent pool is currently OpusPool, but it’s not the only one.

    https://www.opuspool.com/

    • pgeorgi 2 years ago

      No codec (or any other technical development, really - edit: except for 20+ year old stuff, and only if you don't add any, even "obvious", improvements) is known to be patent free, or clear on "who owns what."

      Folks set up pools all the time, but somehow they never offer indemnification for completeness of the pool - because they can't.

      See https://en.kangxin.com/html/2/218/219/220/11565.html for a few examples of how the patent pool extortion scheme has already gone wrong in the past.

      • xoa 2 years ago

        FWIW, submarine patents are long dead, so it is possible to feel assured that old enough stuff is patent free. Of course that forgoes a lot of important improvements, but due to diminishing returns and the ramp of tech development, it's still ever more significant. A lot of key stuff is going to lose its monopoly lock this decade.

        • pgeorgi 2 years ago

          You're right. I could still amend the post, so I added the 20+ years caveat. Thanks!

      • dogma1138 2 years ago

        No one said that Opus is the only one suffering from licensing ambiguity, but compared to, say, AptX and its variants, which do have a clear one-stop shop for licensing (Qualcomm), it’s a much riskier venture, especially when it comes to hardware.

        • pgeorgi 2 years ago

          A drive-by patent owner can show up on anything, and if they don't want to license to you, your entire product is bust.

          Even if it's AptX and Qualcomm issues you a license in exchange for money. I wouldn't even bet on being able to claw back these license costs after being ordered to destroy your AptX-equipped product after it ran into somebody else's patent.

          The risk that this happens is _exactly_ the same for Opus or AptX.

        • tux3 2 years ago

          Making a patent troll is just a matter of putting up a press release and a web page.

          I could claim to have a long list of patents against AptX. Anyone could.

          Of course I'm not willing to disclose the list of patents at this time, but customers looking to be extorted may contact me privately.

          • rockdoe 2 years ago

            FhG and Dolby did eventually put up a list of patents you are licensing from them.

            It makes for some funny reading if you're familiar with the field. (This should not be construed as legal advice as to the validity of the pool)

          • bdowling 2 years ago

            At least in the U.S., anyone can look up all the patents a person/entity owns. So, your fraud wouldn’t get very far.

            https://assignment.uspto.gov/patent/index.html#/patent/searc...

            • pgeorgi 2 years ago

              "I represent the holders of the patents in question" is simple enough. I wonder if it's fraud, if all you're putting out is an unverifiable claim on the net. The pool operators do that all the time.

              • bdowling 2 years ago

                Someone who has no basis to bring a patent infringement claim selling a settlement of such a claim to an alleged infringer is clearly fraud.

                It’s like someone selling a deed to land they don’t own, or leasing a property they don’t own to a tenant.

                • pgeorgi 2 years ago

                  The point isn't to sell a settlement. It's to publish "oh, we're totally serious that there are patents in our control. We won't tell you which ones, we don't tell you what we want from you. If you're interested, reach out to us."

                  Few people will even bother to try, and if so, you keep them at a distance with some random bullshit (communication can break down _sooo_ easily), but it certainly poisons the well by adding a layer of FUD to the tech you're targeting with your claims.

                  Standard fare of patent pool operators, and it's high time to reciprocate.

    • rockdoe 2 years ago

      > Opus isn’t patent free

      The existence of a patent pool does not mean there are valid patent claims against it. But yes, you may be technically correct by saying "patent free" rather than "not infringing on any valid patents". That said, historically Opus has had claims against it from patents that looked valid but upon closer investigation didn't actually cover what the codec does.

      Just looks like FUD to me. Meanwhile, the patent pools of competing technologies definitely still don't offer indemnification that they cover all patents, but have no problem paying a bunch of people to spew exactly this kind of FUD - they're the ones who tried to set up this "patent pool" to begin with!

brcmthrowaway 2 years ago

This is game changing. When will H265 get a DL upgrade?

aredox 2 years ago

> That's why most codecs have packet loss concealment (PLC) that can fill in for missing packets with plausible audio that just extrapolates what was being said and avoids leaving a hole in the audio

...How far can ML PLC "hallucinate" audio? A sound, a syllable, a whole word, half a sentence?

Can I trust anymore what I hear?

  • jmvalin 2 years ago

    What the PLC does is (vaguely) equivalent to momentarily freezing the image rather than showing a blank screen when packets are lost. If you're in the middle of a vowel, it'll continue the vowel (trying to follow the right energy) for about 100 ms before fading out. It's explicitly designed not to make up anything you didn't say -- for obvious reasons.

  • samus 2 years ago

    You never can when lossy compression is involved. It is commonly considered good practice to verify that the communication partner understood what was said, e.g., by restating, summarizing, asking for clarification, follow-up questions etc.

  • xyproto 2 years ago

    It can already fill in all gaps and create all sorts of audio, but it may sound muddy and metallic. Give it a year, and then you can't trust what you hear anymore. Checking sources is a good idea in either case.

m3kw9 2 years ago

Some people are hyping it as AGI on social media.

  • samus 2 years ago

    Sadly, I see it even on forums where one might think people have a background in technology...

    • m3kw9 2 years ago

      The tech background is likely IT and not AI. They used ChatGPT and thought it was conscious.

WithinReason 2 years ago

Someone should add an ML decoder to JPEG

  • viraptor 2 years ago

    You can't do that much on the decoding side (apart from the equivalent of passing the normally decoded result through a low percent img2img ML)

    But the encoders are already there: https://medium.com/@migel_95925/supercharging-jpeg-with-mach... https://compression.ai/

    • WithinReason 2 years ago

      You can more accurately invert the quantisation step

      • viraptor 2 years ago

        You can't do it more accurately. You can make up expected details which aren't encoded in the file. But that's explicitly less accurate.
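
        A tiny self-contained illustration (values made up): JPEG stores round(coefficient / Q), so every coefficient within a Q-wide interval collapses to the same integer. Any decoder can only pick one representative per interval; choosing something "sharper" is a prior about images, not information recovered from the file.

            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                const float Q = 16.0f;  /* one quantizer step */
                const float coeffs[] = { 40.0f, 44.0f, 47.9f };
                for (int i = 0; i < 3; i++) {
                    long  q     = lroundf(coeffs[i] / Q); /* what the file stores */
                    float recon = q * Q;           /* what any decoder gets back */
                    printf("%.1f -> stored %ld -> reconstructed %.1f\n",
                           coeffs[i], q, recon);
                }
                return 0;  /* all three distinct inputs reconstruct to 48.0 */
            }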

        • adgjlsfhk1 2 years ago

          If the encoders know what model the decoders will be running, they can improve accuracy. You could pretty easily make a codec that doesn't encode high resolution detail if the decoder NN will interpolate it correctly.

          • viraptor 2 years ago

            That's changing the encoder and sure, you could do that. But that's basically a new version of the format. It's not the JPEG we're using anymore + ML in decoder. It's JPEG-ML on both the encoder and decoder side. And with the speed that we adopt new image formats... That's going to take ages :(

          • samus 2 years ago

            That makes sense if the goal is lossless compression. Since JPEG is lossy, it is sufficient to consider the Pareto front between quality, compressed size, and encoding/decoding performance.

mikae1 2 years ago

They’ll have my upvote just for writing ML instead of AI. Seriously, these are very exciting developments for audio compression.

  • wilg 2 years ago

    This is something you really shouldn’t spend any cycles worrying about.

    • sergiotapia 2 years ago

      I'd just like to interject for a moment. What you're referring to as AI, is in fact, Machine Learning, or as I've recently taken to calling it, Machine Learning plus Traditional AI methods.

      • wilg 2 years ago

        My point is very clearly that you should not spend any time or energy thinking about the terminology.

  • claudiojulio 2 years ago

    Machine Learning is Artificial Intelligence. Just look at Wikipedia: https://en.wikipedia.org/wiki/Artificial_intelligence

    • declaredapple 2 years ago

      Many people are annoyed by the recent influx of calling everything "AI".

      Machine learning, statistical models, procedural generation, literally any usage of heuristics - all are being called "AI" nowadays, which obfuscates their "boring" nature in favor of an "exciting buzzword".

      Selecting the quality of a video based on your download speed? That's "AI" now.

      • mook 2 years ago

        On the other hand, it means that you can assume anything mentioning AI is overhyped and probably isn't as great as they claim. That can be slightly useful at times.

      • sitzkrieg 2 years ago

        I'm quite tired of this. Every snake oil shop now calls any algorithm "AI" to sound hip and sophisticated.

      • mikae1 2 years ago

        > Many people are annoyed by the recent influx of calling everything "AI".

        Yes, that was the reason for my comment. :)

    • samus 2 years ago

      So are compilers and interpreters. The terminology changes, but since we still don't have a general, systematic, and precise definition of what "intelligence" means, the term AI is and always was ill-founded and a buzzword for investors. Sometimes, people get disillusioned, and that's how you get AI winters.

    • xcv123 2 years ago

      Machine Learning is a subset of AI

p1esk 2 years ago

Two unrelated “Opus” releases today, and both use ML. The other one is a new model from Anthropic.

behnamoh 2 years ago

Isn't it a strange coincidence that this shows up on HN while Claude Opus is also announced today and is on HN front page? I mean, what are the odds of seeing the word "Opus" twice in a day on one internet page?

  • mattnewton 2 years ago

    Not that strange when you consider what “opus” means: a product of work, with the connotation of being large and artistically important. It’s Latin, so its phonemes are friendly to speakers of Romance languages, and it sounds very scientific and important to English-speaking ears. Basically the most generic name you can give your fine “work” in the western world.

    • behnamoh 2 years ago

      Thanks for the definition. I like the word! I just haven't come across it in a long time, and seeing it twice on the HN frontpage is bizarre!

      • stevage 2 years ago

        It's funny, I was expecting the article to be about the Opus music font and was trying to figure out how ML could be involved.

  • declaredapple 2 years ago

    Well it was released today

    Very likely a coincidence.

    https://opus-codec.org/release/stable/2024/03/04/libopus-1_5...
