Nyquist Frequency

en.wikipedia.org

81 points by 1_over_n 3 years ago · 85 comments

kimburgess 3 years ago

Had a great encounter with this recently!

In an environment I work in, multichannel audio recordings are archived. The archival recordings all had a perfect 4kHz tone appearing, seemingly out of nowhere. This was happening on every channel, across every room, but only in one building. Nowhere else. Absolutely nothing of the sort showed up on live monitoring. The systems were all the same, and yet this behaviour was consistent across all systems at only one location.

The full system was reviewed: processing, recording, signal distribution, audio capture, and the rooms themselves. Maybe a test generator had accidentally been deployed? Nope. Some odd bug in an echo canceller? Also no. Something weird with interference from lighting or power? Slim chance, but also no. Complete mystery.

When looking for acoustic sources, there was an odd little blip on the RTA at 20kHz. This was traced back to a test tone emitted by the fire safety system (an ultrasonic signal for continuous monitoring). It's inaudible to most people and will be filtered before any voice-to-text processing, so no reason for concern. But 20kHz is nowhere near 4kHz, so the search continued.

The dissimilarity of 20kHz and 4kHz holds true, until you consider what happens in a signal that isn't bandwidth limited. The initial capture was taking place at a 48kHz sampling rate. It turns out the archival was downsampling to 24kHz, without applying an anti-aliasing filter. Without filtering, any frequency content above the Nyquist frequency 'folds' back into the reproducible range. So in this case a clean 24kHz-bandwidth signal with a little bit of inaudible ultrasonic background noise was being folded about 12kHz, turning the 20kHz tone into a very audible 4kHz one.
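
The fold is easy to reproduce in a few lines of numpy (hypothetical numbers matching the story: a 20kHz tone captured at 48kHz, then naively decimated by 2 with no anti-aliasing filter):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs  # one second of capture
tone = np.sin(2 * np.pi * 20_000 * t)  # inaudible 20 kHz monitoring tone

# Naive downsampling: keep every 2nd sample, no anti-aliasing filter.
decimated = tone[::2]
fs2 = fs // 2  # 24 kHz archival rate, Nyquist at 12 kHz

spectrum = np.abs(np.fft.rfft(decimated))
peak_hz = np.argmax(spectrum) * fs2 / len(decimated)
print(peak_hz)  # 4000.0 -- the 20 kHz tone folds about 12 kHz into an audible 4 kHz
```

With a proper lowpass applied before decimation, the 20 kHz content would simply be removed instead of folded.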

It was essentially a capture the flag for signals nerds and a whole lot of fun to trace.

  • spacechild1 3 years ago

    > It turns out the archival was downsampling to 24kHz

    But... why?

    • InitialLastName 3 years ago

      In situations where you don't need the archival to be at "perfect reproduction" quality (including things like broadcast archives or recordings of voice comms), you can get by with a 12kHz maximum frequency without losing the essentials (especially clarity of voices). Many adults can't hear much past 12kHz anyway, and most music and voice content doesn't have much past 10kHz. You don't lose much, but you save half your file size with 2x downsampling.

      • Sesse__ 3 years ago

        I'd guess the “why” was “why on earth did they not have an antialiasing filter”, not “why did they downsample”. A good lowpass filter is easy to design, cheap to apply, and protects you from this kind of stuff.

        • InitialLastName 3 years ago

          I was working off the quote, but I can see some reasons that someone would decide not to AA filter. Depending on the context it might be reasonable to assume that the signal is band-limited anyway (talk-oriented radio especially is often low-pass filtered) and it's easy to miss that some point in the system can introduce an (inaudible to most humans) artifact. Those assumptions, along with the desire to avoid complexity (every step in the signal path is an opportunity for failure) could easily tip you to "just downsample".

          I'd also emphasize how little most of the people involved in these systems care about the quality of the archive. If it's good enough to a) confirm there was signal on the channel and b) understand the voices involved, it's good enough to not worry about further.

          • kimburgess 3 years ago

            > I'd also emphasize how little most of the people involved in these systems care about the quality of the archive. If it's good enough to a) confirm there was signal on the channel and b) understand the voices involved, it's good enough to not worry about further.

            This is uncomfortably accurate. I work on the capture side of these systems, and people in that space care deeply about the integrity of the signal but have little concern for what it contains. Archival is the inverse: the information content of the signal is what's important, not the signal itself.

EarthIsHome 3 years ago

One misconception that many make regarding the Nyquist frequency is thinking that the sampling rate needs to be twice the highest frequency.

Your sampling should really really be twice the bandwidth.

e.g. your bandwidth is 100 MHz centered at 1 GHz (it needs to actually be bandlimited to 100 MHz**). You do not need to sample at 2.2 GHz. You sample at 200 MSPS (really, you should sample a little more than that, say 210 MSPS, so that the bandwidth of interest doesn't butt up against the Nyquist zone edges.)
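
A quick numpy sketch of that (idealised sampler and a hypothetical tone; no claim about real ADC front-ends):

```python
import numpy as np

fs = 210e6    # 210 MSPS ADC
f_rf = 980e6  # a tone somewhere in the 950-1050 MHz band

# Sample the RF tone directly -- no mixer (ideal sampler, hypothetical numbers).
n = np.arange(3 * 2**15)  # length divisible by 3 so the aliased tone is bin-centred
x = np.sin(2 * np.pi * f_rf * n / fs)

spec = np.abs(np.fft.rfft(x))
f_alias = np.argmax(spec) * fs / len(n)
print(f_alias / 1e6)  # 70.0 -- 980 MHz lands at 70 MHz in the first Nyquist zone
```

The tone is recovered unambiguously only because the input is assumed band-limited to one Nyquist zone; anything outside 945-1050 MHz would fold on top of it.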

  • cushychicken 3 years ago

    The folks who are telling you you’re wrong don’t understand Nyquist’s criterion very well. Curse those undergrad courses for only effectively teaching about Nyquist at baseband frequencies.

    You can sample 100MHz of bandwidth at 1GHz just as you describe at 210MSPS. You’ll get everything in the 950-1050MHz band.

    Trouble is, without an antialiasing filter, you’ll get every other band that’s a multiple of that sampling rate. The Nyquist criterion works at every multiple of the sampling frequency.

    Bandpass filter your analog input appropriately from 950-1050MHz and you’re golden.

    This is the way nearly every commodity Wi-Fi chip downsamples 2.4/5GHz raw RF. Sigma-delta ADCs are cheap, fast, and space efficient for die area using this method.

    • femto 3 years ago

      The most fiendish application of this effect that I've seen is polyphase filtering. I can't remember the details, but I do remember the wonder of understanding (in a lecture by fred harris) how most of the logic ran at a low sampling rate even though the input was at a high rate. The mixing was done by aliasing.

      Details here:

      https://www.dsprelated.com/thread/7758/understanding-the-con...

      https://s3.amazonaws.com/embeddedrelated/user/124841/fbmc_bo...

      https://s3.amazonaws.com/embeddedrelated/user/124841/fbmc_ch...

      • Sesse__ 3 years ago

        Polyphase filtering is less crazy than it initially sounds. Conceptually, you can think of it as: I have this signal at frequency f. I want to resample it to frequency (b/a)*f, where a and b are integers. (You can also do polyphase filtering to resample at non-rational or varying ratios, by essentially approximating towards a rational, but let's ignore that for the moment.) a and b can be pretty large if you want, e.g. a=160, b=147 will downsample from 48 kHz to 44100 Hz.

        So what you do to resample a signal (again conceptually) is: 1. Insert <a>-1 zeros between every pair of input samples, i.e. upsample by <a> (which repeats the spectrum <a> times), 2. Apply a suitable (long!) FIR lowpass filter so that the signal is bandlimited, 3. Take every <b>-th sample (which doesn't cause any aliasing due to #2).

        Now the core of the polyphase filtering idea: We don't need to actually calculate the FIR filter for the samples we don't want in #3. And most of the input values to the filter will be zero due to #1. So instead of storing all the zeros and stuff, we simply pick out every <a>-th tap of the FIR filter and use that on the input signal directly. But since a and b don't line up perfectly, this means we get a different subset of the FIR filter for every output sample; we have a time-varying filter (or a filterbank, if you want). You get <b> different such filters before you're back where you started.
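
        A tiny, unoptimised sketch of that idea (hypothetical helper names), with the conceptual zero-stuffing version alongside to check against:

```python
import numpy as np

def resample_poly_naive(x, a, b, taps):
    """Conceptual version: stuff a-1 zeros, FIR filter, keep every b-th sample."""
    up = np.zeros(len(x) * a)
    up[::a] = x                                 # upsample by a (repeats the spectrum)
    filtered = np.convolve(up, taps)[:len(up)]  # the (long) lowpass FIR
    return filtered[::b]                        # decimate by b

def resample_polyphase(x, a, b, taps):
    """Polyphase version: same output, but never touches the stuffed zeros."""
    out = []
    for m in range((len(x) * a) // b):
        phase = (m * b) % a    # which of the a sub-filters this output sample uses
        offset = (m * b) // a  # alignment into the input signal
        acc = 0.0
        for j, k in enumerate(range(phase, len(taps), a)):
            i = offset - j     # input sample paired with tap k
            if 0 <= i < len(x):
                acc += taps[k] * x[i]
        out.append(acc)
    return np.array(out)
```

        The two give identical outputs; the polyphase version just picks every <a>-th tap starting at a per-output phase, which is exactly the time-varying filterbank described above.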

      • cushychicken 3 years ago

        Implemented a polyphase filter in Verilog once. I learned the hard way that it's easy to mix unwanted stuff into your polyphase chain if you're not careful with your implementation.

  • kayson 3 years ago

    I know what you're getting at, but your statement, as others have pointed out, is incorrect. Your sampling rate always always has to be twice the highest frequency of the signal you are sampling.

    If you are sampling an RF-modulated signal with a center frequency of 1GHz and 100MHz of baseband bandwidth, then yes, you do need to sample at 2.2GHz+. And some applications do exactly that.

    If you're taking the RF signal, mixing it down to baseband, and filtering it to bandlimit, then you have a signal with maximum frequency component of 100MHz, and in that case, yes, your sampling rate can be 200MHz+

    • abstrakraft 3 years ago

      From an information theoretic perspective (which is the perspective Nyquist was originally coming from, though it didn't yet have that name), you don't need to mix the signal down. Assuming it is truly band-limited, you can sample the signal directly at RF, and reproduce it from those samples. Additionally, you will need to modulate the reproduced signal into the original band, which means you need to know where that band is - perhaps this is the detail you're pointing out?

      Another way of looking at it is that sampling inherently does the mixing down to baseband. Although it may not be exactly the baseband you want if the spectrum isn't cleanly symmetric about a multiple of the sample frequency.

      • Sesse__ 3 years ago

        I've worked on ultrasound systems that definitely worked this way, not just in theory but also in practice. Bandpass filter 20–40 kHz, sample directly at 40 kHz (giving 20 kHz bandwidth). No mixer step involved, but your spectrum becomes inverted (e.g. if you do an FFT, a 22 kHz tone will be in the 18 kHz bin, not the 2 kHz bin as you would perhaps expect).
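
        That spectral inversion takes a few lines of numpy to see (hypothetical 22 kHz tone, ideal 40 kHz sampler):

```python
import numpy as np

fs = 40_000
n = np.arange(fs)  # one second of samples
x = np.cos(2 * np.pi * 22_000 * n / fs)  # 22 kHz tone, bandpass-sampled at 40 kHz

spec = np.abs(np.fft.rfft(x))
print(np.argmax(spec))  # 18000 -- the 22 kHz tone shows up in the 18 kHz bin
```

        The 20-40 kHz band folds down mirrored about 20 kHz, so frequencies near the top of the band land near the bottom of the FFT.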

        • abstrakraft 3 years ago

          Aliasing makes more sense (to me, anyway) if you think about the spectrum of complex signals, in which signals of real samples are modeled as the sum of positive and negative frequencies.

          In the sampling operation, all sinusoids are shifted down to the "natural baseband" by adding or subtracting some multiple of the sampling frequency that places the resulting frequency within +/- half of the sampling frequency. So for your example of 22kHz, that real frequency has two components: +22kHz that gets shifted down to -18kHz=22kHz-40kHz, and -22kHz that gets shifted up to +18kHz=-22kHz+40kHz.

          Note that this "natural baseband" is an abstraction of our own invention. You can just as easily think of the spectrum as ranging from 0Hz to the sampling frequency f_s, rather than -f_s/2 to f_s/2. The fact that some prefer one over the other is precisely why fftshift exists.

      • kayson 3 years ago

        To clarify: "band-limited" usually means X(w) = 0 for abs(w) > B for some B, where X is the frequency spectrum. And that's the definition Shannon used in the original proof, which is where the idea of Nyquist Frequency comes from.

        If you add the additional constraint of the signal being "bandpass-limited" where, X(w) = 0 for A > abs(w) > B for some A, B, then yes, you can under sample.

        And that's where the information-theory idea comes in where the amount of information contained in the band only "needs" 2X sampling rate to reconstruct perfectly.

        You can think of aliasing being somewhat orthogonal to that in the sense that you need 2X bandwidth so you don't corrupt the signal, but 2X max frequency so you don't alias anything else into the signal. (I say this realizing that aliasing is what would cause the former signal corruption, hence "somewhat")

        • kayson 3 years ago

          Looks like I bundled that second inequality. Band pass is X(w) = 0 for abs(w) < A or abs(w) > B

    • diydsp 3 years ago

      Actually, GP is correct. See Bandpass Sampling: https://en.wikipedia.org/wiki/Undersampling.

      "In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal.

      When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion."

      • kayson 3 years ago

        Yes, but this only works if, as the page points out, the signal is bandpass filtered, which GP did not mention. It's not true in the general sense, nor is it practical for many (most?) RF systems, especially those with multiple channels.

        • squeaky-clean 3 years ago

          > your bandwidth is 100 MHz centered at 1 GHz

          Implies a bandlimited signal centered around 1ghz.

          • kayson 3 years ago

            I can see why you might think that but consider that in RF systems, while the wanted signal is bandlimited, you also have a lot of unwanted "blockers" all over the spectrum that need to be dealt with before sampling.

    • gct 3 years ago

      I'm afraid you're mistaken (source: worked as DSP engineer for 15 years). Often you apply your filter around the RF frequency you want and then sample at a lower rate. You're right that the signal will get aliased doing that, but the information is always preserved.

      If you sample s.t. your folding frequencies are in an appropriate place, you can fold your desired region into the first nyquist region without needing to mix it down. This is especially desirable if you can avoid having to build an IQ mixer because they're hard to keep balanced.

      The worst case doing this is that your signal spectrum is reversed in frequency, but you can correct that easily digitally.

      • kayson 3 years ago

        I'm afraid I'm not mistaken (source: I design integrated RF transceivers) ;)

        Yes, you can subsample if you have a suitably bandpass-limited signal. But that's not the general case, nor is it what the nyquist-shannon theorem proves, which is where "nyquist frequency" comes from.

        Nyquist frequency by the original definition is 2X highest frequency, though some papers and textbooks have evidently started using it to mean 2X bandwidth, enough so that wikipedia[1] actually mentions it.

        In integrated circuits, IQ mixing isn't problematic as we can fairly easily do gain and phase calibration to correct for the mismatch.

        [1] https://en.m.wikipedia.org/wiki/Nyquist_frequency#Other_mean...

        • gct 3 years ago

          You have to have a band limited signal to sample anyway; where it sits in the spectrum doesn't matter. The first thing you'll do before feeding anything to an ADC is run it through a filter to make _sure_ it's band limited. Whether that filter's at DC or some RF doesn't matter.

          Here's the result from his original paper where he specifically says that it doesn't have to be at DC:

          https://imgur.com/uSywML7

          • kayson 3 years ago

            My point is that practically speaking, it does matter where the signal is, depending on how you filter it. If you lowpass filter an RF- (or, more realistically, IF-) centered signal, you can't just sample it at 2X bandwidth because you'll get aliases from the unwanted content between DC and the bottom frequency edge of the signal.

            It may not be a common scenario anymore, but it was very common in the early GSM days when the signal wasn't mixed to DC but near-DC.

            • gct 3 years ago

              Ah yes you're right that you have to be careful, it'll fold at multiples of the nyquist frequency and you want to make sure your SOI is entirely contained in one of those zones.

        • detaro 3 years ago

          (use \* to escape an * and prevent it being parsed as an italics marker.)

  • YakBizzarro 3 years ago

    That's true, but there are a couple more things. First, your DAC or ADC needs to have that much analog bandwidth. Working in a higher Nyquist zone also requires more amplification, since the signal will be considerably weaker, and more complex filtering to remove the signal from the other zones.

  • IIAOPSW 3 years ago

    I'm mentally filling in the gaps here and assuming MSPS is MegaSamplesPerSecond?

  • wittenbunk 3 years ago

    Only true for continuous RF sources.

    For transient signals you need at least the Nyquist rate.

    • azalemeth 3 years ago

      Or use the traditional "lock-in" amplifier technique of mixing with a known reference at the frequency mid-point of the range you care about? (That's how NMR spectrometers / MRI scanners worked for decades.)

      • muffles 3 years ago

        Isn't the lock-in amplifier technique used to improve the SNR of a signal by filtering out noise at frequencies outside a specific range of interest? High-speed sampling would still be required to accurately measure transient signals.

    • mhh__ 3 years ago

      In that sense isn't the bandwidth 0-Max anyway though?

  • Chinjut 3 years ago

    Consider a signal whose value at x seconds is f(2x) - 2 f(3x) + f(4x), where f(x) = sin(2πx)/x. Considering that the absolute frequencies of f(x) are uniformly distributed from 0 to 1 Hz, the absolute frequencies of this total signal should be constrained to between 2 and 4 Hz. Thus, a bandwidth of 2 Hz. But if we sample at 6 Hz (three times the bandwidth!) including x = 0, we'll get all zeros.

    Granted, we might say that from the perspective of the complex Fourier transform using signed frequencies, the frequencies of this signal actually range over [-4 Hz, -2 Hz] U [+2 Hz, +4 Hz]. But I'm not sure that's the interpretation you had in mind.

    Let me know if I've screwed anything up here!

    • Chinjut 3 years ago

      That is, it's not quite as simple as saying you just need to sample at any frequency at least twice the bandwidth. Rather, it's the more complicated behavior described by this graph: https://en.wikipedia.org/wiki/Undersampling#/media/File:Samp.... That is, the general rule is that the ratio of the highest frequency in the signal to half the sample rate, and the ratio of the lowest frequency in the signal to half the sample rate, have to lie within the same interval between consecutive natural numbers.

      When the lowest frequency is zero, this is the familiar rule that the sample rate has to be at least twice the highest frequency in the signal. But more generally, it's more complicated.
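
      That zone condition fits in a few lines (hypothetical helper; an ideally band-limited signal is assumed):

```python
import math

def bandpass_sampling_ok(fs, f_lo, f_hi):
    """True if the band [f_lo, f_hi] fits inside a single Nyquist zone
    [(n-1)*fs/2, n*fs/2] for some integer n >= 1 (hypothetical helper)."""
    n = math.floor(f_lo / (fs / 2)) + 1  # the zone that contains f_lo
    return f_hi <= n * fs / 2            # ...must also contain f_hi

# A band from 3 Hz to 5 Hz (bandwidth 2 Hz):
print(bandpass_sampling_ok(4.0, 3, 5))   # False: twice the bandwidth, still aliases
print(bandpass_sampling_ok(8.0, 3, 5))   # False: the band straddles a zone edge at 4 Hz
print(bandpass_sampling_ok(5.5, 3, 5))   # True: band sits inside the zone [2.75, 5.5]
print(bandpass_sampling_ok(10.0, 3, 5))  # True: the classic 2x-highest-frequency rate
```

      For this band the valid rates come out to roughly [5, 6] Hz plus everything from 10 Hz up, so plenty of rates above twice the bandwidth still alias.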

    • Chinjut 3 years ago

      Whoops, I should've pulled the division by x out of the definition of f. The example I had in mind was [sin(4πx) - 2 sin(6πx) + sin(8πx)]/x. [Another good example is [sin(6πx) - 2 sin(8πx) + sin(10πx)]/x, whose frequencies are between 3 Hz and 5 Hz, thus a bandwidth of 2 Hz, but sampling at 4 Hz or even 8Hz gets all zeroes.]

      Anyway, the details of that example don't matter; the Wikipedia graph and article make things clearer.
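
      The corrected first example checks out numerically: the signal below is band-limited to [2 Hz, 4 Hz] (bandwidth 2 Hz), yet sampling at 6 Hz, three times the bandwidth, yields identically zero:

```python
import numpy as np

# g(x) = [sin(4*pi*x) - 2*sin(6*pi*x) + sin(8*pi*x)] / x, with g(0) = 0 (the limit)
def g(x):
    x = np.asarray(x, dtype=float)
    num = np.sin(4*np.pi*x) - 2*np.sin(6*np.pi*x) + np.sin(8*np.pi*x)
    return np.where(x == 0, 0.0, num / np.where(x == 0, 1.0, x))

samples = g(np.arange(-30, 30) / 6)  # a 6 Hz sampling grid through x = 0
print(np.max(np.abs(samples)))  # ~0: every sample vanishes despite fs = 3x bandwidth
```
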

  • mikepavone 3 years ago

    Is this assuming you have some analog hardware that's demodulating the signal in front of your ADC? How do you demodulate a signal from a 1GHz carrier with 200 MSPS?

    • labcomputer 3 years ago

      As the sibling comment mentioned, you don’t need to demodulate first, because that is actually what the sampling process of your ADC does.

      You can think of it as multiplying the original signal by a comb of delta functions (in the time domain), which folds everything (in the frequency domain) back below the Nyquist frequency of your ADC. Each delta function corresponds to one sample. If your original signal was truly band-limited to 100MHz, then what comes out is a replica of the band-limited signal.

      One catch (which is actually fairly easy to achieve in practice) is that the sampling window needs to be on the order of 1/f of the carrier frequency. This is what YakBizzarro is talking about (ADC analog bandwidth) in their sibling post.

      • mikepavone 3 years ago

        Thanks for the explanation! Between your comment and the Undersampling wiki page diydsp linked to I think I am on the path to enlightenment.

        > If your original signal was truly band-limited to 100MHz

        In practice, this means you need to band pass before the ADC, right? i.e. "signal" in this case is the entire input to the ADC and not just the particular modulated signal you care about

        • labcomputer 3 years ago

          > In practice, this means you need to band pass before the ADC, right? i.e. "signal" in this case is the entire input to the ADC and not just the particular modulated signal you care about

          Right and right.

          And, you’d normally want that to be a contiguous 100 MHz band of frequencies (you could in principle have multiple discontiguous bands that add up to 100 MHz if they are spaced right (they don’t fold down to the same base frequencies), but that would be quite an unusual application).

    • cushychicken 3 years ago

      To quote a meme: “That’s the neat part. You don’t.” If you bandlimit your input, aliasing effectively strips out the carrier tone and leaves the modulated signal.

      In a way, you’re relying on aliasing / frequency folding to do it for you.

      https://ars.els-cdn.com/content/image/3-s2.0-B97801241589310...

      You can even improve information transfer in these scenarios by using a synchronizer, which allows you to phase shift your sampling to be at the ideal transition point in your information stream.

    • klodolph 3 years ago

      No, this assumption is incorrect. You can ADC first and then demodulate afterwards. The spectrum of your high-frequency (near 1 GHz) signal will be aliased at frequencies below the Nyquist frequency, but it’s easy to calculate the original frequency, if you know that the signal is band-limited.

  • paulsutter 3 years ago

    Thank you, I came here to post exactly this. Suggestion: you might want to correct the wikipedia page

    • stagger87 3 years ago

      You do not want to "correct" the wiki because the wiki is not wrong. The person you are replying to is clearly thinking about some sort of RF system (given the frequencies mentioned) where it's important to have a baseband filter to eliminate aliasing, and that filter will have some sort of roll off region, resulting in a higher sample rate than available bandwidth. That's all great, but the Nyquist theorem isn't talking about an RF system. It's referring to sampling. When the wiki uses the word "bandwidth", they mean the frequencies that don't alias given a specific sample rate.

    • eternauta3k 3 years ago

      Is the wikipedia page really wrong though? Highest frequency is what the mathematicians care about. EEs care about bandwidth because they're always modulating stuff and thinking in terms of carrier and baseband. Strictly speaking, what the EE grandparent suggested is using aliasing to mix the signal down to baseband.

  • gaze 3 years ago

    Yeah, but you also need the bandwidth of the sampler to exceed the highest frequency of the signal. Most samplers are limited by some kind of RC time constant and not their sinc envelope. Most.

polalavik 3 years ago

If you're interested in learning more about various DSP topics, I run a blog over at https://signalprocessingjobs.com/ - a signal processing job board and blog!

One of the more popular series is the Journal2Matlab blog, about translating academic journal papers into easy-to-read Matlab.

lumb63 3 years ago

Signals and systems was a tough course for me. It was what crushed my 4.0 GPA. Nyquist frequency was a concept I could not wrap my head around. I’ve improved, but it still doesn’t click as I’d like it to.

When I took the course, it made no sense to me that you could sample at twice the frequency of the signal and reconstruct it. Consider a sine wave at 1 Hz. If you sample at 2 Hz, you'd get readings of 0, 1, 0, -1, etc. If you graph that, it's a perfect triangle wave, not a sine wave! That's what I couldn't get past. I thought you'd need an infinite sampling rate to accurately capture the sine wave.

As I type this out, I’m realizing that a critical component of this that I wasn’t taught (or I didn’t grasp) is the need for the signal to be bandlimited. Returning to my sine example from above, what bothered me was, if I don’t sample more points, how do I know that it’s only a sine wave, and nothing more? That only works if you pretend there are no higher frequencies (or filter them out, though an ideal filter is impossible in practice). If there aren’t higher frequencies, there can’t be anything you “can’t capture” by sampling at the Nyquist frequency.

  • tomjakubowski 3 years ago

    A triangle wave at 1Hz would have many higher frequency components. If you know a priori that the highest frequency of the signal is 1Hz, sampling at 2Hz is enough to infer 0, 1, 0, -1, ... came from a sine wave.

  • jancsika 3 years ago

    I've had an open GSoC project for some years to create a library that makes a handful of these audio misconceptions true. So the student would design an oscillator or oscillator bank where the closer you get to Nyquist, the more some "bad thing" happens to the corresponding output. Morphing into a triangle would be one way to do it.

  • Sesse__ 3 years ago

    What you are saying is generally correct, but: If you sample a 1 Hz sine at 2 Hz, you wouldn't get readings of 0, 1, 0, -1, etc.; you would get readings of 1, -1, 1, -1, etc., or if you're very unlucky, 0, 0, 0, 0, …! The _exact_ case of Fs/2 is, well, an edge case.

  • rnpk 3 years ago

    You get the original sine wave back from 0, 1, 0, -1 not by plotting it linearly (which gives you the triangle) but by using a sinc interpolation function.
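
    A sketch of that (note, per the sibling comment, that 0, 1, 0, -1 actually comes from sampling the 1 Hz sine at 4 Hz; exactly 2 Hz is the edge case):

```python
import numpy as np

fs = 4.0  # 0, 1, 0, -1, ... are the samples of a 1 Hz sine taken at 4 Hz
n = np.arange(-400, 400)
samples = np.sin(2 * np.pi * n / fs)  # ..., 0, 1, 0, -1, ...

def sinc_interp(t):
    # Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.123  # an instant far from any sample point
print(sinc_interp(t), np.sin(2 * np.pi * t))  # both ~0.698: the sine, not a triangle
```

    The sum is truncated here, so it only approximates the ideal (infinite) reconstruction, but the recovered value is already within about a thousandth of the true sine.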

gooseyard 3 years ago

Dan Worrall made a fantastic video which touches on Nyquist. His youtube channel is a tremendous resource: https://www.youtube.com/watch?v=-jCwIsT0X8M

coolandsmartrr 3 years ago

I saw the Nyquist frequency mentioned in American Cinematographer Magazine. The article illustrates how detailed patterns, like sweaters, can produce a fuzzy, jagged artifact called moiré. This happens because there is too much information for the camera's sensor to interpret and summarize into pixels (i.e. it surpasses the Nyquist frequency).

Their suggested solutions were to 1) get a wide-angle lens to reduce the detail reaching the sensor, 2) use a larger image sensor, or 3) remove the object causing the moiré artifacts.

  • regularfry 3 years ago

    Yep. Strictly speaking, what's happening is that the pattern has a higher spatial frequency than the sensor can resolve, and the light detection acts as a non-linear interaction which aliases the higher frequencies down into the bandwidth of the sensor.

    A wide-angle lens would change the effective bandwidth of the system, as would a larger sensor: all either would do is change the apparent size of the moiré pattern (possibly so it's less annoying).

    What you really want is something that acts as a spatial low-pass filter in front of the sensor; something like a very slightly frosted piece of glass which would prevent any feature smaller than two sensor pixels from being resolved on the far side. I imagine that if it isn't a completely stupid idea for some other reason, you can buy them.

  • kimburgess 3 years ago

      When a grid’s misaligned
      with another behind
      That’s a moiré…
    
      When the spacing is tight
      And the difference is slight
      That’s a moiré

monkeycantype 3 years ago

The coolest Nyquist frequency application I've ever come across: if you look up how modulation of nerve impulses works in the optic nerve, you can figure out the fastest rate of blinking your eye can perceive, and it checks out in reality.

xchip 3 years ago

Beware, there are lots of misconceptions in the comments.

abhaynayar 3 years ago

Soothing.

elromulous 3 years ago

To add another misconception: the Nyquist rate is a lower bound on the sampling rate, below which you necessarily get aliasing. It doesn't say anything about whether said sampling rate is sufficient for reconstruction or whatever your intended use is.

E.g. sampling a 1hz signal at 2hz still doesn't tell you if the signal was a 1hz sin or a 1hz sawtooth (depending on how lucky or unlucky you are).

  • ska 3 years ago

    That isn't really what is going on. If the signal doesn't contain any higher frequency information, the Nyquist limit establishes what you need to exactly reconstruct the signal. It is therefore sufficient for any use.

    So in your case, a 1hz sin doesn't contain any higher frequencies, and will be reconstructed perfectly. A 1hz sawtooth contains higher frequencies, and so will not be.

    I think what you are really getting into is that a signal with periodicity of, say 1hz, does not mean that the Nyquist limit is 1hz. Square waves and sawtooths are particularly obvious examples of this, because the sharp edges cannot be achieved without (very many) high frequency contributions.

    Now you can avoid this by creating a different set of component functions and a different sense of "frequency", but that just pushes the problem around. Also, since you are doing non-standard things, you need to explain them, especially if what you are using doesn't form a proper basis.

    Finally, of course, this is all in the ideal mathematical setting; in the real world, noise etc. also has to be taken into account.

  • Evidlo 3 years ago

    A 1Hz sawtooth contains frequencies above 1Hz.

    It actually has frequency components that go out to infinity, so it's impossible to perfectly reconstruct a sawtooth from samples without knowing beforehand that it's a sawtooth.

    This is true for any signal with discontinuities (i.e. any signal that is not "band-limited").
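
    The 1/k harmonic roll-off is easy to see numerically (one period of a hypothetical 1 Hz sawtooth, sampled at 1 kHz):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
saw = 2 * t - 1  # one period of a 1 Hz sawtooth, sampled at 1 kHz

spec = np.abs(np.fft.rfft(saw)) / len(saw)
# The k-th harmonic's magnitude falls off like 1/k -- smaller and smaller, never zero:
print(round(spec[1] / spec[2], 2), round(spec[1] / spec[5], 2))  # 2.0 5.0
```
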

    • PaulDavisThe1st 3 years ago

      This is incorrect, though subtly, and for several different reasons:

      1) It is completely possible to create a sawtooth wave that contains only a single frequency. However, you could also consider the wave to be an (infinite) sum of sinusoids at different frequencies. Both views are "correct", and which is more appropriate depends on the context.

      2) Related to (1): natural (acoustic) sounds are almost always best considered as a sine series. While there are such sounds which are most easily described as a sawtooth, when you consider the physical/mechanical process by which they are formed, the sine series is a more obvious approach.

      3) A digital 1Hz sinusoid can trivially contain no harmonics at all. However, the moment you attempt to convert this into an acoustic pressure wave, the nature of the physical world essentially guarantees that the acoustic pressure wave will have a series of harmonics going out far beyond the base frequency. Once you start actually moving things (like magnetic coils, speaker cones and air), it's more or less impossible to avoid generating harmonics. But since the original signal was genuinely a pure sine tone, it becomes a little tricky to decide what the correct way to describe this is.

      • stagger87 3 years ago

        At the "textbook"/"theory" level, the person you are replying to is not wrong. A sawtooth waveform has infinite harmonics. If you were going to be nitpicky (and your response was in that spirit), the best thing to have said (IMO) was that the high-frequency harmonics drop off and fall below any sort of "noise floor" or sensitivity of the system, so they don't matter anyway. Instead you wrote a bunch of stuff about sounds and pressure waves that I don't think had the effect you intended. I think you lost the plot somewhere along the way.

        • PaulDavisThe1st 3 years ago

          > A sawtooth waveform has infinite harmonics

          This is only true if you consider the waveform to be a sine series. As I indicated, this is a perfectly legitimate way to think about a sawtooth (and indeed, it appears to be fundamentally how the human ear works too).

          But a sawtooth waveform is also nothing more than a very sharp rise/drop in air pressure followed by a longer drop/rise, repeated over and over again.

          If you want to synthesize a sawtooth waveform with analog equipment, then thinking of it as an (infinite) sine series makes sense, because that's how you will end up approximating the (perfect) sawtooth.

          However, digital synthesis does not require this sort of conception at all, and can be constructed without any summing of a harmonic series.

          Also, I find it amusing that in the comments of a post about Nyquist, you would write

          > a bunch of stuff about sounds and pressure waves that I don't think had the effect you intended. I think you lost the plot somewhere along the way.

          What do you think the plot is?

          • jameshart 3 years ago

            You can’t physically construct a speaker that makes a sawtooth wave. Its cone would need to change velocities from -n to +n or vice-versa instantaneously in order to generate the ‘teeth’ of your wave. The air particles you are moving would likewise need to instantly accelerate. That is a physical impossibility - these things have mass, accelerating them requires force, infinite acceleration requires infinite force.

            Those physical constraints manifest as limits on the frequency of sinusoidal harmonics it is possible for you to put into the wave; for the medium to carry; and for you to physically detect at the other end.

            Mathematicians don’t break functions down into sinusoidal harmonics because they like trig functions. They do it because they fundamentally are what’s happening.

            • PaulDavisThe1st 3 years ago

              > You can’t physically construct a speaker that makes a sawtooth wave.

              This was my point (3), though you've added an additional set of reasons why it is particularly hard for shapes like a sawtooth.

          • Gordonjcp 3 years ago

            > However, digital synthesis does not require this sort of conception at all, and can be constructed without any summing of a harmonic series.

            Yes, but you also cannot just make something that goes from -1 to 1 and then wraps back to -1 again, in a discrete-time (sampled) world.

            You will get aliasing, because at some point your harmonic series will have partials that are noticeably large and exceed the Nyquist frequency, which will fold back into the output signal's spectrum. And, wouldn't you know it, except for a few very precise frequencies, those aliases will be inharmonic as all hell.

            Here's an example (headphone warning - excessively loud) from a daft idea I had to implement a "virtual analogue" synth on an Arduino. Yes, one of the 8-bit ones, that can't do arithmetic.

            https://raw.githubusercontent.com/ErroneousBosh/slttblep/mas...

            The first sweep is generated with bandlimiting disabled, and you can hear the "swoopy" noises as the aliases slide up and down. The second sweep has some bandlimiting applied by "bending" the points where the signal resets to roughly correspond to a weighted sinc filter, eliminating (most of) the partials above Nyquist.

            It uses 16-bit arithmetic on 8-bit lookup tables, and is output through an 8-bit PWM abused as a DAC, so it's not super clean, but it is at least not grossly incorrect.

            You cannot filter the synthesized partials that go past Nyquist out after the signal has been generated, because the damage has been done.
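The fold-back is easy to reproduce numerically. A sketch (Python/numpy, illustrative) that samples a 20 kHz tone at 24 kHz with no anti-aliasing filter, i.e. the exact scenario from the story at the top of the thread:

```python
import numpy as np

fs = 24_000       # sampling rate: Nyquist is 12 kHz
f_in = 20_000     # tone above Nyquist, like the ultrasonic fire-safety tone
n = fs            # one second of samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_in * t)   # sampled with no anti-aliasing filter

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
# the 20 kHz tone folds around Nyquist and lands at fs - f_in = 4 kHz
```

The spectral peak comes out at 4 kHz, and no filter applied after sampling can tell that energy apart from a genuine 4 kHz tone.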

          • squeaky-clean 3 years ago

            > However, digital synthesis does not require this sort of conception at all, and can be constructed without any summing of a harmonic series.

            A naive sawtooth algorithm (linear rise from -1.0 to 1.0) will create aliasing and not be a true saw. You cannot filter out this aliasing unless you do this with extreme oversampling. Otherwise you need to synthesize the waveform in an alias-free (or alias minimizing) method.

            There's quite a few ways to digitally synthesize a sawtooth, all with some compromise, but they're all based on sine summation theory.

            One of the more common ways is to precalculate a table of single-cycle bandlimited waveforms for every 1/3 octave or so, and choose the nearest table index for a given note-frequency being played, and interpolate as needed. (It's essentially mipmapping).
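That wavetable ("mipmapping") scheme can be sketched as follows (Python/numpy; the table spacing, lowest note, and all names here are illustrative, not any particular synth's implementation):

```python
import numpy as np

def make_saw_tables(fs=48_000, table_len=2048, lowest=20.0):
    """One bandlimited single-cycle sawtooth table per octave: each table
    sums sine harmonics only up to Nyquist for its frequency range."""
    tables = {}
    phase = np.arange(table_len) / table_len
    f0 = lowest
    while f0 < fs / 2:
        n_harm = max(1, int((fs / 2) / f0))  # highest harmonic below Nyquist
        k = np.arange(1, n_harm + 1)
        coeffs = (-1.0) ** (k + 1) / k
        tables[f0] = (2 / np.pi) * (
            coeffs[:, None] * np.sin(2 * np.pi * np.outer(k, phase))
        ).sum(axis=0)
        f0 *= 2.0          # one table per octave; finer spacing reduces error
    return tables

tables = make_saw_tables()
# at playback time: pick the table for the note's frequency and interpolate
```

The 20 Hz table contains 1200 harmonics and looks like a nearly perfect ramp; the top-octave tables contain only a handful and look much rounder, which is exactly the bandlimiting that prevents aliasing.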

      • titzer 3 years ago

        > It is completely possible to create a sawtooth wave

        For a loose definition of "wave". All of the math behind information theory and sampling signals assumes waves are sinusoids. It also happens that waves in nature behave like (damped) sinusoids. It's a completely natural way to model them mathematically when one has no a priori knowledge of the source, which is what the comment above you is pointing out.

        To recognize and then reconstruct a sawtooth with no a priori knowledge, you need to sample much higher than the frequency of the sawtooth. You can compress said information quite well if you have not only wavelets, but sawtooths in your encoding. I am no audio expert but I don't think codecs exploit sawtooths (sawteeth?) for compression because they sound unnatural (because they are).

        Note that even digitally you can't create a perfect sawtooth wave because there is a fundamental quantization of time in digital systems. It's a question of, again, how fast you can alter voltages, i.e. a frequency, so you end up generating a step-like function, inescapably. Yeah, sure, you can switch digital systems at MHz or GHz, but still.

        • ska 3 years ago

          > All of the math behind information theory and sampling signals assumes waves are sinusoids

          This isn't really true. The point about the sinusoids is mostly that they form a very convenient complete basis of a useful space of functions, hence the Fourier expansion. This doesn't amount to an assumption about how the signals are generated, rather how they are represented. You could pick a different basis and you'd get a different representation, but as functions they are identical. By definition this applies equally to any signal in the class, however you generate it.

          Where the shape of the underlying basis vectors does show up is in errors and estimation, e.g. the similar estimation error in fourier vs. Haar will show up as sinusoids or steps.

  • Sesse__ 3 years ago

    > To add another misconception, the Nyquist frequency is a lower bound, below which you necessarily get aliasing. It doesn't say anything about whether said sampling rate is sufficient for reconstruction or whatever your intended use is.

    Yes, it does. The Nyquist criterion gives exactly the (minimum) sampling frequency you need for perfect reconstruction of a bandlimited signal.

    > E.g. sampling a 1hz signal at 2hz still doesn't tell you if the signal was a 1hz sin or a 1hz sawtooth (depending on how lucky or unlucky you are).

    A 1 Hz sawtooth is not a bandlimited signal, so the Nyquist theorem does not apply.

  • duped 3 years ago

    > It doesn't say anything about whether said sampling rate is sufficient for reconstruction or whatever your intended use is.

    Formally, the Shannon-Nyquist theorem states that if you sample a band limited signal at twice its bandwidth, an ideal reconstruction filter can be used to perfectly reconstruct the input signal. There's some wiggle room over ideal sampling/filtering, but the point is that it tells you exactly what the input was, provided it was band limited.

    The misconception I think you're having is conflating the bandwidth of a signal with its period; they are not the same thing.

  • femto 3 years ago

    If you know the signal is periodic with known frequency/period, you can be clever and sample it at that frequency +/- a small offset. The frequency spurs then will not fall on top of each other and you can "unwrap" them to give a more complete picture of the signal. In that way you could determine whether a signal with a known frequency of 1Hz is a sawtooth or sine.

    Nyquist more or less says "If you know nothing about the signal, by sampling at X Hz, you can determine what the signal looks like over a bandwidth of 0 Hz to X/2 Hz". If you have additional knowledge about the signal (eg. band limited, periodic or other) you can exceed those limits.

    It can also be looked at from an information viewpoint. Nyquist says "if you sample a signal at a certain rate you will get a certain amount of new information about it". You might "spend" this information by saying something about the signal over the band DC-f/2, or you might choose to say something about the signal over a different band of frequencies. In the example above we chose to say something about a set of discrete harmonic frequencies over a very wide bandwidth, ignoring the frequencies in between the harmonics as the 1Hz constraint told us they will be zero.
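This offset-sampling trick is essentially equivalent-time sampling, as used in sampling oscilloscopes. A sketch of the idea (Python/numpy, illustrative): sample a known-1 Hz signal slightly above 1 Hz, so each sample slips a little in phase, then sort the samples by signal phase to lay out one full cycle of the waveform shape:

```python
import numpy as np

f_sig = 1.0            # signal frequency is known a priori
fs = 1.01              # sample slightly faster than the signal frequency
n = 101                # 101 samples sweep through a full cycle of phase

t = np.arange(n) / fs
samples = 2 * ((f_sig * t) % 1.0) - 1.0   # the "unknown" waveform: a rising sawtooth

# because f_sig is known, the phase of every sample is also known;
# sorting the samples by that phase reconstructs one cycle of the shape
phase = (f_sig * t) % 1.0
one_cycle = samples[np.argsort(phase)]
# one_cycle is a monotonic ramp from ~-1 to ~1: clearly a sawtooth, not a sine
```

Even though fs is far below the rate a blind Nyquist argument would demand, the prior knowledge of the signal's period lets the shape be recovered.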

  • GeompMankle 3 years ago

    What is the point of adding a misconception?

    The conception of the theorem is that if the signal being sampled is sufficiently integrable AND bandlimited AND the signal is uniformly sampled at at least the Nyquist rate over all time/space THEN reconstruction of the bandlimited signal is exactly possible using the sinc interpolator. The proof is covered in "Shannon's original proof" in the Wikipedia article and most books on signal analysis such as Gaskill's Linear System book. Most EE people will have to do the proof as an intro course assignment in the first month of a DSP class.

    OTOH, if you are not able to sample the function over all space or time AND the function happens to be periodic outside the interval you did sample THEN reconstruction of the bandlimited periodic signal is possible using the Dirichlet kernel.

    If you are not able to sample the function over all space (from the first case) AND that function is not periodic, you have small problems which occasionally become big problems if you are not careful. Most DSP books have a chapter about windowing discrete data and dealing with this conundrum. Basically, exact reconstruction is not guaranteed and context-specific techniques need to be employed to ensure desirable fidelity.
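The sinc interpolator mentioned above can be sketched directly (Python/numpy; a finite sample window stands in for "all time", so the reconstruction is only approximate):

```python
import numpy as np

fs = 8.0                           # sampling rate, Nyquist = 4 Hz
f0 = 1.0                           # bandlimited test signal, well below Nyquist
n_side = 400                       # finite window approximating "all time"
sample_times = np.arange(-n_side, n_side + 1) / fs
samples = np.sin(2 * np.pi * f0 * sample_times)

def sinc_reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*(t - n/fs)).
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x)."""
    return float(np.sum(samples * np.sinc(fs * (t - sample_times))))

t_off_grid = 0.123                 # an instant that was never sampled
approx = sinc_reconstruct(t_off_grid)
exact = np.sin(2 * np.pi * f0 * t_off_grid)
```

With 801 samples the off-grid value is recovered to within about a millivolt-scale error; the residual is exactly the windowing/truncation problem the comment above describes.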

  • kardos 3 years ago

    A 1hz sawtooth would not be band limited below 2hz

  • TimTheTinker 3 years ago

    A low pass filter at 2hz would filter out the high frequencies contained in a sawtooth waveform, thus rendering a 1hz sine waveform.

    To accurately sample a 1hz sawtooth waveform, you'd have to filter/sample at a much higher frequency.
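A quick numeric check of this (Python/numpy, illustrative): brick-wall low-pass a 1 Hz sawtooth below 2 Hz and what remains is the 1 Hz fundamental, a sine of amplitude 2/pi:

```python
import numpy as np

fs = 1000                        # sample fast enough to represent the sawtooth well
t = np.arange(fs) / fs           # one second of samples
saw = 2 * (t % 1.0) - 1.0        # 1 Hz rising sawtooth

spec = np.fft.rfft(saw)
freqs = np.fft.rfftfreq(len(saw), d=1 / fs)
spec[freqs >= 2.0] = 0.0         # idealised brick-wall filter: keep only DC and 1 Hz
filtered = np.fft.irfft(spec, n=len(saw))

peak = np.max(np.abs(filtered))  # ~2/pi, the sawtooth's fundamental amplitude
```

All of the "sawtooth-ness" lived in the harmonics at 2 Hz and above; once they are filtered out, only the sine survives.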

  • Gordonjcp 3 years ago

    Sampling a 1Hz sawtooth at 2Hz will alias.
