The time domain is eminently measurable, and indeed it is measured as a direct consequence of frequency response measurements in essentially every piece of measurement software. This is because the default paradigm for measuring headphones is a Fourier transform of an impulse response, which gives us both the magnitude and phase values as a function of frequency.
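To make that concrete, here's a minimal sketch (Python/NumPy) of how both curves fall out of the same data - the impulse response here is a made-up toy, not a real headphone measurement:

```python
import numpy as np

fs = 48000                                    # sample rate in Hz (assumed)
t = np.arange(2048) / fs
# Toy impulse response standing in for a measured one
ir = np.exp(-t * 2000) * np.sin(2 * np.pi * 3000 * t)

spectrum = np.fft.rfft(ir)                    # Fourier transform of the impulse response
freqs = np.fft.rfftfreq(len(ir), d=1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)    # magnitude response
phase_deg = np.degrees(np.unwrap(np.angle(spectrum)))     # (unwrapped) phase response
```

Same FFT, two outputs: the magnitude curve everyone plots, and the phase curve that carries the time-domain information.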
The video metaphor is quite misleading because our eyes are capable of detecting multiple inputs at once, whereas our ears are pressure detectors - there are no “hearing pixels”, just a set of bandpass filters that act after the summed sound pressure in our ears moves the eardrum. That is, there’s only one variable we’re looking at (the displacement of the eardrum at any given time), whereas with our eyes we have intensity across multiple points.
The reason that time domain measurements of headphones, amplifiers, and so on are not discussed is that there simply is no ‘there’ there - headphones and amplifiers can be accurately approximated as minimum phase systems within their intended operating bandwidth and level, and the only cases where this isn’t true of DACs are when it’s intentional (the use of linear phase reconstruction filters, for example). This being the case, we can infer the time domain behavior directly from the frequency response behavior.
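For the curious, this is the standard cepstral (homomorphic) trick for recovering the minimum-phase impulse response from the magnitude response alone - a minimal sketch in Python/NumPy, with a toy magnitude curve standing in for a real measurement:

```python
import numpy as np

n_fft = 4096
fs = 48000
f = np.fft.rfftfreq(n_fft, d=1 / fs)

# Toy magnitude response: flat with a gentle 5 kHz bump (stand-in for a measured curve)
mag = 1 + 0.5 * np.exp(-((f - 5000) / 1500) ** 2)

# Build the full symmetric log-magnitude spectrum and take its real cepstrum
log_mag = np.log(np.concatenate([mag, mag[-2:0:-1]]))
cepstrum = np.fft.ifft(log_mag).real

# Fold the cepstrum: keep c[0], double the causal part, zero the anticausal part
folded = np.zeros_like(cepstrum)
folded[0] = cepstrum[0]
folded[1:n_fft // 2] = 2 * cepstrum[1:n_fft // 2]
folded[n_fft // 2] = cepstrum[n_fft // 2]

# Exponentiate back: magnitude is unchanged, the phase is now the minimum phase
h_min_spectrum = np.exp(np.fft.fft(folded))
h_min = np.fft.ifft(h_min_spectrum).real      # the implied (minimum-phase) impulse response
```

If the device really is minimum phase, `h_min` is its time-domain behavior - nothing in it that wasn’t already in the magnitude curve.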
“Timing” is also an issue where you really need to look at the source material - a bandwidth-limited system (like a recording microphone, preamp, and ADC) can only produce a “transient” change at a limited speed, which is set by the frequency response of the system. A faster rise time requires, correspondingly, a wider bandwidth (specifically at high frequencies). This is why you see - or saw - people measuring amplifiers with square waves and other “instantaneous” rise time signals. But if you feed those through the lowpass inherent to your ADC, or for that matter the microphone used for the recording itself, you’ll find that your transient is slowed, because those systems have a high-frequency cutoff.
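A quick way to see this for yourself: a minimal sketch (Python/SciPy) of an ideal step fed through a 20 kHz lowpass standing in for the recording chain. The cutoff and filter order are arbitrary illustrative choices, not a model of any particular mic or ADC:

```python
import numpy as np
from scipy import signal

fs = 192000
t = np.arange(4096) / fs
step = (t >= 0.005).astype(float)             # “instantaneous” rise at 5 ms

# 4th-order Butterworth lowpass at 20 kHz, standing in for mic/preamp/ADC bandwidth
sos = signal.butter(4, 20000, btype='low', fs=fs, output='sos')
band_limited = signal.sosfilt(sos, step)

def rise_time(x, t):
    """10%-90% rise time of a step-like signal."""
    lo = np.argmax(x >= 0.1)
    hi = np.argmax(x >= 0.9)
    return t[hi] - t[lo]

print(f"ideal step:        {rise_time(step, t) * 1e6:.1f} us")
print(f"after 20 kHz LPF:  {rise_time(band_limited, t) * 1e6:.1f} us")
```

The “perfect” edge comes out tens of microseconds long, purely because of the bandwidth limit - which is the whole point about transients and frequency response being the same information.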