Measuring the performance of audio devices and systems
When sound is represented by an electrical signal, it can be changed in many ways. Audio devices (such as mixing consoles and tape recorders) and entire audio systems can make useful changes to the signal, but they can also introduce undesirable changes such as distortion and noise. Audio engineers judge the performance of audio devices and systems by measuring three major groups of parameters: linear distortions, nonlinear distortions and noise. This article discusses these three groups of parameters and the methods for measuring them. A future article will discuss dynamic range with reference to analog and digital systems.
Figure 1. Equipment setup for measuring frequency response.
Changes to electrical-signal waveforms that are independent of signal amplitudes are called linear distortions. (It is assumed here that the amplitude of the electrical signal does not exceed the clipping level of the device under test.) There are two major types of linear distortion encountered in practice: non-uniform frequency response and non-uniform phase response. Non-uniform frequency response is measured via the amplitude vs. frequency response test. Non-uniform phase response is measured via the phase vs. frequency response test. These tests are described below. Figure 1 shows the typical equipment setup for such frequency-response measurements.
Amplitude vs. frequency response test: Amplitude vs. frequency response is defined as the peak-to-peak variation, over a specified frequency range, of the measured amplitude of an audio signal, expressed in dB with reference to the signal level at a specified frequency. The reference frequency is usually 1 kHz. The input port of the device under test (DUT) is fed a 1 kHz signal at the standard operating level (SOL). This would be +8 dBu or +4 dBu for high-level inputs and, typically, -60 dBu or -70 dBu for microphone inputs. The gain(s) of the DUT are adjusted to obtain SOL (+8 dBu or +4 dBu) at the output. The audio analyzer is calibrated to read 0 dB at the reference frequency. The input signal frequency is then varied in discrete steps, or continuously, and readings in dB, with reference to 0 dB, are taken at specific frequencies. The measured frequency range is usually 20 Hz to 20 kHz.
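At each test frequency, the level relative to the 1 kHz reference follows from the ratio of the measured output voltages. A minimal sketch of that calculation (the voltage readings shown are illustrative, not taken from any actual measurement):

```python
import math

def level_db(v_measured, v_reference):
    """Level of a measured RMS voltage in dB relative to the 1 kHz reference."""
    return 20.0 * math.log10(v_measured / v_reference)

# Illustrative readings: output RMS volts at two test frequencies,
# with 1.950 V (about +8 dBu) taken as the 1 kHz reference reading.
v_ref = 1.950
print(round(level_db(1.950, v_ref), 2))  # 0.0 dB at the reference frequency
print(round(level_db(1.380, v_ref), 2))  # about -3.0 dB (half power)
```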
Phase vs. frequency response test: Phase vs. frequency response is defined as the variation in phase shift occurring in a system at several frequencies within a given band. The input of the DUT is fed a signal of variable frequency, and a calibrated phase meter is connected at the DUT's output. Phase is then plotted against frequency over the band of interest.
Figure 2. Total harmonic distortion test spectra and equipment setup.
Nonlinear distortions of an electrical signal are caused by deviations in the linear relationship between the input and the output of a given audio component or system. There are two types of nonlinear distortions encountered in practice: harmonic distortion and intermodulation distortion.
Harmonic distortion: Harmonic distortion occurs when a system, whose input is fed with a pure sine-wave signal of frequency f, produces at its output a signal of frequency f as well as a set of signals whose frequencies (2f, 3f, … nf) are harmonically related to the input frequency. The distortion factor of a signal is the ratio of the RMS voltage of all harmonics to the total RMS voltage of the signal (fundamental plus harmonics). The performance of audio amplifying devices is expressed as a percentage of total harmonic distortion (THD) at a specified output level. For professional studio-quality equipment, the output level at which THD is measured is 10 dB above SOL (+18 dBu or +14 dBu). The percentage of THD is the distortion factor multiplied by 100. The mathematical expression for THD is:
THD = [√(E2f² + E3f² + … + Enf²) / √(Ef² + E2f² + E3f² + … + Enf²)] × 100

where:

THD = percentage of total harmonic distortion
Ef = amplitude of the fundamental voltage
E2f = amplitude of the second harmonic voltage
Enf = amplitude of the nth harmonic voltage
The measurement bandwidth is usually limited to the upper limit of human hearing: 20 kHz. Figure 2 shows the typical setup for THD measurements. To measure the THD, the audio analyzer removes the fundamental (first harmonic) component of the distorted signal present at the output of the DUT and measures all the remaining energy, including noise and harmonics. Since noise contributes to the measured results, a more accurate name for this measurement is total harmonic distortion and noise (THD+N). The tests are carried out at several frequencies, such as 50 Hz, 100 Hz, 1 kHz, 5 kHz, 7.5 kHz and 10 kHz. Tests at frequencies above 10 kHz would generate harmonics above 20 kHz, beyond the limit of human hearing, and would therefore be irrelevant.
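The distortion-factor ratio maps directly to code. A minimal sketch, assuming the RMS amplitudes of the fundamental and its harmonics have already been extracted from the analyzer output (for example, via an FFT):

```python
import math

def thd_percent(e_fundamental, harmonics):
    """Total harmonic distortion as a percentage.

    e_fundamental: RMS amplitude of the fundamental (Ef).
    harmonics: RMS amplitudes of the harmonics (E2f, E3f, ... Enf).
    """
    harmonic_power = sum(e * e for e in harmonics)
    total_power = e_fundamental ** 2 + harmonic_power
    return 100.0 * math.sqrt(harmonic_power) / math.sqrt(total_power)

# Illustrative values: a 1 V fundamental with 10 mV of second harmonic
# and 5 mV of third harmonic.
print(round(thd_percent(1.0, [0.010, 0.005]), 3))  # about 1.118 (%)
```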
Figure 3. Intermodulation distortion test spectra and equipment setup.
Intermodulation distortion: Figure 3 shows the typical setup for intermodulation distortion (IMD) measurements. IMD occurs when a system whose input is fed with two signals of frequencies f1 and f2 generates at its output, in addition to the signals at the input frequencies, signals having frequencies equal to sums and differences of integer multiples of the input frequencies. The SMPTE IMD test specifies the use of a test signal consisting of two separate frequencies (f1 = 60 Hz and f2 = 7 kHz) with a respective amplitude ratio of 4:1. The intermodulation causes the 60 Hz signal to modulate the 7 kHz “carrier.” This results in the generation of sidebands above and below the 7 kHz carrier, with components at 60 Hz and its harmonics. IMD is computed as:
IMD = (demodulated signal / Ef2) × 100

where:

IMD = percentage of intermodulation distortion
Ef2 = amplitude of the 7 kHz component
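As an illustration, the SMPTE twin-tone stimulus can be synthesized digitally. A minimal sketch, assuming a 48 kHz sample rate (the sample rate and absolute amplitudes are illustrative choices, not specified by the test standard; only the 4:1 ratio is):

```python
import math

FS = 48_000            # sample rate in Hz (illustrative choice)
F1, F2 = 60.0, 7000.0  # SMPTE test frequencies
A1, A2 = 0.8, 0.2      # 4:1 amplitude ratio of the two tones

def smpte_sample(n):
    """One sample of the SMPTE IMD twin-tone test signal."""
    t = n / FS
    return A1 * math.sin(2 * math.pi * F1 * t) + A2 * math.sin(2 * math.pi * F2 * t)

# One second of the stimulus; peak amplitude stays within +/-1.0
# because the two tone amplitudes sum to 1.0.
signal = [smpte_sample(n) for n in range(FS)]
print(len(signal))  # 48000
```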
Audio signals are affected by noise. Noise is best described as an unwanted disturbance superimposed on a useful signal. Noise level is usually expressed in dB relative to a reference value and is commonly quoted as a signal-to-noise ratio (SNR). In professional studio equipment, the reference level for SNR measurements is the maximum operating level (MOL), which is 10 dB above SOL.
Random noise: The main source of random noise is the thermal agitation of electrons. Given R, the resistive component of an impedance Z, the mean square value of the thermal noise voltage is given by:
En² = 4kTBR

where:

En = RMS noise voltage
k = Boltzmann's constant (1.38 × 10⁻²³ joules/kelvin)
T = absolute temperature in kelvins
B = bandwidth in Hz
R = resistance in ohms
T is usually assigned a value such that 1.38T = 400, corresponding to about 17°C. The SNR at the output of a system depends on the noise generated by the resistive component of the signal source (for example, the microphone) and the noise generated by the earliest amplifier stage in the chain. Assuming B = 20 kHz and a microphone with a resistive component R = 150 Ω, En = 0.219 μV. This is the theoretical thermal noise of the microphone input circuit. The microphone preamplifier contributes its own random noise, which considerably reduces the SNR of the system. The situation can be visualized as an ideal noiseless amplifier whose input is fed by a noise generator. This fictitious noise is called the equivalent input noise (EIN) of the amplifier. The difference, in dB, between the equivalent input noise and the calculated theoretical thermal noise level of the audio signal source is called the noise figure of the amplifier.
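The worked figure above can be checked directly from the thermal noise equation. A minimal sketch reproducing the 0.219 μV result:

```python
import math

K_BOLTZMANN = 1.38e-23  # Boltzmann's constant, joules per kelvin
T = 290.0               # absolute temperature, about 17 degrees C
B = 20_000.0            # measurement bandwidth in Hz
R = 150.0               # resistive component of the microphone impedance, ohms

# En^2 = 4kTBR, so En is the square root of the right-hand side.
en = math.sqrt(4 * K_BOLTZMANN * T * B * R)
print(round(en * 1e6, 3))  # noise voltage in microvolts: 0.219
```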
Figure 4. Two-step equipment setup for measuring signal-to-noise ratio.
Measuring SNR is a rather involved procedure, and the accuracy of the results depends on strict adherence to a set of rules. The following routine test procedure is suitable for measuring the SNR of an audio mixer:
Step 1: Disable all inputs except the one in the measurement path. Disable all compressors and equalizers. Feed a 1 kHz audio signal at the rated input level (e.g. -70 dBu) to the microphone input and adjust input sensitivity, channel gain and master gain for SOL at the output (+8 dBu or +4 dBu).
Step 2: Remove the input signal source and substitute with a low-noise 150 Ω resistor. Measure the noise at the output with the audio analyzer in dBu over a 20 kHz bandwidth. An optional noise-weighting network may be used to simulate the frequency response of the human ear.
The SNR is given by the difference, in dB, between MOL in dBu and the measured noise in dBu. The use of a weighting network will produce SNR values that may differ by 10 dB or more from flat 20 kHz bandwidth measurements.
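The final arithmetic is a simple subtraction of levels. A minimal sketch with illustrative values (a +18 dBu MOL and a hypothetical noise reading of -70 dBu, not measurements from the article):

```python
def snr_db(mol_dbu, noise_dbu):
    """Signal-to-noise ratio: MOL minus the measured noise, both in dBu."""
    return mol_dbu - noise_dbu

def dbu_to_volts(level_dbu):
    """Convert a level in dBu to RMS volts (0 dBu = 0.775 V RMS)."""
    return 0.775 * 10 ** (level_dbu / 20.0)

print(snr_db(18.0, -70.0))          # 88.0 dB
print(round(dbu_to_volts(0.0), 3))  # 0.775 V RMS
```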
Periodic noise: This type of noise is generated outside the device and coupled in some manner into it. Unlike random noise, periodic noise can be completely eliminated by good engineering practice. The main type of periodic noise, commonly called hum, is at 60 Hz and its harmonics. The method for measuring the signal-to-periodic-noise ratio is similar to that for measuring the signal-to-random-noise ratio, except that a spectrum analyzer or oscilloscope may be added to help identify the frequency of the periodic noise.
Crosstalk: Crosstalk is defined as the injection of an unwanted signal from a neighboring circuit via a mutual impedance. An example is the crosstalk that can occur between signal sources in an audio mixer. The measurement of crosstalk is quite involved. It consists of feeding a signal to the unwanted (crosstalking) input and measuring its effect at the wanted path, whose input is loaded with its characteristic source impedance. The two paths have to be adjusted for normal operating conditions. The audio analyzer is connected to the wanted path output, and the input of the crosstalking path is fed with a constant amplitude signal whose frequency is varied in discrete steps or continuously in the bandwidth of interest. The signal-to-crosstalk ratio is expressed in dB with reference to MOL.
Michael Robin, former engineer with the Canadian Broadcasting Corp.'s engineering headquarters, is an independent broadcast consultant located in Montreal, Canada. He is co-author of Digital Television Fundamentals, published by McGraw-Hill.
Send questions and comments to: firstname.lastname@example.org