
Nyquist’s Law

Webopedia Staff
Last Updated May 24, 2021 7:50 am

Before sound as acoustic energy can be manipulated on a computer, it must first be converted to electrical energy (using a transducer such as a microphone) and then transformed through an analog-to-digital converter into a digital representation. This is all accomplished by sampling the continuous input waveform a certain number of times per second.

The more often a wave is sampled, the more accurate the digital representation. Nyquist’s Law, named after engineer Harry Nyquist, who described the principle in 1928, states that a sound must be sampled at a rate of at least twice its highest analog frequency in order to capture all of the information in the bandwidth and accurately represent the original acoustic energy. Sampling at slightly more than twice the highest frequency compensates for imprecision in the filters and other components used in the conversion.

For example, human hearing ranges from 20Hz to 20,000Hz, so to record sound to a CD, the signal must be sampled at a rate of at least 40,000Hz to reproduce frequencies up to 20,000Hz. The CD standard samples 44,100 times per second, or 44.1 kHz.
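To illustrate why the limit matters, here is a short Python sketch (an illustration added for this definition, not from any standard library for audio) showing what goes wrong when a tone exceeds half the sampling rate: a 30 kHz tone sampled at 44.1 kHz produces exactly the same samples as a 14.1 kHz tone, a phenomenon known as aliasing.

```python
import math

fs = 44_100             # CD sampling rate (Hz)
f_high = 30_000         # tone above the Nyquist limit (fs / 2 = 22,050 Hz)
f_alias = fs - f_high   # 14,100 Hz: the frequency the samples actually describe

# Sample both tones at fs; cosine is used so the aliased copies match exactly.
samples_high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(100)]
samples_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(100)]

# The two sample sequences are indistinguishable: the 30 kHz tone has aliased
# down to 14.1 kHz, so the original signal cannot be recovered.
assert all(math.isclose(a, b, abs_tol=1e-9)
           for a, b in zip(samples_high, samples_alias))
```

Because any frequency above fs / 2 folds back into the audible band like this, converters filter out content above the Nyquist limit before sampling rather than try to undo aliasing afterward.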

Nyquist’s Law is also called Nyquist’s Theorem.