(ham´ing kōd) (n.) In digital data transmission, a method of error detection and correction in which every string of four data bits is expanded into a string of seven bits. The three added bits are parity bits that the receiving device uses to detect and correct errors.
Hamming code can detect double-bit errors but can correct only single-bit errors. This method of error correction is best suited to channels where errors occur randomly, not in bursts.
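The scheme above can be sketched in code. The following is a minimal illustration of the classic Hamming(7,4) layout, with parity bits at positions 1, 2, and 4 of the codeword; the function names are hypothetical, not from any particular library.

```python
def hamming_encode(data):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword.

    Positions are numbered 1-7; parity bits sit at positions 1, 2, 4
    and data bits at positions 3, 5, 6, 7.
    """
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # checks positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_decode(code):
    """Return (corrected data bits, error position), 0 meaning no error."""
    c = [None] + list(code)          # shift to 1-based indexing
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s4  # binary syndrome names the flipped bit
    if syndrome:
        c[syndrome] ^= 1             # correct the single-bit error
    return [c[3], c[5], c[6], c[7]], syndrome
```

For example, encoding `[1, 0, 1, 1]` yields the codeword `[0, 1, 1, 0, 0, 1, 1]`; if any single bit of that codeword is flipped in transit, the decoder's syndrome identifies its position and restores the original data.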
Richard Hamming, a theorist at Bell Telephone Laboratories, developed the Hamming code method of error correction in the late 1940s and published it in 1950.