The application of methods derived from INFORMATION THEORY to the detection and correction of errors in DIGITAL data streams. Error correction is of the utmost importance in most areas of computing and communications technology. For example: the Internet's TCP protocol provides error detection, CD-ROMs devote around 14% of their total data capacity to redundant error correction information (and music CDs only a little less), and modem speeds above 28 kilobits per second would be impossible over public telephone lines without error correcting PROTOCOLS such as V.90.
All error detection methods involve adding redundant (i.e. non-data) bits to each data word, and up to a point the more redundancy added, the more errors can be detected and corrected. For example, adding a single redundant bit and calculating the PARITY of a message makes it possible to detect that a single bit has changed, but not to locate it for correction. Using more redundant bits allows multiple bit errors to be both detected and corrected. For example, a REED-MULLER CODE employed by NASA to send image data from interplanetary probes sends 32 bits for each 6-bit PIXEL value, and can detect and correct corruption of up to 7 of those bits. The related REED-SOLOMON CODE provides the redundant bits on CD-ROMs and hard disk drives.
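The single-parity-bit case can be sketched in a few lines of Python; the function and variable names below are illustrative only and do not belong to any particular standard or library:

    # Minimal sketch of even-parity error detection over an 8-bit data word.

    def add_parity(bits):
        """Append an even-parity bit so the total count of 1s is even."""
        return bits + [sum(bits) % 2]

    def check_parity(codeword):
        """Return True if parity is consistent (no odd number of bit flips)."""
        return sum(codeword) % 2 == 0

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    sent = add_parity(data)

    # Flip one bit in transit: the error is detected but cannot be located.
    received = sent.copy()
    received[3] ^= 1

    print(check_parity(sent))      # True  -> accepted
    print(check_parity(received))  # False -> single-bit error detected

Note that a second bit flip in the same word would restore even parity and go unnoticed, which is why codes with more redundancy, such as the Reed-Muller and Reed-Solomon codes mentioned above, are used where multiple-bit errors must be detected and corrected.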