**Error detection and correction** have great practical importance in maintaining data (information) integrity across noisy communication channels and less-than-reliable storage media.

**Error Correction:** Send additional information so incorrect data can be corrected and accepted. Error correction is the additional ability to reconstruct the original, error-free data.

There are two basic ways to design the channel code and protocol for an error correcting system:

• **Automatic Repeat-Request (ARQ):** The transmitter sends the data along with an error detection code, which the receiver uses to check for errors and to request re-transmission of erroneous data. In many cases, the request is implicit: the receiver sends an acknowledgement (ACK) of correctly received data, and the transmitter re-sends anything not acknowledged within a reasonable period of time.

• **Forward Error Correction (FEC):** The transmitter encodes the data with an error-correcting code (ECC) and sends the coded message. The receiver never sends any messages back to the transmitter. The receiver decodes what it receives into the “most likely” data. The codes are designed so that it would take an “unreasonable” amount of noise to trick the receiver into misinterpreting the data.

**Error Detection:** Send additional information so incorrect data can be detected and rejected. Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver.

**Error Detection Schemes:** In telecommunication, a redundancy check is extra data added to a message for the purposes of error detection. Several schemes exist to achieve error detection, and they are generally quite simple. All error detection codes transmit more bits than were in the original data. Most codes are “systematic”: the transmitter sends a fixed number of original data bits, followed by a fixed number of check bits (usually referred to as redundancy) which are derived from the data bits by some deterministic algorithm.

The receiver applies the same algorithm to the received data bits and compares its output to the received check bits; if the values do not match, an error has occurred at some point during the transmission. In a system that uses a “non-systematic” code, such as some raptor codes, data bits are transformed into at least as many code bits, and the transmitter sends only the code bits.

**Repetition Schemes:** Variations on this theme exist. Given a stream of data that is to be sent, the data is broken up into blocks of bits, and each block is sent some predetermined number of times. For example, if we want to send “1011”, we may repeat this block three times. Suppose we send “1011 1011 1011”, and this is received as “1010 1011 1011”.

As one group is not the same as the other two, we can determine that an error has occurred. This scheme is not very efficient, and can be susceptible to problems if the error occurs in exactly the same place for each group: e.g., “1010 1010 1010” in the example above would be accepted as correct. The scheme is, however, extremely simple, and is in fact used in some transmissions of numbers stations.
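The repetition scheme above can be sketched in a few lines of Python; the block size, repeat count, and helper names are assumptions for illustration:

```python
def send_repeated(block: str, times: int = 3) -> str:
    """Transmit a block by repeating it a fixed number of times."""
    return " ".join([block] * times)

def check_repeated(received: str) -> bool:
    """Return True if all repeated copies agree (no error detected)."""
    copies = received.split()
    return all(copy == copies[0] for copy in copies[1:])

print(send_repeated("1011"))                # 1011 1011 1011
print(check_repeated("1011 1011 1011"))     # True  - no error detected
print(check_repeated("1010 1011 1011"))     # False - one copy disagrees
print(check_repeated("1010 1010 1010"))     # True  - same-position errors slip through
```

The last call shows the weakness noted above: when every copy is corrupted in the same position, the copies still agree and the error goes undetected.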

**Parity Schemes:** A parity bit is an error detection mechanism. A *parity bit* is an extra bit transmitted with a data item, chosen to give the resulting data item even or odd parity. *Parity* refers to the number of bits set to 1 in the data item. There are two types of parity:

- *Even parity* – an even number of bits are 1 (e.g., data: 10010001, parity bit 1)
- *Odd parity* – an odd number of bits are 1 (e.g., data: 10010111, parity bit 0)

The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a “parity bit” is set (or cleared) if the number of one bits is odd (or even). This scheme is called even parity; odd parity can also be used. There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six, and so on) is flipped, the parity bit appears to be correct, even though the data is corrupt. For example:

- Original data and parity: 10010001+1 (even parity)
- Incorrect data: 10110011+1 (even parity!)

Parity is usually used to catch one-bit errors.
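A minimal sketch of parity computation in Python, using the data items from the examples above (the function name is an assumption for illustration):

```python
def parity_bit(data: str, even: bool = True) -> int:
    """Compute the parity bit for a string of '0'/'1' characters.

    With even parity the bit is chosen so the total number of 1s
    (data plus parity bit) is even; with odd parity, odd.
    """
    ones = data.count("1")
    bit = ones % 2            # this choice makes the total even
    return bit if even else bit ^ 1

print(parity_bit("10010001"))              # 1 (three 1s, so parity bit 1)
print(parity_bit("10010111", even=False))  # 0 (five 1s, already odd)
# An even number of flipped bits goes undetected:
print(parity_bit("10110011"))              # 1 - same parity bit, corrupt data
```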

**Checksum:** A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte values, and their carry value. The sum is negated by means of ones’ complement, and stored or transferred as an extra code word extending the message. On the receiver side, a new checksum may be calculated from the extended message.

If the new checksum is not 0, an error is detected. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Suppose we have a fairly long message, which can reasonably be divided into shorter words (a 128-byte message, for instance). We can introduce an accumulator with the same width as a word (one byte, for instance), and as each word comes in, add it to the accumulator.

When the last word has been added, the contents of the accumulator are appended to the message (as a 129th byte, in this case). The added word is called a *checksum*. Now, the receiver performs the same operation, and checks the checksum. If the checksums agree, we assume the message was sent without error.
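The accumulate-and-negate procedure can be sketched in Python; the 8-bit word width, the sample message, and the function name are assumptions for illustration:

```python
def ones_complement_checksum(message: bytes) -> int:
    """8-bit ones' complement checksum: add the bytes, fold any carry
    back into the low byte, then negate (ones' complement) the sum."""
    total = 0
    for b in message:
        total += b
        total = (total & 0xFF) + (total >> 8)  # end-around carry
    return (~total) & 0xFF

msg = b"HELLO"
cs = ones_complement_checksum(msg)
extended = msg + bytes([cs])  # checksum appended as an extra code word

# Receiver recomputes the checksum over the extended message;
# a result of 0 means no error was detected.
print(ones_complement_checksum(extended))                # 0
print(ones_complement_checksum(b"HELLP" + bytes([cs])))  # nonzero: error detected
```

Because the appended word is the negated sum, summing the extended message and negating again yields 0 exactly when the message arrived unchanged, matching the "if the new checksum is not 0, an error is detected" rule above.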

**Hamming Distance Based Checks:** If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger n+d+1-bit word so that the minimum Hamming distance between each valid mapping is d+1. This way, if one receives an n+d+1-bit word that doesn’t match any word in the mapping (with a Hamming distance x <= d from any word in the mapping), it can successfully detect it as an erroneous word. Even more, d or fewer errors will never transform a valid word into another, because the Hamming distance between each pair of valid words is at least d+1, and such errors only lead to invalid words that are detected correctly.

Given a stream of m*n bits, we can detect x <= d bit errors successfully using the above method on every n-bit word. In fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors. The *Hamming distance* between two bit strings is the number of bits you have to change to convert one to the other. The basic idea of an error-correcting code is to use extra bits to increase the dimensionality of the hypercube, and make sure the Hamming distance between any two valid points is greater than one.

• If the Hamming distance between valid strings is only one, a single bit error results in another valid string. This means we can’t detect an error.

• If it’s two, then changing one bit results in an invalid string, and can be detected as an error. Unfortunately, changing just one more bit can result in another valid string, which means we can’t detect which bit was wrong; so we can detect an error but not correct it.

• If the Hamming distance between valid strings is three, then changing one bit leaves us only one bit away from the original string, but two bits away from any other valid string. This means that if we have a one-bit error, we can figure out which bit is in error; but if we have a two-bit error, it looks like a one-bit error from some other valid string. So we can have single-bit correction, but that’s all.

• Finally, if the Hamming distance is four, then we can correct a single-bit error and detect a double-bit error. This is frequently referred to as a SECDED (Single Error Correct, Double Error Detect) scheme.
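A small Python sketch of Hamming distance and nearest-codeword decoding; the toy set of codewords (which happens to have minimum distance 3, so it can correct any single-bit error) is made up for illustration:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

# Toy code with minimum Hamming distance 3 (assumed for illustration).
codewords = ["000000", "111000", "001110", "110011"]

def decode(received: str) -> str:
    """Nearest-codeword decoding: pick the valid word closest to what arrived."""
    return min(codewords, key=lambda c: hamming_distance(received, c))

print(hamming_distance("10010001", "10110011"))  # 2
print(decode("100000"))  # 000000 - a single-bit error is corrected
```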

**Cyclic Redundancy Checks:** For CRCs we follow some of Peterson & Brown’s notation here.

*k* is the length of the message we want to send, i.e., the number of information bits.

*n* is the total length of the message we will end up sending: the information bits followed by the check bits. Peterson and Brown call this the *code polynomial*.

*n-k* is the number of check bits. It is also the degree of the generating polynomial.

The basic (mathematical) idea is that we pick the n-k check digits in such a way that the code polynomial is divisible by the generating polynomial. Then we send the data, and at the other end we check whether it is still divisible by the generating polynomial: if it is not, we know we have an error; if it is, we hope there was no error. To calculate a CRC, we establish some predefined (n-k+1)-bit number P (called the Polynomial, for reasons relating to the fact that modulo-2 arithmetic is a special case of polynomial arithmetic). We append n-k 0’s to our message, and divide the result by P using modulo-2 arithmetic. The remainder is called the Frame Check Sequence (FCS). We then ship off the message with the remainder appended in place of the 0’s. The receiver can either recompute the FCS and see whether it gets the same answer, or simply divide the whole message (including the FCS) by P and check for a remainder of 0. As an example, let’s set a 5-bit polynomial of 11001, and compute the CRC of a 16-bit message:

```
11001 ) 10011101010101100000
        11001
         1010101010101100000
         11001
          110001010101100000
          11001
              11010101100000
              11001
                 11101100000
                 11001
                   100100000
                   11001
                    10110000
                    11001
                     1111000
                     11001
                       11100
                       11001
                        0101
```

In the division we don’t bother to keep track of the quotient; we don’t care about the quotient. Our only goal here is to get the remainder (0101), which is the FCS. CRCs can actually be computed in hardware using a shift register and a small number of exclusive-or gates.
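The modulo-2 long division above can be checked with a short Python sketch (the function name is an assumption for illustration):

```python
def crc_remainder(message: str, poly: str) -> str:
    """Modulo-2 long division: append len(poly)-1 zeros to the message
    and return the remainder (the Frame Check Sequence)."""
    n_check = len(poly) - 1
    bits = list(message + "0" * n_check)
    for i in range(len(message)):
        if bits[i] == "1":  # only subtract (XOR) when the leading bit is 1
            for j, p in enumerate(poly):
                bits[i + j] = str(int(bits[i + j]) ^ int(p))
    return "".join(bits[-n_check:])

msg = "1001110101010110"           # the 16-bit message from the example
fcs = crc_remainder(msg, "11001")  # divide by the 5-bit polynomial 11001
print(fcs)                                # 0101
print(crc_remainder(msg + fcs, "11001"))  # 0000 - receiver sees no error
```

The second call plays the receiver’s role: with the FCS appended in place of the 0’s, the whole message is divisible by the polynomial, so the remainder comes out zero.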