Coding Theory (Error Detection and Error Correction)

Error Detection and Correction Techniques

Introduction

Error detection and correction are essential in digital communications and storage systems to ensure data integrity. Errors can occur due to noise, interference, or hardware malfunctions.

Basic Concepts

  • Error Detection: Identifying whether errors have occurred in transmitted data
  • Error Correction: Identifying and correcting errors in transmitted data
  • Redundancy: Adding extra bits to detect or correct errors
  • Forward Error Correction (FEC): Ability to correct errors without retransmission

Common Techniques

1. Parity Check

The simplest form of error detection that uses a single parity bit.

Types:

  • Even Parity: Parity bit makes total number of 1s even
  • Odd Parity: Parity bit makes total number of 1s odd

Even Parity Example:

Data: 1 0 1 1 0
Count of 1s: 3 (odd)
Parity bit: 1 (to make the total even)
Transmitted: 1 0 1 1 0 1

Mathematical Example:

Original data (4 bits): \(1101\)

Using even parity:

\[ \text{Count of 1s} = 1 + 1 + 0 + 1 = 3 \text{ (odd)} \] \[ \text{Parity bit} = 1 \text{ (to make total even)} \] \[ \text{Codeword} = 11011 \]

At receiver:

\[ \text{Received codeword: } 11011 \] \[ \text{Count of 1s} = 1 + 1 + 0 + 1 + 1 = 4 \text{ (even)} \to \text{No error} \] \[ \text{If received as } 10011: \] \[ \text{Count of 1s} = 1 + 0 + 0 + 1 + 1 = 3 \text{ (odd)} \to \text{Error detected} \]
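The same check takes only a few lines of code. Below is a minimal Python sketch (function names are illustrative) that reproduces the example above:

```python
def add_even_parity(bits):
    """Append a parity bit so that the total number of 1s is even."""
    parity = sum(bits) % 2              # 1 if the count of 1s is odd
    return bits + [parity]

def check_even_parity(codeword):
    """Return True if the received codeword contains an even number of 1s."""
    return sum(codeword) % 2 == 0

codeword = add_even_parity([1, 1, 0, 1])      # -> [1, 1, 0, 1, 1]
print(codeword, check_even_parity(codeword))  # [1, 1, 0, 1, 1] True
print(check_even_parity([1, 0, 0, 1, 1]))     # False -> error detected
```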

2. Checksum

A value computed from a data set to detect errors during transmission.

Checksum Process:

1. Divide the data into fixed-size segments (words)
2. Add all the words using one's complement arithmetic
3. Take the one's complement of the sum; this is the checksum
4. Append the checksum to the data
5. The receiver adds all received words, including the checksum; the one's complement sum should be all 1s, so its complement is 0 when no error is detected

Mathematical Example:

Data (8-bit words): \(10110010\) and \(11001100\)

\[ \text{Step 1: Sum the words (using one's complement addition)} \] \[ 10110010 + 11001100 = 101111110 \] \[ \text{Step 2: Wrap around the overflow bit} \] \[ 01111110 + 1 = 01111111 \] \[ \text{Step 3: Take 1's complement to get checksum} \] \[ \text{Checksum} = \overline{01111111} = 10000000 \] \[ \text{Step 4: Transmitted data} = 10110010, 11001100, 10000000 \] \[ \text{At receiver:} \] \[ \text{Sum all three words (should result in all 1s if no errors)} \] \[ (10110010 + 11001100) + 10000000 = (01111111) + 10000000 = 11111111 \] \[ \text{1's complement of the final sum} = \overline{11111111} = 00000000 \to \text{Valid} \]
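The same computation in code, assuming 8-bit words as in the example (a minimal Python sketch; function names are illustrative):

```python
def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (one's complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap the overflow back in
    return total

def make_checksum(words, bits=8):
    """One's complement of the one's-complement sum."""
    return (~ones_complement_sum(words, bits)) & ((1 << bits) - 1)

data = [0b10110010, 0b11001100]
checksum = make_checksum(data)
print(f"{checksum:08b}")                           # 10000000

# Receiver: checksum over data + received checksum should be all zeros
print(f"{make_checksum(data + [checksum]):08b}")   # 00000000 -> valid
```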

3. Cyclic Redundancy Check (CRC)

A more powerful error-detecting code using binary division.

CRC Process:

1. Agree on a generator polynomial (e.g., \(x^3 + x + 1 = 1011\))
2. Append \(n\) zeros to the data, where \(n\) is the degree of the generator polynomial
3. Divide the extended data by the generator using modulo-2 (XOR) division
4. Append the remainder (the CRC) to the original data
5. The receiver divides the received codeword by the generator; the remainder should be 0

Mathematical Example:

Data: \(1011001\)

Generator: \(1011\) (\(x^3 + x + 1\))

\[ \text{Step 1: Append 3 zeros to the data} \] \[ \text{Modified data} = 1011001000 \] \[ \text{Step 2: Perform modulo-2 (XOR) division by the generator } 1011 \]

\[ \begin{array}{r} 1011001000 \\ \oplus\ 1011\phantom{001000} \\ \hline 0000001000 \\ \oplus\ \phantom{000000}1011 \\ \hline \phantom{0000000}011 \quad (\text{Remainder}) \end{array} \]

\[ \text{CRC} = 011 \] \[ \text{Transmitted data} = 1011001011 \] \[ \text{At receiver: divide } 1011001011 \text{ by } 1011 \] \[ \text{The remainder is } 000 \text{ if no errors occurred} \]
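The division can be automated. A minimal Python sketch (illustrative function names, bits given as lists) that reproduces the CRC above:

```python
def mod2_divide(bits, generator):
    """Return the remainder of bits divided by generator in GF(2) (XOR division)."""
    n = len(generator) - 1          # degree of the generator polynomial
    work = list(bits)
    for i in range(len(work) - n):
        if work[i] == 1:            # divide only where the leading bit is 1
            for j, g in enumerate(generator):
                work[i + j] ^= g
    return work[-n:]

def crc_remainder(data, generator):
    """Append degree-many zeros, then take the modulo-2 remainder."""
    n = len(generator) - 1
    return mod2_divide(list(data) + [0] * n, generator)

data      = [1, 0, 1, 1, 0, 0, 1]
generator = [1, 0, 1, 1]            # x^3 + x + 1
crc = crc_remainder(data, generator)
print(crc)                                   # [0, 1, 1]
print(mod2_divide(data + crc, generator))    # [0, 0, 0] -> no error detected
```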

Error Correction Techniques

1. Hamming Code

A method that can detect and correct single-bit errors.

Hamming Code (7,4):

Data bits: d1 d2 d3 d4

Parity bits:

  • p1 = d1 ⊕ d2 ⊕ d4 (checks positions 1, 3, 5, 7)
  • p2 = d1 ⊕ d3 ⊕ d4 (checks positions 2, 3, 6, 7)
  • p3 = d2 ⊕ d3 ⊕ d4 (checks positions 4, 5, 6, 7)

Codeword: p1 p2 d1 p3 d2 d3 d4

Error detection (syndrome bits at the receiver):

  • s1 = p1 ⊕ d1 ⊕ d2 ⊕ d4
  • s2 = p2 ⊕ d1 ⊕ d3 ⊕ d4
  • s3 = p3 ⊕ d2 ⊕ d3 ⊕ d4

The syndrome s3 s2 s1, read as a binary number, gives the position of the erroneous bit (000 means no error detected).

Example:

Data: \(1001\) (\(d_1=1, d_2=0, d_3=0, d_4=1\))

\[ p1 = 1 \oplus 0 \oplus 1 = 0 \] \[ p2 = 1 \oplus 0 \oplus 1 = 0 \] \[ p3 = 0 \oplus 0 \oplus 1 = 1 \] \[ \text{Codeword} = 0011001 \] \[ \text{Suppose bit } d_3 \text{ flips (position 6)}, \text{received} = 0011011 \] \[ \text{Calculate syndrome:} \] \[ s1 = 0 \oplus 1 \oplus 0 \oplus 1 = 0 \] \[ s2 = 0 \oplus 1 \oplus 1 \oplus 1 = 1 \] \[ s3 = 1 \oplus 0 \oplus 1 \oplus 1 = 1 \] \[ \text{Syndrome} = s_3s_2s_1 = 110_2 = 6_{10} \to \text{error in position 6} \]
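A small Python sketch of the (7,4) encoder and syndrome decoder (illustrative function names) that reproduces this example:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into the codeword p1 p2 d1 p3 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(r):
    """Return the corrected codeword and the 1-based error position (0 = no error)."""
    p1, p2, d1, p3, d2, d3, d4 = r
    s1 = p1 ^ d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    s2 = p2 ^ d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    s3 = p3 ^ d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome s3 s2 s1 as a binary number
    corrected = list(r)
    if pos:
        corrected[pos - 1] ^= 1     # flip the bit the syndrome points to
    return corrected, pos

codeword = hamming74_encode(1, 0, 0, 1)   # [0, 0, 1, 1, 0, 0, 1]
received = [0, 0, 1, 1, 0, 1, 1]          # bit 6 (d3) flipped
corrected, pos = hamming74_decode(received)
print(pos, corrected)                     # 6 [0, 0, 1, 1, 0, 0, 1]
```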

2. Reed-Solomon Codes

Powerful error-correcting codes used in CDs, DVDs, QR codes, etc.

Reed-Solomon Basics:

  • Works on symbols (groups of bits) rather than individual bits
  • Can correct multiple errors
  • Uses finite field (Galois field) arithmetic
  • Parameters: (n, k), where n is the total number of symbols and k is the number of data symbols
  • Can correct up to \(t = \frac{n-k}{2}\) symbol errors
  • Generator polynomial: \(g(x) = \prod_{i=0}^{2t-1} (x - \alpha^i)\)
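As a minimal illustration of the parameters only (not a full encoder or decoder): the widely used RS(255, 223) code over \(GF(2^8)\) carries 32 parity symbols and can therefore correct up to 16 corrupted symbols (bytes) per block.

```python
def rs_correctable_symbols(n, k):
    """Number of symbol errors an (n, k) Reed-Solomon code can correct."""
    return (n - k) // 2

print(rs_correctable_symbols(255, 223))   # 16
```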

Advanced Forward Error Correction (FEC) Techniques

1. Convolutional Codes

Continuous encoding method with memory, used in wireless and deep-space communications.

Convolutional Encoder (Rate 1/2, K = 3):

  • Shift register with K − 1 = 2 memory elements
  • Generator polynomials: g1 = [1 1 1] (octal 7), g2 = [1 0 1] (octal 5)
  • Each input bit produces two output bits, one per generator

Viterbi Algorithm:

  • Maximum-likelihood decoding
  • Uses a trellis diagram
  • Computes path metrics

Example:

Input: \(1011\)

\[ \text{State transitions (state = two most recent input bits, newest first):} \] \[ S_0 \xrightarrow{1/11} S_2 \xrightarrow{0/10} S_1 \xrightarrow{1/00} S_2 \xrightarrow{1/01} S_3 \] \[ \text{Output sequence: } 11\ 10\ 00\ 01 \] \[ \text{Constraint length } K=3, \text{ code rate } R=1/2 \]
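A Python sketch of this rate-1/2, K = 3 encoder (illustrative names; the state is the two most recent input bits, newest first), which reproduces the output sequence above:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K = 3 convolutional encoder with generators 7 and 5 (octal)."""
    s1 = s0 = 0                      # the two memory elements (newest first)
    out = []
    for u in bits:
        taps = (u, s1, s0)
        o1 = sum(t * g for t, g in zip(taps, g1)) % 2
        o2 = sum(t * g for t, g in zip(taps, g2)) % 2
        out.extend([o1, o2])
        s1, s0 = u, s1               # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))     # [1, 1, 1, 0, 0, 0, 0, 1] -> 11 10 00 01
```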

2. Turbo Codes

Near-capacity codes using parallel concatenation and iterative decoding.

Turbo Encoder Structure:

  • Systematic branch: the input bits are transmitted directly
  • Branch 1: input → convolutional encoder 1 → parity 1
  • Branch 2: input → interleaver → convolutional encoder 2 → parity 2
  • Output: systematic bits + parity 1 + parity 2

Key Parameters:

  • Component codes: typically RSC (Recursive Systematic Convolutional) encoders
  • Interleaver: random or structured permutation
  • Code rate: can be adjusted by puncturing
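A hedged Python sketch of the parallel-concatenation idea (the RSC component generators 7/5 octal and the toy random interleaver are chosen here only for illustration; standardized systems use specific generators and interleaver designs):

```python
import random

def rsc_parity(bits):
    """Parity stream of a small recursive systematic convolutional (RSC) encoder
    with feedback 1 + D + D^2 (octal 7) and feedforward 1 + D^2 (octal 5)."""
    s1 = s0 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s0              # feedback sum
        parity.append(a ^ s0)        # feedforward taps 1 and D^2
        s1, s0 = a, s1
    return parity

def turbo_encode(bits, interleaver):
    """Rate-1/3 output: systematic bits plus one parity stream per component encoder."""
    parity1 = rsc_parity(bits)
    parity2 = rsc_parity([bits[i] for i in interleaver])
    return list(bits), parity1, parity2

msg = [1, 0, 1, 1, 0, 0, 1, 0]
pi = list(range(len(msg)))
random.seed(0)
random.shuffle(pi)                   # toy random interleaver
print(turbo_encode(msg, pi))
```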

Performance:

\[ \text{SNR gap to Shannon limit} \approx 0.5\text{dB} \] \[ P_e \approx 10^{-5} \text{ at } E_b/N_0 = 0.7\text{dB} \text{ for rate } 1/2 \] \[ \text{Iterative decoding uses BCJR or SOVA algorithms} \]

3. LDPC (Low-Density Parity-Check) Codes

Codes defined by a sparse parity-check matrix, offering excellent performance and highly parallelizable decoding.

LDPC Characteristics:

  • Sparse parity-check matrix H (few 1s)
  • Tanner graph representation
  • Message-passing decoding (belief propagation)
  • Quasi-cyclic structure for efficient implementation

Example H matrix:

\[ H = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ \end{bmatrix} \]
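To make the parity-check relation concrete, the sketch below verifies \(Hc^T = 0 \pmod 2\) for one codeword of the example matrix (the codeword was obtained by hand from the four parity equations; names are illustrative). This is only the syndrome check, not the belief-propagation decoder itself:

```python
import numpy as np

H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
])

def syndrome(H, c):
    """All-zero syndrome means every parity check is satisfied."""
    return H.dot(c) % 2

c = np.array([1, 1, 0, 0, 1, 0])    # one valid codeword for this H
print(syndrome(H, c))               # [0 0 0 0]

r = c.copy()
r[3] ^= 1                           # a single bit error
print(syndrome(H, r))               # [1 0 0 1] -> checks 1 and 4 fail
```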

Decoding Process:

\[ \text{Initialization: } q_{ij}(0) = P(x_i=0|y_i), q_{ij}(1) = P(x_i=1|y_i) \] \[ \text{Check node update: } r_{ji}(0) = \frac{1}{2} + \frac{1}{2}\prod_{i'\in V_j\setminus i}(1-2q_{i'j}(1)) \] \[ \text{Variable node update: } q_{ij}(0) = \alpha_{ij}f_i(0)\prod_{j'\in C_i\setminus j}r_{j'i}(0) \] \[ \text{Where } \alpha_{ij} \text{ is normalization factor} \]

4. Polar Codes

First provably capacity-achieving codes with low-complexity encoding/decoding.

Polar Code Construction:

  • Channel polarization: \(N\) identical channel uses are transformed into \(N\) polarized channels
  • Information bits are placed on the reliable channels
  • Frozen bits (known values, usually 0) are placed on the unreliable channels
  • Generator matrix: \(G_N = B_NF^{\otimes n}\), where \(F = \begin{bmatrix}1 & 0\\1 & 1\end{bmatrix}\) and \(B_N\) is the bit-reversal permutation
  • Decoding: Successive Cancellation (SC)

Example (N=4):

\[ \text{Channel indices sorted by reliability (most reliable first): } 4, 3, 2, 1 \] \[ \text{For rate } R=0.5, \text{ use the two most reliable channels} \] \[ \text{Information set } \mathcal{A} = \{3,4\}, \text{ frozen set } \mathcal{A}^c = \{1,2\} \] \[ \text{Encoding: } x_1^4 = u_1^4G_4, \text{ where the frozen bits } u_1=u_2=0 \] \[ G_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \end{bmatrix} \]
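A Python sketch of this \(N=4\) encoding using the \(G_4\) above (0-indexed positions, so the frozen bits \(u_1, u_2\) correspond to indices 0 and 1; names are illustrative):

```python
import numpy as np

# G_4 = B_4 F^{(x)2} from the example above
G4 = np.array([
    [1, 0, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
])

def polar_encode(info_bits, info_set, N=4, G=G4):
    """Place information bits on the reliable positions, zeros on the frozen ones."""
    u = np.zeros(N, dtype=int)
    u[list(info_set)] = info_bits    # frozen positions stay 0
    return u.dot(G) % 2

x = polar_encode([1, 0], info_set=[2, 3])   # u = [0, 0, 1, 0]
print(x)                                    # [1 1 0 0]
```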

Comparison of Techniques

| Technique | Type | Error Capability | Complexity | Applications |
|---|---|---|---|---|
| Parity | Detection | Single-bit detection | Low | Memory, simple comms |
| Checksum | Detection | Multi-bit detection (weaker than CRC) | Medium | Network protocols |
| CRC | Detection | Multi-bit and burst detection | Medium | Storage, networking |
| Hamming | Correction | Single-bit correction | Medium | ECC memory |
| Reed-Solomon | Correction | Multi-symbol correction | High | CDs, DVDs, QR codes |
| Convolutional | FEC | Depends on constraint length | High (decoding) | Wireless, deep-space |
| Turbo | FEC | Near Shannon limit | Very high | 3G/4G, satellite |
| LDPC | FEC | Near capacity | High | 5G, DVB-S2, Wi-Fi 6 |
| Polar | FEC | Capacity-achieving | Medium | 5G NR control channels |

Conclusion

Error detection and correction techniques form a hierarchy from simple parity checks to sophisticated capacity-approaching codes like LDPC and Polar codes. Modern communication systems (especially 5G) employ these advanced FEC techniques to achieve near-optimal performance while maintaining reasonable implementation complexity.

The field continues to evolve with new coding schemes and improved decoding algorithms, enabling reliable communication in increasingly challenging environments with higher data rates and lower latency requirements.
