Igal Sason and Shlomo Shamai (2006), "Performance Analysis of Linear Codes under Maximum-Likelihood Decoding: A Tutorial", Foundations and Trends® in Communications and Information Theory: Vol. 3: No. 1–2, pp 1-222. http://dx.doi.org/10.1561/0100000009

© 2009 I. Sason and S. Shamai

**In this article:**

1. A Short Overview

2. Union Bounds: How Tight Can They Be?

3. Improved Upper Bounds for Gaussian and Fading Channels

4. Gallager-Type Upper Bounds: Variations, Connections and Applications

5. Sphere-Packing Bounds on the Decoding Error Probability: Classical and Recent Results

6. Lower Bounds Based on de Caen's Inequality and Recent Improvements

7. Concluding Remarks

Acknowledgments

References

This article focuses on the performance evaluation of linear codes under optimal maximum-likelihood (ML) decoding. Though the ML decoding algorithm is prohibitively complex for most practical codes, analyzing their performance under ML decoding makes it possible to predict their performance without resorting to computer simulations. It also provides a benchmark for assessing the sub-optimality of iterative (or other practical) decoding algorithms. This analysis also establishes the goodness of linear codes (or ensembles), determined by the gap between their achievable rates under optimal ML decoding and information-theoretic limits. In this article, upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds with a focus on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we address bounds based on de Caen's inequality and their improvements, and also consider sphere-packing bounds with their recent improvements targeting codes of moderate block lengths.
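As a concrete illustration of the simplest bound surveyed here, the sketch below evaluates the classical union bound on the block error probability of a binary linear code under ML decoding with BPSK over an AWGN channel, $P_e \le \sum_{d \ge 1} A_d\, Q\!\left(\sqrt{2 d R\, E_b/N_0}\right)$, using the weight distribution $\{A_d\}$ of the (7,4) Hamming code. This is a standard textbook computation, not code from the article; the function and variable names are our own.

```python
import math

def q_func(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(weight_dist, n, k, ebno_db):
    """Union bound on the block error probability of an (n, k) binary
    linear code under ML decoding, BPSK over AWGN:
        P_e <= sum_{d >= 1} A_d * Q(sqrt(2 * d * R * Eb/N0)),
    where R = k/n and weight_dist maps Hamming weight d to A_d."""
    rate = k / n
    ebno = 10.0 ** (ebno_db / 10.0)  # convert dB to linear scale
    return sum(a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in weight_dist.items() if d > 0)

# Weight distribution of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1
hamming_74 = {0: 1, 3: 7, 4: 7, 7: 1}
for snr_db in (2.0, 4.0, 6.0):
    print(f"Eb/N0 = {snr_db} dB: P_e <= {union_bound(hamming_74, 7, 4, snr_db):.3e}")
```

As the article discusses, this bound is informative only above the channel's cutoff-rate region; at low SNR it can exceed 1 and becomes useless, which motivates the tighter Gallager-type bounds surveyed in later sections.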



*Performance Analysis of Linear Codes under Maximum-Likelihood Decoding: A Tutorial* is a comprehensive introduction to this
important topic for students, practitioners and researchers working in communications and information theory.

**Erratum 5.3 The 1967 sphere-packing bound**

Commentary Submitted By: Igal Sason, Technion, Israel Institute of Technology, sason@ee.technion.ac.il. Date Accepted: 8/10/2006

- Description: The correction refers to the statement in Theorem 5.11 (pp. 175-176). In the LHS of Eq. (5.49), it should be written $O_1\left(\frac{\ln N}{N}\right)$ instead of $O_1\left(\frac{1}{N}\right)$. The same correction also applies to the LHS of the first line in Eq. (5.50), in light of the expression in the RHS of this equation. Finally, the line before the end of Theorem 5.11 (p. 176) should read "scale like $\frac{\ln N}{N}$ and the inverse of the square root of $N$, respectively".

**Erratum 5.4 Sphere-packing bounds revisited for moderate block lengths**

Commentary Submitted By: Igal Sason, Technion, Israel Institute of Technology, sason@ee.technion.ac.il. Date Accepted: 8/10/2006

- Description: In Theorem 5.12 (pp. 178–179), the definition of $\beta_{j,k,\rho}$ is missing. It should read
$$\beta_{j,k,\rho} = P(j|k)^{\frac{1}{1+\rho}} \cdot \left(\sum_{k'} q_{k',\rho}\, P(j|k')^{\frac{1}{1+\rho}}\right)^{\rho}$$
and this equation should appear between (5.57) and (5.58).