By codes, we generally mean linear subspaces \(C \subseteq \mathbb{F}_q^n\) for some finite field \(\mathbb{F}_q\) and some length \(n\). Sometimes codes are not linear, and sometimes they live inside more exotic finite sets such as graphs or matrix rings over finite fields.
Let \(Q\) be a finite set of \(q\) elements, called the alphabet. We call a subset \(C \subseteq Q^n\) a code over the alphabet \(Q\). Individual elements of \(C\) are called codewords.
For \(x, y \in Q^n\), the Hamming distance \(d(x,y)\) is the number of coordinates in which \(x\) and \(y\) differ. The space \(Q^n\) equipped with the Hamming metric is called a Hamming space.
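As a quick illustration (not part of the original definitions), here is a minimal Python sketch of the Hamming distance on words over an alphabet:

```python
def hamming_distance(x, y):
    """Number of coordinates in which the words x and y differ."""
    assert len(x) == len(y), "Hamming distance is defined for words of equal length"
    return sum(1 for a, b in zip(x, y) if a != b)

# Example over the binary alphabet Q = {0, 1} with n = 5:
print(hamming_distance("10110", "11100"))  # 2 (the words differ in two positions)
```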
We are interested in codes in \(Q^n\) whose points are far apart with respect to this metric. Hence, a useful parameter for a given code \(C \subseteq Q^n\) is its minimum distance
\[\begin{equation} d(C) := \min \{ d(x,y) \ | \ x,y \in C,\ x \neq y\}. \label{eq:defi_of_mind} \end{equation}\]
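Since the codes considered here are finite, \(d(C)\) can be computed by brute force over all pairs of distinct codewords. A short sketch, assuming the code is given as a list of equal-length strings:

```python
from itertools import combinations

def minimum_distance(code):
    """d(C): the smallest Hamming distance between two distinct codewords.

    Brute force over all pairs, so only suitable for small codes.
    """
    return min(
        sum(a != b for a, b in zip(x, y))
        for x, y in combinations(code, 2)
    )

# The binary parity-check code of length 3 (all words with an even number of 1s):
C = ["000", "011", "101", "110"]
print(minimum_distance(C))  # 2
```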
We often normalize this quantity as follows, calling the result the normalized minimum distance [1] or the relative minimum distance:
\[\begin{equation} \delta(C) := \frac{1}{n} d(C). \label{eq:defi_of_min} \end{equation}\]
We also define the information rate, or simply the rate, which roughly measures the size of the code \(C\) relative to the ambient space:
\[\begin{equation} R(C) : = \log_{q^n}|C| = \frac{\log |C|}{ \log q^n} = \frac{\log_q |C| }{n} . \end{equation}\]
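To see both parameters at once on a concrete example, the following sketch (illustrative, using the brute-force approach above) computes \(\delta(C)\) and \(R(C)\) for the binary repetition code, which has \(d(C) = n\) and \(|C| = 2\), so \(\delta = 1\) while \(R = 1/n\):

```python
import math
from itertools import combinations

def relative_distance(code):
    """delta(C) = d(C) / n for a code whose words all have length n."""
    n = len(next(iter(code)))
    d = min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(code, 2))
    return d / n

def rate(code, q):
    """R(C) = log_q |C| / n."""
    n = len(next(iter(code)))
    return math.log(len(code), q) / n

# Binary repetition code of length 5: maximal relative distance, minimal rate.
C = ["00000", "11111"]
print(relative_distance(C))  # 1.0  (d(C) = n)
print(rate(C, q=2))          # 0.2  (R(C) = 1/n)
```

The repetition code sits at one extreme of the trade-off described next: excellent distance, vanishing rate.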
The goal of coding theory is to make \(\delta(C)\) as large as possible for a given rate \(R(C)\), or conversely to make \(R(C)\) as large as possible for a given \(\delta(C)\). We let \(n\) grow large to obtain asymptotic estimates.
See the page about LP bounds for the Hamming distance.
See the interesting paper: https://arxiv.org/pdf/2503.10201