Kravchuk polynomials

Kravchuk polynomials are defined by the following equation. For \(k \in \mathbb{N}\), \(2 \le q \in \mathbb{N}\) and \(n \in \mathbb{N}\), we have

\[\begin{equation}\label{eq:defi_of_krav} K_k(x;n,q): = \sum_{j=0}^{ k} (-1)^{j} (q-1) ^{k-j} {x \choose j} { n - x \choose k - j}, \end{equation}\]

where \({ x \choose j}\) denotes \(\frac{(x)_j}{j!}\) and \((x)_j\) is the falling factorial given by \[\begin{equation} (x)_j := x(x-1)(x-2)\dots (x-j+1). \end{equation}\]

We will denote \(K_k(x;n,q)\) by \(K_k(x)\) when there is no ambiguity. Note that \(K_0(x)\) is always the constant polynomial \(1\). Also, since the degree of \((x)_j\) is \(j\) and that of \((n-x)_{k-j}\) is \(k-j\), the degree of \(K_k(x;n,q)\) in \(x\) is \(k\) (the coefficient of \(x^{k}\) in \(K_k(x;n,q)\) is \(\frac{(-q)^k}{k!}\)).
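
As a quick sanity check of the definition (not part of the argument above), here is a small Python/SymPy sketch that builds \(K_k(x;n,q)\) from the defining sum and confirms that the coefficient of \(x^k\) is \(\frac{(-q)^k}{k!}\); the helper name `kravchuk` and the parameters are arbitrary choices.

```python
import sympy as sp

x = sp.symbols('x')

def kravchuk(k, n, q):
    """K_k(x; n, q) as a polynomial in x, built from the defining sum."""
    return sp.expand(sum((-1)**j * (q - 1)**(k - j)
                         * sp.binomial(x, j) * sp.binomial(n - x, k - j)
                         for j in range(k + 1)))

n, q = 7, 3
for k in range(5):
    K = kravchuk(k, n, q)
    # the coefficient of x^k should be (-q)^k / k!
    assert sp.Poly(K, x).LC() == sp.Integer(-q)**k / sp.factorial(k)
    print(k, K)
```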

Fourier transforms of spheres

Choose a primitive \(q\)th root of unity \(\zeta \in \mathbb{C}\). Let \(Q^n\) be the Hamming space over an alphabet of size \(q\).

For any fixed \(x \in Q^n\) whose weight is \(w(x)\), the following equation holds for a fixed \(k\). \[\begin{equation} \sum_{\substack{y \in Q^n \\ w(y) = k}}^{ } \zeta^{\langle x, y\rangle} = K_k( w(x) ) . \end{equation}\] Here \(\langle x, y\rangle\) is the sum \(\sum_{1\le i \le n}^{} x_i y_i\), where \(x=(x_1,x_2, \dots , x_n)\) and \(y = (y_1, y_2 ,\dots ,y_n)\).

The proof follows by direct computation.
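
For concreteness, the lemma can also be verified by brute force on a small Hamming space. The sketch below (with the arbitrary choices \(q=3\), \(n=4\), and with the alphabet identified with \(\mathbb{Z}/q\mathbb{Z}\)) compares the character sum over each sphere with \(K_k(w(x))\).

```python
from itertools import product
from math import comb
import cmath

q, n = 3, 4
zeta = cmath.exp(2j * cmath.pi / q)          # a primitive q-th root of unity

def weight(v):
    return sum(1 for a in v if a != 0)

def kravchuk(k, x):
    # K_k(x; n, q) for integer x in {0, ..., n}
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

space = list(product(range(q), repeat=n))
for x in space:
    for k in range(n + 1):
        s = sum(zeta ** sum(xi * yi for xi, yi in zip(x, y))
                for y in space if weight(y) == k)
        assert abs(s - kravchuk(k, weight(x))) < 1e-8
print("sphere sums match K_k(w(x)) for q =", q, "and n =", n)
```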

Consider the \(\mathbb{C}\)-vector space \(H=\mathop{\mathrm{Maps}}(Q^n, \mathbb{C})\) and recall the Fourier transform \(\mathcal{F}\). The above lemma tells us that if \(0 \le k \le n\) is fixed and \(f_k=\mathbb{I}_{w( \bullet ) = k} \in H\) is the indicator function that is \(1\) if the weight of the argument is \(k\) and \(0\) otherwise, then \(\mathcal{F}(f_k)(x)=K_{k}(w(x))\).

In this interpretation, Kravchuk polynomials are Fourier transforms of spherical distributions. Hence, they are analogous to Bessel functions.

Properties

Using the Fourier transform on Hamming spaces, we can prove many properties of the Kravchuk polynomials.

Exchanging index and argument

\[\begin{equation} (q-1)^{i} {n \choose i} K_k(i) = (q-1)^{k}{n \choose k} K_i(k), \end{equation}\] where \(i, k \in \left\{ 0,1,\dots,n \right\}\).

By the lemma above, for each \(x\) with \(w(x)=k\) the inner sum \(\sum_{w(y)=i}\zeta^{\langle x,y\rangle}\) equals \(K_i(k)\), and there are \((q-1)^{k}{n \choose k}\) such \(x\); evaluating the same double sum with the roles of \(x\) and \(y\) exchanged gives \((q-1)^{i}{n \choose i}K_k(i)\). Hence \[\begin{align} (q-1)^{i} {n \choose i} K_k(i) = \sum_{w(x) = k} \sum_{w(y) = i}^{} \zeta^{\langle x,y\rangle} = (q-1)^{k}{n \choose k} K_i(k). \end{align}\]
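
A quick numerical check of the symmetry relation, with arbitrarily chosen parameters:

```python
from math import comb

def kravchuk(k, x, n, q):
    # K_k(x; n, q) for integer x in {0, ..., n}
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

n, q = 6, 4
for i in range(n + 1):
    for k in range(n + 1):
        lhs = (q - 1)**i * comb(n, i) * kravchuk(k, i, n, q)
        rhs = (q - 1)**k * comb(n, k) * kravchuk(i, k, n, q)
        assert lhs == rhs
print("symmetry verified for n =", n, "and q =", q)
```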

Power series

For a formal variable \(z\), the following holds. \[\begin{equation} \sum_{ k = 0 }^{ n} K_k(x) z^k = (1+(q-1)z)^{n-x} (1-z)^x.\end{equation}\]
It follows immediately by expanding the two factors on the right-hand side with the binomial theorem and comparing the coefficients of \(z^k\).
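
The identity can also be confirmed symbolically for small parameters; the sketch below expands the right-hand side in \(z\) at the integer points \(x = 0,1,\dots,n\) and compares it with the left-hand side (parameters arbitrary).

```python
import sympy as sp

z = sp.symbols('z')

def kravchuk(k, x, n, q):
    return sum((-1)**j * (q - 1)**(k - j)
               * sp.binomial(x, j) * sp.binomial(n - x, k - j)
               for j in range(k + 1))

n, q = 5, 3
for x in range(n + 1):                      # integer values of the argument
    rhs = sp.expand((1 + (q - 1) * z)**(n - x) * (1 - z)**x)
    lhs = sp.expand(sum(kravchuk(k, x, n, q) * z**k for k in range(n + 1)))
    assert sp.expand(lhs - rhs) == 0
print("generating function verified for n =", n, "and q =", q)
```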

Recurrence formula

Recursion on indices

For \(k \in \{ 1,\dots, n-1\}\), \[\begin{equation}\label{eq:recur_on_krav} (k+1)K_{k+1}(x) = \left( k + (q-1)(n-k) -qx\right) K_k(x) - (q-1)(n-k+1)K_{k-1}(x) . \end{equation}\]

Since both sides have degree \(k+1 \le n\), it suffices to verify the equality at the \(n+1\) points \(x = 0,1,\dots,n\). Substituting \(x = w(y)\) for \(y \in Q^n\), what we really want to show is the following equality for all \(y \in Q^n\):

\[\begin{align} w(y) K_k( w(y)) = - \left( \frac{k+1}{q} \right) K_{k+1}( w(y)) + \left( \frac{k}{q} + \left( 1 - \frac{1}{q} \right) \left( n-k \right) \right) K_k( w(y)) \\ - \left( 1- \frac{1}{q} \right)(n-k+1) K_{k-1}(w(y)). \label{eq:equality_in_U} \end{align}\]

Recall that \(K_k \circ w = \mathcal{F}(f_k)\), where \(f_k\) is the weight-indicator function \(\mathbb{I}_{w(\bullet)=k}\). Now define the function \(h \in H\) as

\[\begin{equation} h(x): = \begin{cases}n\left( 1- \frac{1}{q} \right) & \text{if } w(x) = 0 \\ -\frac{1}{q} & \text{if } w(x) =1 \\ 0 & \text{otherwise} \end{cases} . \label{eq:defi_of_h} \end{equation}\]

We invite the reader to check that \(\mathcal{F}(h)=w\), the weight function. Using this and the convolution theorem for this Fourier transform, the left-hand side of the desired equality becomes \(\mathcal{F}(h)\,\mathcal{F}(f_k) = \mathcal{F}(h * f_k)\), so the equality can be rewritten as \[\begin{align} \mathcal{F}(h * f_k) = - \left( \frac{k+1}{q} \right) \mathcal{F}(f_{k+1}) + \left( \frac{k}{q} + \left( 1 - \frac{1}{q} \right) \left( n-k \right) \right) \mathcal{F}(f_k) \\ - \left( 1- \frac{1}{q} \right)(n-k+1) \mathcal{F}(f_{k-1}). \end{align}\] Since \(\mathcal{F}\) is a \(\mathbb{C}\)-linear isomorphism, we only need to prove that the following is true.

\[\begin{align} h * f_k = - \left( \frac{k+1}{q} \right) f_{k+1} + \left( \frac{k}{q} + \left( 1 - \frac{1}{q} \right) \left( n-k \right) \right) f_k - \left( 1- \frac{1}{q} \right)(n-k+1) f_{k-1}. \end{align}\] This is now easy to check. For any \(y \in Q^n\), we have \(h*f_k(y) = \sum_{x \in Q^n}^{} h(x) f_k(y-x)\). The function \(h(x)\) is non-zero only for \(w(x) \le 1\), and for such \(x\) we have \(w(y)-1 \le w(y-x) \le w(y)+1\). Since \(f_k(y-x)\) is non-zero only when \(w(y-x)=k\), we conclude that \(h*f_k(y)\) is non-zero only if \(w(y) \in \left\{ k-1,k,k+1 \right\}\). Checking each of these three cases gives the respective coefficients of \(f_{k-1}\), \(f_{k}\) and \(f_{k+1}\) on the right.
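
The convolution identity at the heart of this proof can be checked directly on a small Hamming space. In the sketch below the space is \((\mathbb{Z}/q\mathbb{Z})^n\) with componentwise subtraction, and the names `f`, `h` mirror the functions above; the parameters are arbitrary.

```python
from itertools import product
from fractions import Fraction

q, n, k = 3, 4, 2                       # small arbitrary parameters, 1 <= k <= n-1
space = list(product(range(q), repeat=n))

def weight(v):
    return sum(1 for a in v if a != 0)

def f(j, v):                            # indicator of the sphere of radius j
    return 1 if weight(v) == j else 0

def h(v):                               # the function h from the proof
    if weight(v) == 0:
        return Fraction(n * (q - 1), q)
    if weight(v) == 1:
        return Fraction(-1, q)
    return Fraction(0)

def sub(y, x):                          # componentwise difference mod q
    return tuple((a - b) % q for a, b in zip(y, x))

for y in space:
    conv = sum(h(x) * f(k, sub(y, x)) for x in space)
    rhs = (-Fraction(k + 1, q) * f(k + 1, y)
           + (Fraction(k, q) + Fraction(q - 1, q) * (n - k)) * f(k, y)
           - Fraction(q - 1, q) * (n - k + 1) * f(k - 1, y))
    assert conv == rhs
print("convolution identity verified for q, n, k =", q, n, k)
```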

In [1] and [2], this result is obtained by “differentiating” the following formal power series with respect to \(z\). \[\begin{equation} \sum_{ k = 0 }^{ \infty} K_k(x;n,q) z^k = (1+(q-1)z)^{n-x} (1-z)^x . \end{equation}\]

This has the benefit that the condition \(k \in \{ 1,2,\dots,n-1\}\) can be relaxed.
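
For small parameters, the three-term recurrence above can also be verified symbolically as an identity of polynomials in \(x\) (a sketch, not a proof; parameters arbitrary):

```python
import sympy as sp

x = sp.symbols('x')

def kravchuk(k, n, q):
    return sp.expand(sum((-1)**j * (q - 1)**(k - j)
                         * sp.binomial(x, j) * sp.binomial(n - x, k - j)
                         for j in range(k + 1)))

n, q = 6, 4
for k in range(1, n):
    lhs = (k + 1) * kravchuk(k + 1, n, q)
    rhs = ((k + (q - 1) * (n - k) - q * x) * kravchuk(k, n, q)
           - (q - 1) * (n - k + 1) * kravchuk(k - 1, n, q))
    assert sp.expand(lhs - rhs) == 0
print("three-term recurrence verified for n =", n, "and q =", q)
```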

Recursion on argument

Another recursion formula is the following.

For \(k \in \left\{ 1,2,\dots,n-1 \right\}\) and \(i \in \left\{ 0,\dots , n \right\}\), we obtain \[\begin{equation} (n-k) (q-1)K_i(k+1) = \left( k + (q-1)(n-k) -qi \right) K_i(k) - k K_i(k-1) . \end{equation}\]

We now simply substitute our symmetry relations from above into the previous formula and obtain the following (after cancelling out \(\frac{(q-1)^{k}}{(q-1)^{i}{ n \choose i} }\)).

\[\begin{align} (k+1){ n \choose k+1} (q-1)K_i(k+1) & = \left( k + (q-1)(n-k) -qi \right){n \choose k} K_i(k)\\ &\ \ \ -\ (q-1)(n-k+1) { n \choose k-1 } (q-1)^{-1}K_i(k-1) . \end{align}\]

Using that \(\binom{n}{k} = \frac{n+1-k}{k} \binom{n}{k-1}\) and \({ n \choose k+1} = \frac{ n - k }{ k + 1} { n \choose k }\), we get that \[\begin{align} (n-k) (q-1)K_i(k+1) & = \left( k + (q-1)(n-k) -qi \right) K_i(k) - k K_i(k-1) . \end{align}\]
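
A quick numerical check of this second recurrence at the integer points, with arbitrarily chosen parameters:

```python
from math import comb

def kravchuk(k, x, n, q):
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

n, q = 7, 3
for k in range(1, n):
    for i in range(n + 1):
        lhs = (n - k) * (q - 1) * kravchuk(i, k + 1, n, q)
        rhs = ((k + (q - 1) * (n - k) - q * i) * kravchuk(i, k, n, q)
               - k * kravchuk(i, k - 1, n, q))
        assert lhs == rhs
print("recursion on the argument verified for n =", n, "and q =", q)
```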

Christoffel-Darboux formula

Another result we need is the so-called Christoffel-Darboux formula. In both [1] and [2], it is stated only for \(q=2\), without further comment. Here is the general version.

For \(k \in \left\{ 0, 1,\dots,n-1 \right\}\), the following identity is true in \(\mathbb{C}[x,y]\). \[\begin{align} K_{k+1}(x)K_{k}(y) - K_{k+1}(y)K_k(x) = (y-x) \left[ \frac{q}{k+1} {n \choose k } \sum_{i=0}^{k} (q-1)^{k-i}{ n \choose i }^{\hspace{-0.15cm} -1} \hspace{-0.15cm} K_i(x) K_i(y) \right]. \end{align}\]

For \(k=0\), it is trivial. For \(k\ge 1\), it follows from the recurrence formula on indices that \[\begin{align} & {n\choose k}^{\hspace{-0.15cm}-1}\hspace{-0.15cm} (k+1)[K_{k+1}(x)K_k(y) - K_{k+1}(y) K_{k}(x) ] =\\ & q {n\choose k } ^{\hspace{-0.15cm}-1} \hspace{-0.15cm} (y-x)K_k(x)K_k(y) + (q-1) {n\choose k-1}^{\hspace{-0.15cm}-1}\hspace{-0.15cm}k\left[ K_k(x)K_{k-1}(y) - K_{k}(y)K_{k-1}(x) \right] . \end{align}\] Notice that the last term is just the starting term with a lower index, multiplied by \(q-1\). The rest follows by induction on \(k\).
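
The general formula can be verified symbolically for small parameters; the following sketch checks it as an identity in \(\mathbb{C}[x,y]\) for the arbitrary choices \(n=5\), \(q=3\).

```python
import sympy as sp

x, y = sp.symbols('x y')

def kravchuk(k, arg, n, q):
    # K_k(arg; n, q) with a symbolic argument
    return sum((-1)**j * (q - 1)**(k - j)
               * sp.binomial(arg, j) * sp.binomial(n - arg, k - j)
               for j in range(k + 1))

n, q = 5, 3
for k in range(n):
    lhs = (kravchuk(k + 1, x, n, q) * kravchuk(k, y, n, q)
           - kravchuk(k + 1, y, n, q) * kravchuk(k, x, n, q))
    rhs = (y - x) * sp.Rational(q, k + 1) * sp.binomial(n, k) * sum(
        (q - 1)**(k - i) / sp.binomial(n, i)
        * kravchuk(i, x, n, q) * kravchuk(i, y, n, q)
        for i in range(k + 1))
    assert sp.expand(lhs - rhs) == 0
print("Christoffel-Darboux formula verified for n =", n, "and q =", q)
```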

Christoffel-Darboux equations are general results that hold for orthogonal polynomials. Indeed, the Kravchuk polynomials \(\left\{ K_k(x) \right\}_{0 \le k \le n}\) are orthogonal polynomials on the domain \([0,n]\), with respect to the discrete weight \({n \choose i}(q-1)^{i}\) on \(\{0,1,\dots,n\}\), as they satisfy \[\sum_{i=0}^{n} {n \choose i} (q-1)^{i} K_r(i) K_s(i) = q^n (q-1)^{r} { n \choose r } \delta_{r s }.\] One can easily prove this orthogonality identity using the Fourier-transform interpretation above together with the Parseval identity for \(\mathcal{F}\). See [3] for the general theory.
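
The orthogonality relation itself is easy to check numerically (parameters arbitrary):

```python
from math import comb

def kravchuk(k, x, n, q):
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

n, q = 6, 3
for r in range(n + 1):
    for s in range(n + 1):
        total = sum(comb(n, i) * (q - 1)**i
                    * kravchuk(r, i, n, q) * kravchuk(s, i, n, q)
                    for i in range(n + 1))
        expected = q**n * (q - 1)**r * comb(n, r) if r == s else 0
        assert total == expected
print("orthogonality verified for n =", n, "and q =", q)
```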

Multiplication of Kravchuk polynomials

The next lemma tells us how Kravchuk polynomials multiply.

For \(k, k' \in \left\{ 0 , 1, \dots , n \right\}\), the following identity holds for all \(x \in \{0,1,\dots,n\}\). \[\begin{equation} K_{k}(x) K_{k'}(x) = \sum_{ j = 0 }^{ n } c_{j, k , k'} K_j(x), \end{equation}\] where \(c_{j , k , k'} = \# \{ u \in Q^n \ | \ w(u) = k , \ w(z - u) = k'\}\) for any fixed \(z \in Q^n\) with \(w(z) = j\) (the count does not depend on the choice of \(z\)). In particular, \(c_{j, k , k'} \ge 0\).

It is sufficient to check that the convolution of the indicator functions satisfies \[\begin{align} f_k * f_{k'} = \sum_{j=0}^{n} c_{j , k ,k'} f_j. \end{align}\] This is immediate: \(f_k * f_{k'}(z) = \#\{ u \in Q^n \mid w(u) = k,\ w(z-u) = k'\}\), which by the symmetry of the Hamming space depends only on \(w(z)\) and equals \(c_{w(z),k,k'}\). Taking the Fourier transform then gives the stated identity at the integer points \(x = 0,1,\dots,n\); when \(k + k' \le n\), comparing degrees upgrades it to an identity of polynomials.
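
The coefficients \(c_{j,k,k'}\) can be computed by brute force on a small Hamming space and the expansion checked at the integer points; in the sketch below the representative \(z\) of weight \(j\) is a fixed arbitrary choice, as are the parameters.

```python
from itertools import product
from math import comb

q, n = 3, 3
space = list(product(range(q), repeat=n))

def weight(v):
    return sum(1 for a in v if a != 0)

def kravchuk(k, x):
    return sum((-1)**j * (q - 1)**(k - j) * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

def sub(z, u):                                  # componentwise difference mod q
    return tuple((a - b) % q for a, b in zip(z, u))

def c(j, k, kp):
    # c_{j,k,k'}: count of u with w(u) = k and w(z - u) = k' for one fixed z of weight j
    z = tuple([1] * j + [0] * (n - j))
    return sum(1 for u in space if weight(u) == k and weight(sub(z, u)) == kp)

for k in range(n + 1):
    for kp in range(n + 1):
        for i in range(n + 1):                  # check at the integer points
            lhs = kravchuk(k, i) * kravchuk(kp, i)
            rhs = sum(c(j, k, kp) * kravchuk(j, i) for j in range(n + 1))
            assert lhs == rhs
print("product expansion verified at the integer points for q =", q, "and n =", n)
```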

Interlacing of roots

Finally, we come to the interlacing property of the roots of the Kravchuk polynomials.

For the Kravchuk polynomials \(\{ K_k(x)\}_{0 \le k \le n }\), each \(K_k(x)\) has exactly \(k\) simple roots \(x^{(k)}_1 < x^{(k)}_2 < \dots < x^{(k)}_k\), all lying in the open interval \((0,n)\), and the roots of consecutive Kravchuk polynomials interlace: \(x^{(k)}_i < x^{(k-1)}_i < x^{(k)}_{i+1}\) for \(1 \le i \le k-1\).

We argue by induction. First, note that \(K_0(x)=1\) and \(K_1(x) = n(q-1) - qx\), so \(K_1(x)\) has a single simple zero in \((0,n)\). Now assume the lemma for all Kravchuk polynomials with indices \(\le k\) and look at the recurrence formula to conclude the statement for \(K_{k+1}(x)\) by the following argument:

We know by induction that the following holds. \[\begin{equation} 0 < x^{(k)}_1 < x^{(k-1)}_1 < x^{(k)}_2 < x^{(k-1)}_2 < \dots < x^{(k-1)}_{k-1} < x^{(k)}_k < n . \end{equation}\]

Note that \(K_i(0) = {n\choose i} (q-1)^{i}\) and \(K_i(n) = (-1)^{i} {n \choose i} (q-1)^{i}\). Substitute the points \(\{ x^{(k)}_i\}_{1\le i \le k} \cup \{ 0,n\}\) into the recurrence formula; at a root of \(K_k\) it reduces to \((k+1)K_{k+1}(x^{(k)}_i) = -(q-1)(n-k+1)K_{k-1}(x^{(k)}_i)\), so by the induction hypothesis \(K_{k+1}(0) > 0\), \(K_{k+1}(x^{(k)}_1) < 0\), \(K_{k+1}(x^{(k)}_2) > 0\), and so on, with the sign \((-1)^{k+1}\) at \(x = n\). This gives at least one zero of \(K_{k+1}\) in each of the \(k+1\) subintervals, and since \(K_{k+1}\) has degree \(k+1\), each of these zeros must be simple and there can be no others.
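
The interlacing can also be observed numerically. The sketch below builds the Kravchuk polynomials with NumPy, computes their roots, and checks the chain of inequalities (parameters arbitrary; this is an illustration, not a proof).

```python
import numpy as np
from math import factorial

n, q = 10, 3

def kravchuk_poly(k):
    """K_k(x; n, q) as a numpy Polynomial in x."""
    x = np.polynomial.Polynomial([0, 1])
    def binom_poly(top, j):
        # binomial(top, j) = top (top-1) ... (top-j+1) / j!  as a polynomial
        p = np.polynomial.Polynomial([1.0])
        for m in range(j):
            p = p * (top - m)
        return p / factorial(j)
    return sum(((-1)**j * (q - 1)**(k - j)) * binom_poly(x, j) * binom_poly(n - x, k - j)
               for j in range(k + 1))

prev = None
for k in range(1, n + 1):
    roots = np.sort(kravchuk_poly(k).roots().real)
    assert np.all((roots > 0) & (roots < n))      # all k roots lie in (0, n)
    if prev is not None:
        # the roots of K_{k-1} separate consecutive roots of K_k
        assert np.all(roots[:-1] < prev) and np.all(prev < roots[1:])
    prev = roots
print("interlacing verified numerically for n =", n, "and q =", q)
```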

These two properties also follow from the general theory of orthogonal polynomials; [3] gives several different proofs of them in the general case.

Asymptotic growth of the first root

We now have all the ingredients to produce our most important asymptotic result. By \(\lfloor x \rfloor\) we mean the greatest integer less than or equal to \(x \in \mathbb{R}\).

Let \(\lambda \in (0,1)\) and let \(x^{(\lfloor \lambda n \rfloor)}_1\) denote the first (smallest) root of the Kravchuk polynomial \(K_{\lfloor \lambda n\rfloor}(x;n,q)\). Then for every \(\varepsilon > 0\) and all large enough \(n\), we have \[\begin{equation} \frac{ x_1^{(\lfloor \lambda n\rfloor)}}{n} \le 1- \frac{1}{q} - \left( 1 - \frac{2}{q} \right) \lambda - \frac{2}{q}\sqrt{(q-1)\lambda(1-\lambda)} + \varepsilon. \end{equation}\]

We will prove this by contradiction. Let \(r = 1- \frac{1}{q} - \left( 1 - \frac{2}{q} \right) \lambda - \frac{2}{q}\sqrt{(q-1)\lambda(1-\lambda)}\) denote the limiting expression on the right-hand side. Suppose the claim is false; then there is some \(\varepsilon > 0\) such that \(x_1^{(\lfloor \lambda n\rfloor)}/n \ge r + 2\varepsilon\) for infinitely many \(n\). For such an \(n\), let \(t = \lfloor \lambda n\rfloor\), and keep in mind that \(n\) can be taken larger than any given threshold.

Now take \(i = \lfloor n (r+\varepsilon) \rfloor\).

Note that \(i \le n(r+\varepsilon) < n(r+2\varepsilon) \le x^{(t)}_1\), so \(| i - x_j^{(t)}| \ge x^{(t)}_1 - i \ge \varepsilon n\) for every \(j\). Writing \(K_t(x)\) as its leading coefficient times \(\prod_{j=1}^{t}(x - x^{(t)}_j)\) and using the big-\(O\) notation, we can write that \[\begin{align} \log\left( \frac{K_t(i+1)}{K_t(i)} \right) &= \sum_{j=1}^{t} \log\left(1 + \frac{1}{i-x^{(t)}_j}\right)\\ &= \sum_{j = 1}^{t} \left[ \frac{1}{i - x^{(t)}_j} - \frac{1}{2}\frac{1}{(i - x^{(t)}_j)^{2}} + \frac{1}{3} \frac{1}{(i-x_j^{(t)})^{3}} - \dots \right] \\ &= \left[ \sum_{j = 1}^{t} \frac{1}{i - x^{(t)}_j} \right] + O\left( \frac{t}{ \varepsilon ^{ 2} n^{2}} \right) = \sum_{j = 1}^{ t} \frac{1}{i - x_j^{(t)}} + O\left( \frac{1}{n} \right). \end{align}\]

Similarly, we can write that

\[\begin{align} \log\left( \frac{K_t(i)}{K_t(i-1)} \right) = \sum_{j = 1}^{t} \frac{1}{i - x_j^{(t)}} + O\left( \frac{1}{n} \right). \end{align}\] Now let \(\rho = \frac{K_t(i+1)}{K_t(i)}\). Comparing the two expansions above, we get that \[\begin{align} \frac{K_t(i)}{K_t(i-1)} = \rho \left( 1 + O\left( \frac{1}{n} \right) \right). \end{align}\] Also note that \(|\log \rho| \le \frac{t}{\varepsilon n} + O\left( \frac{1}{n} \right)\), so \(\rho\) remains bounded (and bounded away from \(0\)) as \(n \rightarrow \infty\). Now recall from the recursion on the argument that \[\begin{align} & (n-i) (q-1)K_t(i+1) = \left( i + (q-1)(n-i) -qt \right) K_t(i) - i K_t(i-1) \\ \Rightarrow & (n-i)(q-1) \frac{K_t(i+1)}{K_t(i)} = ( i + (q-1)(n-i) - qt ) - i \frac{K_t(i-1)}{ K_t(i)} \\ \Rightarrow & (n-i)(q-1) \frac{K_t(i+1)}{K_t(i)} \frac{K_t(i)}{K_t(i-1)} = ( i + (q-1)(n-i) - qt ) \frac{K_t(i)}{K_t(i-1)} - i \\ \Rightarrow & (n-i)(q-1) \, \rho^{2} \left( 1 + O\left( \frac{1}{n} \right) \right) = ( i + (q-1)(n-i) - qt ) \, \rho \left( 1 + O\left( \frac{1}{n} \right) \right) - i \\ \Rightarrow & \left(1-\frac{i}{n}\right)(q-1) \rho^{2} + O\left(\frac{1}{n} \right)= \left( \frac{i}{n} + (q-1)\left(1-\frac{i}{n}\right) - q\frac{t}{n} \right) \rho - \frac{i}{n} . \end{align}\] The last implication uses that \(\rho\) is bounded, so that after dividing by \(n\) all the error terms can be collected into a single \(O\left( \frac{1}{n} \right)\). Completing the square in the last equation yields \[\begin{align} \left(\rho - \frac{\left( \frac{i}{n} +(q-1) \left( 1- \frac{i}{n} \right) - q \frac{t}{n} \right)}{ 2 (q-1) \left( 1 - \frac{i}{n} \right)} \right)^{2} = \frac{1}{4} \left( \frac{\left( \frac{i}{n} + (q-1)\left( 1 - \frac{i}{n} \right) - q \frac{t}{n} \right)}{ (q-1)\left( 1- \frac{i}{n} \right) } \right)^{2} - \frac{i}{n(q-1) \left( 1 - \frac{i}{n} \right)} + O \left( \frac{1}{n} \right). \end{align}\] Since \(\rho\) is real, the right-hand side must be non-negative. After putting \(\frac{i}{n} = r + \varepsilon + O\left( \frac{1}{n} \right)\) and \(\frac{t}{n} = \lambda + O\left( \frac{1}{n} \right)\), we get that \[\begin{align} & \frac{1}{4} \left( \frac{ r+ \varepsilon + (q-1)\left( 1 - r - \varepsilon \right) - q \lambda }{ (q-1)\left( 1- r - \varepsilon \right) } \right)^{2} - \frac{r+ \varepsilon}{ (q-1)(1-r- \varepsilon)} + O \left( \frac{1}{n} \right) \ge 0 . \end{align}\] Put \(r' = r+ \varepsilon\); multiplying through by \(4(q-1)^{2}(1-r')^{2}\), we get that \[\begin{align} & (r' + (q-1)(1-r') - q\lambda)^{2} - 4(q-1)(1-r')r' + O\left( \frac{1}{n} \right) \ge 0\\ \Rightarrow & (r' (q-2) + q \lambda - (q-1))^{2} - 4(q-1) r' (1-r') + O\left( \frac{1}{n} \right) \ge 0\\ \Rightarrow & \ r'^{2} \left[ (q-2 )^{2} + 4(q-1) \right] - 2r'\left[ (q-2)(q-1) -q(q-2) \lambda + 2 (q-1) \right] + [(q-1)-q\lambda]^{2} + O\left( \frac{1}{n} \right) \ge 0\\ \Rightarrow & \ r'^{2} q^{2} - 2r'\left[ q(q-1) - q(q-2)\lambda \right] + [(q-1)-q\lambda]^{2} + O\left( \frac{1}{n} \right) \ge 0. \end{align}\]

Now notice that, up to the \(O\left( \frac{1}{n} \right)\) term, the left-hand side above is a quadratic polynomial in \(r'\), whose roots are

\[\begin{equation} \label{eq:notice_this_poly} \left( 1 - \frac{1}{q} \right) - \left( 1 - \frac{2}{q} \right) \lambda \pm \frac{2}{q} \sqrt{(q-1)\lambda(1-\lambda)} + O\left( \frac{1}{n} \right). \end{equation}\] The smaller of the two roots is \(r + O\left( \frac{1}{n} \right)\). Since the assumption \(x_1^{(t)}/n \ge r + 2\varepsilon\) remains valid if we shrink \(\varepsilon\), we may assume that \(r' = r + \varepsilon\) lies strictly between the two roots; there the quadratic is negative and bounded away from \(0\) independently of \(n\), which contradicts the inequality above once \(n\) is large enough. This completes the proof.
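
As a numerical illustration (not part of the proof), the sketch below compares the first root of \(K_{\lfloor \lambda n\rfloor}(x;n,q)\) for a moderate \(n\) with the limiting expression \(r\); the parameters and helper names are our own choices, and for finite \(n\) the ratio need not lie below \(r\) itself, which is why the \(\varepsilon\) in the statement is needed.

```python
import numpy as np
from math import factorial, floor, sqrt

q, n, lam = 3, 30, 0.3
t = floor(lam * n)                      # t = 9 here

def kravchuk_poly(k):
    """K_k(x; n, q) as a numpy Polynomial in x."""
    x = np.polynomial.Polynomial([0, 1])
    def binom_poly(top, j):
        p = np.polynomial.Polynomial([1.0])
        for m in range(j):
            p = p * (top - m)
        return p / factorial(j)
    return sum(((-1)**j * (q - 1)**(k - j)) * binom_poly(x, j) * binom_poly(n - x, k - j)
               for j in range(k + 1))

first_root = min(kravchuk_poly(t).roots().real)
r = 1 - 1/q - (1 - 2/q) * lam - (2/q) * sqrt((q - 1) * lam * (1 - lam))
print(f"n = {n}: first root / n = {first_root / n:.4f}, limiting value r = {r:.4f}")
```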

References

1. J. H. van Lint, Introduction to coding theory (Springer Berlin Heidelberg, 1998).
2. F. J. MacWilliams & N. J. A. Sloane, The theory of error-correcting codes (North-Holland Publishing Company, 1977).
3. G. Szegö, Orthogonal polynomials (American Mathematical Society, 1967).
