Information Theory and Coding Theory
An Effective Method for Encoding Information Sources Using the Fast Multiplication Algorithm
B. Ya. Ryabko
Abstract:
We consider the problem of encoding information sources with a priori known and unknown symbol probabilities. For both cases, we study the complexity of encoding and decoding as a function of the redundancy $r$, defined as the difference between the average code length and the entropy. Known coding methods can be split into two classes. For codes of the first class, as the redundancy decreases, $r \to 0$, the memory $S$ and the average encoding/decoding time per symbol $T$ grow as $O(\exp(1/r))$ and $O(-\log r)$, respectively (for computer implementations of the encoder and decoder). For codes of the second class, $S = O(r^{-\mathrm{const}})$ and $T = O(r^{-\mathrm{const}})$ as $r \to 0$. We suggest a method combining the advantages of both classes: as the redundancy $r$ decreases, the memory grows as a power of $1/r$ and the encoding/decoding time does not exceed a power of $-\log r$, that is, $S = O(r^{-\mathrm{const}})$ and $T = O((-\log r)^{\mathrm{const}})$. (Everywhere in this paper, const denotes some number greater than or equal to 1.) The same method is used to construct a fast enumerative encoding (see the definition in [T. M. Cover, IEEE Trans. Inf. Theory, 19, No. 1, 73–77 (1973); R. Ye. Krichevskii, Information Compression and Retrieval, Radio i Svyaz, Moscow (1989)]).
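To illustrate the notion of enumerative encoding referenced above, here is a minimal sketch of combinatorial ranking in the sense of Cover: a binary string of length $n$ and weight $k$ is mapped to its index among all such strings in lexicographic order. This is a direct (naive) implementation for illustration only; it is not the paper's fast algorithm, which uses fast multiplication to reduce the cost of these arithmetic operations. The function names are ours, not from the paper.

```python
from math import comb

def enum_encode(bits):
    """Return the lexicographic rank of a binary string among all
    strings of the same length and the same number of ones."""
    rank = 0
    k = sum(bits)          # ones not yet placed
    n = len(bits)
    for i, b in enumerate(bits):
        if b == 1:
            # every string with a 0 here (and k ones among the
            # remaining n-i-1 positions) precedes this one
            rank += comb(n - i - 1, k)
            k -= 1
    return rank

def enum_decode(rank, n, k):
    """Inverse map: recover the weight-k string of length n with the
    given lexicographic rank."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if rank >= c:
            bits.append(1)
            rank -= c
            k -= 1
        else:
            bits.append(0)
    return bits
```

For example, among the six strings of length 4 with two ones, `enum_encode([1, 0, 1, 0])` returns 4, and `enum_decode(4, 4, 2)` recovers the string. The rank fits in $\lceil \log_2 \binom{n}{k} \rceil$ bits, which is what makes the scheme a (near-optimal) code; the naive version above, however, performs arithmetic on numbers of that size at every step.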
UDC:
621.391.15
Received: 31.01.1994
Revised: 14.10.1994