Results 1 - 10 of 108
The length of a typical Huffman codeword
- IEEE Trans. Inform. Theory, 1994
"... If pi (i = 1,...,N) is the probability of the i-th letter of a memoryless source, the length li of the corresponding binary Huffman codeword can be very different from the value − log ∑pi. For a typical letter, however, li ∑ ≈ − log pi. More pre-pj < 2 −c(m−2)+2 cisely, P − m = where c ≈ 2.27. ..."
Cited by 3 (0 self)
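As an illustration of the statement above (this is not from the paper), the Python sketch below builds a binary Huffman code for a randomly chosen distribution and measures how much probability mass falls on letters whose codeword length li deviates from −log2 pi by at least m; the alphabet size, distribution, seed, and thresholds are assumptions made for the example only.

```python
import heapq, math, random

def huffman_lengths(probs):
    """Return the binary Huffman codeword length for each probability."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, ids1 = heapq.heappop(heap)
        p2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:          # every leaf in a merged subtree
            lengths[i] += 1            # moves one level deeper
        heapq.heappush(heap, (p1 + p2, ids1 + ids2))
    return lengths

random.seed(1)
w = [random.random() for _ in range(50)]
p = [x / sum(w) for x in w]
l = huffman_lengths(p)
# Probability mass of letters whose length deviates from -log2 pi by >= m.
for m in (1, 2, 3):
    mass = sum(pi for pi, li in zip(p, l) if abs(li + math.log2(pi)) >= m)
    print(f"m={m}: deviating probability mass = {mass:.4f}")
```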
Near Shannon limit error-correcting coding and decoding, 1993
"... Abstract- This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes and the associated ..."
Cited by 1776 (6 self)
and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders. Consider a binary rate R=1/2 convolutional encoder with constraint length K and memory M=K-1. The input to the encoder at time k is a bit dk and the corresponding codeword
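The abstract describes the building block of a Turbo code: a rate R=1/2 recursive systematic convolutional (RSC) encoder with constraint length K and memory M=K-1, whose input at time k is a bit dk. The sketch below implements one such RSC encoder with assumed generator polynomials (octal 7 for the feedback branch and 5 for the feedforward branch, so K=3); these taps are illustrative, not the ones used in the paper, and a full Turbo encoder would run two such encoders in parallel, the second on an interleaved copy of the input.

```python
def rsc_encode(bits, feedback=0b111, forward=0b101, K=3):
    """Rate-1/2 recursive systematic convolutional encoder (illustrative taps)."""
    M = K - 1
    state = 0                      # bit i of state = feedback bit from i+1 steps ago
    systematic, parity = [], []
    for d in bits:
        # feedback bit a_k = d_k XOR the feedback taps applied to the register
        a = d ^ (bin(state & (feedback >> 1)).count("1") & 1)
        window = (state << 1) | a  # bit 0 = a_k, bit i = a_{k-i}
        p = bin(window & forward).count("1") & 1   # feedforward taps -> parity
        systematic.append(d)       # systematic output is the input bit itself
        parity.append(p)
        state = window & ((1 << M) - 1)            # keep the M most recent bits
    return systematic, parity

d = [1, 0, 1, 1, 0, 0, 1, 0]
sys_bits, par_bits = rsc_encode(d)
print("d_k   :", sys_bits)
print("parity:", par_bits)
```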
Self-Synchronization of Huffman Codes, 2003
"... Variable length binary codes have been frequently used for communications since Huffman’s important paper on constructing minimum average length codes. One drawback of variable length codes is the potential loss of synchronization in the presence of channel errors. However, many variable length code ..."
Cited by 4 (0 self)
is corrupted by one or more bit errors, then as soon as the receiver by random chance correctly detects a self-synchronizing string, the receiver can continue properly parsing the bit sequence into codewords. Most commonly used binary prefix codes, including Huffman codes, are “complete”, in the sense
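As a rough illustration of the behaviour described above (not the paper's analysis), the sketch below Huffman-codes a short message over assumed letter frequencies, flips one bit of the encoded stream, and decodes it again; a prefix-code decoder typically mis-parses a few codewords after the error and then falls back into step with the original codeword boundaries.

```python
import heapq

def huffman_code(freqs):
    """freqs: dict symbol -> weight.  Returns dict symbol -> codeword string."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, nxt, merged)); nxt += 1
    return heap[0][2]

def decode(bits, code):
    rev = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:                 # a complete codeword has been read
            out.append(rev[cur]); cur = ""
    return "".join(out)

freqs = {"e": 12, "t": 9, "a": 8, "o": 7, "n": 6, "s": 5, "h": 4, "r": 3, "_": 2}
code = huffman_code(freqs)
msg = "the_season_starts_soon"
bits = "".join(code[c] for c in msg)
corrupted = bits[:5] + ("1" if bits[5] == "0" else "0") + bits[6:]   # flip bit 5

dec = decode(corrupted, code)
suffix = 0                             # how much of the tail is decoded correctly
while suffix < min(len(dec), len(msg)) and dec[-1 - suffix] == msg[-1 - suffix]:
    suffix += 1
print("original:", msg)
print("decoded :", dec)
print("decoder back in step for the last", suffix, "characters")
```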
Algorithms for Infinite Huffman-Codes (Extended Abstract)
"... Optimal (minimum cost) binary prefix-free codes for infinite sources with geometrically distributed frequencies, e.g., P = {pi (1 − p)} ∞ i=0, 0 < p < 1, were first (implicitly) suggested by Golomb over thirty years ago in the context of run-length encodings. Ten years later Gallager and Van ..."
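For orientation only, here is a sketch of the run-length / Golomb construction the abstract refers to: the integer i is split into a quotient, sent in unary, and a remainder, sent in truncated binary, with the parameter m matched to the geometric source P = {p^i (1 − p)}. The value p = 0.8 and the parameter rule below are assumptions made for the example, not details taken from the paper.

```python
import math

def golomb_encode(i, m):
    """Golomb codeword (as a bit string) for the non-negative integer i."""
    q, r = divmod(i, m)
    word = "1" * q + "0"                       # quotient in unary
    k = m.bit_length() - 1 if m & (m - 1) == 0 else m.bit_length()
    if k == 0:                                 # m == 1: no remainder bits at all
        return word
    c = (1 << k) - m                           # truncated-binary threshold
    if r < c:
        word += format(r, "b").zfill(k - 1)    # short remainders use k-1 bits
    else:
        word += format(r + c, "b").zfill(k)    # the rest use k bits
    return word

p = 0.8                                        # assumed geometric parameter
m = math.ceil(math.log(1 + p) / math.log(1 / p))   # usual parameter choice
print("m =", m)
for i in range(8):
    print(i, golomb_encode(i, m))
```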
Algorithms for Updating Huffman Codes
"... Abstract:- Given a list W = [w1,…, wn] of n positive symbol weights, and a list L = [l1,…,ln] of n corresponding integer codeword lengths, it is required to find the new list L when a new value x is inserted in or when an existing value is deleted from the list of weights W. The presented algorithm ..."
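The contribution described above is an update procedure for L that avoids rebuilding the code; purely as the baseline being improved upon, the sketch below recomputes the codeword-length list with the ordinary Huffman construction before and after a weight x is inserted into W (the weights and x are made-up example values).

```python
import heapq

def codeword_lengths(weights):
    """Return the list of binary Huffman codeword lengths for the given weights."""
    heap = [(w, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    lengths = [0] * len(weights)
    while len(heap) > 1:
        w1, ids1 = heapq.heappop(heap)
        w2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:      # symbols in the merged subtrees go one level deeper
            lengths[i] += 1
        heapq.heappush(heap, (w1 + w2, ids1 + ids2))
    return lengths

W = [2, 3, 5, 7, 11, 13]
x = 6
print("L before insertion:", codeword_lengths(W))
print("L after inserting", x, ":", codeword_lengths(W + [x]))
```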
Trellis-Based Joint Huffman and Convolutional Soft-Decision Priority-First Decoding
"... C be the corresponding Huffman code that has 26 codewords {c1, c2,..., c26} with average codeword length 4.15573. The respective lengths of the codewords are given by {`1, `2,..., `26}. According to P, a concatenation of 1000 symbols randomly generated from A is Huffman encoded into a bitstream x of ..."
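The snippet above only fixes the experimental setup (a 26-symbol source P, its Huffman code C, and a bitstream x of 1000 encoded symbols) without giving P itself, so the sketch below reproduces that setup with an assumed Zipf-like distribution; its average codeword length will therefore not match the 4.15573 quoted above.

```python
import heapq, random, string

def huffman_code(probs):
    """probs: dict symbol -> probability.  Returns dict symbol -> codeword."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, nxt, merged)); nxt += 1
    return heap[0][2]

letters = string.ascii_lowercase                     # a 26-symbol alphabet A
weights = [1 / (r + 1) for r in range(26)]           # assumed Zipf-like weights
P = {s: w / sum(weights) for s, w in zip(letters, weights)}
C = huffman_code(P)

avg_len = sum(P[s] * len(C[s]) for s in letters)
random.seed(0)
symbols = random.choices(letters, weights=[P[s] for s in letters], k=1000)
x = "".join(C[s] for s in symbols)
print("average codeword length:", round(avg_len, 5))
print("length of bitstream x  :", len(x), "bits")
```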
Adapting the Knuth-Morris-Pratt algorithm for pattern matching in Huffman encoded texts
- Inf. Process. Manage.
"... We perform compressed pattern matching in Huffman encoded texts. A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in ..."
Cited by 3 (0 self)
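To make the false-match problem concrete (this is not the paper's modified KMP), the sketch below runs plain KMP over the Huffman-encoded bits and then discards every match that does not start on a codeword boundary; matches that do start on a boundary must decode to the pattern, because the code is prefix-free. The text, pattern, and frequencies are made-up example values.

```python
import heapq

def huffman_code(freqs):
    """freqs: dict symbol -> weight.  Returns dict symbol -> codeword string."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, nxt, merged)); nxt += 1
    return heap[0][2]

def kmp_search(text, pat):
    """Return all start positions of pat in text (classic KMP)."""
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        k += pat[i] == pat[k]
        fail[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pat[k]:
            k = fail[k - 1]
        k += ch == pat[k]
        if k == len(pat):
            hits.append(i - k + 1); k = fail[k - 1]
    return hits

text, pattern = "abracadabra", "abra"
freqs = {c: text.count(c) for c in set(text)}
code = huffman_code(freqs)

enc_text = "".join(code[c] for c in text)
enc_pat = "".join(code[c] for c in pattern)
boundaries, pos = {0}, 0
for c in text:                        # bit offsets where a codeword starts
    pos += len(code[c]); boundaries.add(pos)

raw = kmp_search(enc_text, enc_pat)
real = [h for h in raw if h in boundaries]
print("bit-level KMP matches (bit offsets):", raw)
print("true occurrences (bit offsets)     :", real)
```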
D-ary Bounded-Length Huffman Coding, 2007
"... Abstract — Efficient optimal prefix coding has long been accomplished via the Huffman algorithm. However, there is still room for improvement and exploration regarding variants of the Huffman problem. Length-limited Huffman coding, useful for many practical applications, is one such variant, in whic ..."
Cited by 1 (0 self)
in which codes are restricted to the set of codes in which none of the n codewords is longer than a given length, lmax. Binary length-limited coding can be done in O(n·lmax) time and O(n) space using the widely used Package-Merge algorithm. In this paper the Package-Merge approach is generalized in order
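For reference, here is a sketch of the binary Package-Merge algorithm mentioned above, which computes optimal codeword lengths subject to the constraint that none exceeds lmax; the D-ary generalization that is the paper's subject is not reproduced here, and the example weights are made up.

```python
from collections import Counter

def package_merge(weights, lmax):
    """Return length-limited optimal binary codeword lengths for the weights."""
    n = len(weights)
    assert (1 << lmax) >= n, "lmax too small for n symbols"
    items = sorted((w, [i]) for i, w in enumerate(weights))
    level = list(items)                       # deepest allowed level, lmax
    for _ in range(lmax - 1):
        # package adjacent pairs, dropping a leftover odd element
        packages = [(level[j][0] + level[j + 1][0],
                     level[j][1] + level[j + 1][1])
                    for j in range(0, len(level) - 1, 2)]
        level = sorted(items + packages)      # merge with the original items
    chosen = level[:2 * n - 2]                # keep the 2n-2 cheapest entries
    counts = Counter(i for _, ids in chosen for i in ids)
    return [counts[i] for i in range(n)]

weights = [1, 1, 2, 3, 5, 8, 13, 21]
print("lmax=4:", package_merge(weights, 4))
print("lmax=5:", package_merge(weights, 5))
```

Each list entry carries the multiset of symbols it covers, so after keeping the 2n-2 cheapest entries the length assigned to symbol i is simply the number of kept entries that contain i.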
Ternary Tree and Clustering Based Huffman Coding Algorithm, 2010
"... In this study, the focus was on the use of ternary tree over binary tree. Here, a new two pass Algorithm for encoding Huffman ternary tree codes was implemented. In this algorithm we tried to find out the codeword length of the symbol. Here I used the concept of Huffman encoding. Huffman encoding wa ..."
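As background for the entry above (not the paper's two-pass algorithm), the sketch below builds a plain ternary Huffman code: zero-weight dummy symbols are added so that every merge can combine exactly three subtrees, and the three lightest nodes are then merged repeatedly; the frequencies are made-up example values.

```python
import heapq

def ternary_huffman(freqs):
    """freqs: dict symbol -> weight.  Returns dict symbol -> ternary codeword."""
    D = 3
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    nxt = len(heap)
    while (len(heap) - 1) % (D - 1) != 0:     # pad so the last merge is full
        heap.append((0, nxt, {})); nxt += 1
    heapq.heapify(heap)
    while len(heap) > 1:
        parts = [heapq.heappop(heap) for _ in range(D)]
        merged, total = {}, 0
        for digit, (w, _, codes) in enumerate(parts):
            total += w
            merged.update({s: str(digit) + c for s, c in codes.items()})
        heapq.heappush(heap, (total, nxt, merged)); nxt += 1
    return heap[0][2]

freqs = {"a": 20, "b": 17, "c": 11, "d": 7, "e": 5, "f": 3}
code = ternary_huffman(freqs)
for s in sorted(freqs, key=freqs.get, reverse=True):
    print(s, freqs[s], code[s])
```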
Ternary Tree and Memory-Efficient Huffman Decoding Algorithm
"... In this study, the focus was on the use of ternary tree over binary tree. Here, a new one pass Algorithm for Decoding adaptive Huffman ternary tree codes was implemented. To reduce the memory size and fasten the process of searching for a symbol in a Huffman tree, we exploited the property of the en ..."
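As a point of comparison only (not the memory-efficient single-pass structure proposed in the paper), the sketch below decodes a ternary prefix code with a trie stored as nested dictionaries; the code table and message are made-up example values.

```python
def build_trie(code):
    """code: dict symbol -> codeword over the digits '0', '1', '2'."""
    trie = {}
    for sym, word in code.items():
        node = trie
        for d in word[:-1]:
            node = node.setdefault(d, {})
        node[word[-1]] = sym                   # a leaf stores the decoded symbol
    return trie

def decode(digits, trie):
    out, node = [], trie
    for d in digits:
        nxt = node[d]
        if isinstance(nxt, dict):
            node = nxt                         # still inside a codeword
        else:
            out.append(nxt); node = trie       # reached a leaf: emit and restart
    return "".join(out)

code = {"a": "0", "b": "10", "c": "11", "d": "12", "e": "20", "f": "21", "g": "22"}
trie = build_trie(code)
msg = "badgecafe"
enc = "".join(code[s] for s in msg)
print(decode(enc, trie) == msg)
```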