
A Lossless Data Compression and Decompression Algorithm and Its Hardware Architecture

Data compression is a method of encoding data so that the total number of bits needed to store or transmit a file is substantially reduced. There are two basic classes: lossy data compression, which is widely used to compress image files for communication or archival purposes, and lossless data compression, which is commonly used to transmit or archive text or binary files. Lossless data compression algorithms mainly include Lempel-Ziv (LZ) codes and Huffman codes. In this project, instead of the static Huffman code, we consider a variant called the adaptive Huffman (AH) code, which does not need to know the probabilities of the input symbols in advance.
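
To make the adaptive idea concrete, here is a minimal Python sketch; it is purely illustrative and is not the hardware design described in this project. A practical AH coder (the FGK or Vitter algorithm) updates its code tree incrementally, whereas this sketch simply rebuilds the Huffman table after every symbol, which is functionally equivalent but far slower. Every symbol starts with a count of one so that it always has a codeword.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix-code table {symbol: bitstring} from frequency counts."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)                            # tie-breaker so dicts never compare
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)         # merge the two least-frequent
        f2, _, t2 = heapq.heappop(heap)         # subtrees, as in static Huffman
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

def adaptive_huffman_encode(symbols, alphabet_size=256):
    """Encode one symbol at a time; the code adapts to the counts seen so far."""
    freqs = {s: 1 for s in range(alphabet_size)}  # no prior statistics needed
    bits = []
    for s in symbols:
        bits.append(huffman_code(freqs)[s])     # code reflects past symbols only
        freqs[s] += 1                           # the decoder makes the same update
    return "".join(bits)
```

Because the decoder sees the same symbol history, it can rebuild the identical table at every step, so no probability table ever has to be transmitted alongside the data.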

Another popular version of the LZ algorithm is the word-based LZ (LZW) algorithm, proposed by T. Welch, which is a dictionary-based method. In this method, the second element of the (index, character) pair is removed; that is, the encoder sends only the index into the dictionary. However, it takes considerable time to adjust the dictionary. To improve this, two alternative versions of LZW were proposed: the dynamic LZW (DLZW) and word-based DLZW (WDLZW) algorithms. Both improve the LZW algorithm in two ways. First, they initialize their dictionaries with different combinations of characters. Second, each dictionary entry carries a frequency counter. However, they also complicate the hardware control logic. To reduce the hardware cost, a simplified DLZW architecture called parallel dictionary LZW (PDLZW) was proposed. This architecture uses a hierarchical set of parallel dictionaries with successively increasing word widths. First, a virtual dictionary is reserved for the initial address space, representing the set of single input symbols; this dictionary occupies part of the address space but costs nothing in hardware. Second, the simplest dictionary update policy, first-in first-out (FIFO), is used to simplify the hardware implementation. Both schemes are sketched in the code below.
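
The following Python sketch contrasts the two dictionary schemes. The dictionary sizes in the PDLZW class are illustrative assumptions, not tuned values from the paper, and the sequential longest-match search here stands in for what the hardware does in parallel.

```python
from collections import OrderedDict

def lzw_encode(data: bytes):
    """Welch's LZW: transmit only the index of the longest dictionary match."""
    dictionary = {bytes([i]): i for i in range(256)}   # initial single-byte entries
    out, w = [], b""
    for byte in data:
        wb = w + bytes([byte])
        if wb in dictionary:
            w = wb                                     # keep extending the match
        else:
            out.append(dictionary[w])                  # index only, no raw symbol
            dictionary[wb] = len(dictionary)           # one ever-growing dictionary
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

class PDLZWEncoder:
    """PDLZW-style bank of small dictionaries, one per string length.
    sizes[0] is the virtual single-byte dictionary: it occupies addresses
    0-255 of the address space but needs no physical storage."""

    def __init__(self, sizes=(256, 64, 32, 16, 8)):    # illustrative sizes only
        self.sizes = sizes
        self.base = [sum(sizes[:k]) for k in range(len(sizes))]
        self.dicts = [OrderedDict() for _ in sizes[1:]]  # insertion order = FIFO age

    def _lookup(self, s):
        """Global address of string s, or None if it is in no dictionary."""
        if len(s) == 1:
            return s[0]                                # virtual dictionary hit
        idx = self.dicts[len(s) - 2].get(s)
        return None if idx is None else self.base[len(s) - 1] + idx

    def _insert(self, s):
        """Insert s into the dictionary for its length; FIFO eviction when full."""
        if len(s) < 2 or len(s) > len(self.sizes):
            return
        d, cap = self.dicts[len(s) - 2], self.sizes[len(s) - 1]
        if s in d:
            return
        if len(d) >= cap:
            _, slot = d.popitem(last=False)            # FIFO: evict the oldest entry
        else:
            slot = len(d)                              # fill empty slots first
        d[s] = slot

    def encode(self, data: bytes):
        out, i = [], 0
        while i < len(data):
            k = min(len(self.sizes), len(data) - i)    # longest candidate first;
            addr = self._lookup(data[i:i + k])         # hardware searches all
            while addr is None:                        # dictionaries in parallel
                k -= 1
                addr = self._lookup(data[i:i + k])
            out.append(addr)
            self._insert(data[i:i + k + 1])            # matched string + next byte
            i += k
        return out
```

Because each dictionary holds strings of a single length, an address alone tells the decoder how many bytes it expands to, and the FIFO policy can be implemented with little more than a wrap-around pointer per dictionary, instead of the frequency bookkeeping that DLZW and WDLZW require.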

Therefore, in this project we propose a new two-stage data compression architecture that combines features of both the PDLZW and adaptive Huffman (AH) algorithms. The resulting architecture outperforms the AH algorithm in most cases while requiring only one-fourth of its hardware cost. In addition, its compression performance is competitive with the LZW algorithm, and both its compression and decompression rates are higher than those of the AH algorithm. The performance of the proposed algorithm and architecture depends strongly on the dictionary size used in the PDLZW stage, which in turn determines the hardware cost of both the PDLZW and the modified AH stages.
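
Putting the two sketches above together gives a rough model of the data flow in such a two-stage pipeline: the PDLZW stage maps the byte stream to dictionary addresses, and the AH stage entropy-codes those addresses, with the PDLZW address space as its alphabet. This reuses the illustrative functions defined earlier and does not model the paper's modified AH stage itself.

```python
enc = PDLZWEncoder()                          # stage 1: parallel-dictionary matching
data = b"abracadabra abracadabra abracadabra"
addresses = enc.encode(data)
bits = adaptive_huffman_encode(addresses,     # stage 2: entropy-code the addresses
                               alphabet_size=sum(enc.sizes))
print(f"{len(data)} bytes -> {len(addresses)} addresses -> {len(bits)} bits")
```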
