
Scientific Journal of Impact Factor (SJIF): 4.72

e-ISSN (O): 2348-4470 p-ISSN (P): 2348-6406

International Journal of Advance Engineering and Research Development Volume 4, Issue 12, December -2017

Fast Huffman Coding Scheme Implementation on FPGA

Boda Aruna 1, D. Srinivas 2, Ravi Boda 3

1 ECE Department, MIETW, Hyderabad, India
2 ECE Department, VJIT, Hyderabad, India
3 ECE Department, UCE, Osmania University, Hyderabad, India

Abstract — Data compression plays a vital role in today's communication era, where limited bandwidth leads to slower communication. To enhance the rate of transmission within a limited bandwidth, the data must be compressed (encoded). Present compression systems follow Huffman algorithms for data compression, but decoding the compressed data is time consuming. Various works have been proposed to enhance the decoding of compressed data; for low-bit-rate, narrow-band applications these algorithms fail to decode efficiently. This demands a fast decoding system for Huffman-coded data. The main aim of this work is to realize and implement a fast-decoding Huffman coding system on an FPGA. The proposed method enhances the speed of the decoding operation based on a VLC decoder partitioning concept. The existing Huffman coding system is also implemented for performance comparison, and the proposed method shows better performance than the existing Huffman coding method. This work is implemented in the VHDL language and simulated on Active-HDL 8.1 for functional verification.

Keywords — Fast Huffman coding, Data Compression, Decompression, VHDL, FPGA.

I. INTRODUCTION

Data compression has an undeserved reputation for being difficult to master, hard to implement, and tough to maintain. It is often referred to as coding, where coding is a very general term encompassing any special representation of data that satisfies a given need. Information theory is the study of efficient coding and its consequences, in the form of speed of transmission and probability of error. Data compression may be viewed as a branch of information theory in which the primary objective is to minimize the amount of data to be transmitted. Data compression is the process of converting data files into smaller files for efficiency of storage and transmission. As one of the enabling technologies of the multimedia revolution, data compression is a key to the rapid progress being made in information technology. It would not be practical to put images, audio, and video on websites without compression. Compression is the process of representing information in a compact form. Data compression treats information in digital form, that is, as binary numbers represented by bytes of data, often in very large data sets. For example, a single small 4 × 4 inch color picture, scanned at 300 dots per inch (dpi) with 24 bits/pixel of true color, produces a file containing more than 4 megabytes of data. At least three floppy disks are required to store such a picture, and it requires more than one minute for transmission over a typical transmission line (64 kbit/s ISDN). That is why large data files remain a major bottleneck in a distributed environment. Although increasing the bandwidth is a possible solution, its relatively high cost makes it less attractive. Compression is therefore a necessary and essential method for creating image files of manageable and transmittable size. To be useful, a compression algorithm must have a corresponding decompression algorithm that, given the compressed file, reproduces the original file.
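The storage and transmission figures quoted above can be checked with a few lines of arithmetic (a sketch; the 3 bytes per pixel follows from the stated 24 bits/pixel of true color):

```python
dpi = 300                      # scan resolution, dots per inch
inches = 4                     # a 4 x 4 inch picture
bytes_per_pixel = 24 // 8      # 24 bits/pixel of true color

pixels = (dpi * inches) ** 2   # 1200 x 1200 = 1,440,000 pixels
size_bytes = pixels * bytes_per_pixel

# Transmission time over a 64 kbit/s ISDN line
seconds = size_bytes * 8 / 64_000

print(size_bytes)  # 4,320,000 bytes -> more than 4 megabytes
print(seconds)     # well over a minute
```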
This proposed work aims at the realization and implementation of a fast-decoding Huffman coding system. It enhances the speed of the decoding operation based on a VLC decoder partitioning concept. The remainder of the paper is organized as follows: Section II gives a literature survey of existing compression and decompression methods, and Section III describes the proposed method.

II. LITERATURE SURVEY

The problem of reducing test time and test data for core-based SoCs has been attacked from several different angles in the recent literature. Novel approaches for compressing test data using the Burrows–Wheeler transform and run-length coding were presented in [8], [9]. These schemes were developed for reducing the time to transfer test data from a workstation across a network to a tester, not for use on chips. Scan chain architectures for core-based designs that maximize bandwidth utilization are presented in [1]. A technique for compression/decompression of scan vectors using cyclical decompressors and run-length coding is described in [11]. A modular built-in self-test (BIST) approach that allows sharing of BIST control logic among multiple cores is presented in [3]. A novel technique for combining BIST and external testing across multiple cores is described in [15]. The idea of statistically encoding test data was presented in [2], which described a BIST scheme for nonscan circuits based on statistical coding using comma codes (very similar to Huffman codes) and run-length coding. An approach called parallel serial full scan (PSFS) for reducing test time in cores is presented in [6]. A technique to reduce test data and test time by using specially designed cores (cores with virtual scan chains) is presented in [6]. An approach that uses a linear combinational expander circuit is described in [2]. The use of Golomb codes and frequency-directed run-length (FDR) codes for compressing test data has been demonstrated in [5]–[7]. The use of variable-length input Huffman codes for SoC test data compression has been proposed in [14]. A fixed-to-fixed block-encoding scheme is described in [8]. Techniques for reusing scan chains from other cores in an SoC to increase the test data bandwidth have been described in [11], and automatic test pattern generation (ATPG) techniques for producing test cubes that are suitable for encoding with the above technique have been described in [12]. A fault-simulation-based technique that reduces the entropy of the test vector set by pattern transformation is described in [9]; such transformations increase the amount of compression that can be achieved on the transformed test set using statistical coding. ATPG algorithms for producing test vectors that can be compressed more effectively using statistical codes have been described in [2].

III. PROPOSED METHOD

The high-speed decoder unit receives the input bit stream serially from the encoder unit and passes it to the N-bit shifter, where the serial data is transformed to parallel. The parallel data is then passed to the symbol decoder and the length decoder for decoding the code words back to the original data. The symbol decoders are provided with partitioned codebooks. The codebooks are arranged as LUTs of equal code length: each LUT contains all code words of one length, and the LUTs are ordered by code length.
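As a behavioral sketch of this length-partitioned decoding (written in Python rather than the paper's VHDL, and using a hypothetical 4-symbol codebook since the actual codebook is introduced later), the decoder probes one LUT per code length until it finds a match:

```python
# Hypothetical prefix-free codebook for illustration only.
codebook = {"0": "A", "10": "B", "110": "C", "111": "D"}

# Partition the code words into per-length LUTs, as the symbol decoder does.
luts = {}
for code, sym in codebook.items():
    luts.setdefault(len(code), {})[code] = sym

def decode(bits):
    out, pos = [], 0
    while pos < len(bits):
        # Length decoder + controller: probe the LUTs by code length;
        # a prefix-free code guarantees at most one match.
        for length, lut in sorted(luts.items()):
            word = bits[pos:pos + length]
            if word in lut:
                out.append(lut[word])   # symbol decoder: LUT hit
                pos += length
                break
        else:
            raise ValueError("invalid bit stream")
    return "".join(out)

decode("010110111")  # "ABCD"
```

Because each LUT holds only code words of one length, a match in LUT k immediately yields the symbol, which is what shortens the scan compared to walking a Huffman tree bit by bit.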

Fig 3.1: High-speed decoder for the Huffman coding system

The high-speed Huffman decoder unit consists of the following modules.

Shift register: The N-bit shifter receives the serial input from the encoder and transforms it to parallel form, which is then passed to the symbol decoder and the length decoder. Every 6 clock pulses, the shift register receives 6 coded bits and transfers them to the data registers of the symbol decoder and length decoder.

Symbol decoder: The symbol decoder is provided with partitioned codebooks, arranged as LUTs in which each LUT contains all code words of one length. The symbol decoder matches the received code word against the LUT selected by the code-word length. Since each LUT holds codes of only one specific length, the scanning time for tracing out the symbol is reduced.

Length decoder: The length decoder determines the required length from the code word and gives this information to the symbol decoder for selecting the appropriate LUT. The truth table for generating the length count L1 to L6 that selects the LUT is given below.

@IJAERD-2017, All rights Reserved



L1  L2  L3  L4  L5  L6
1   0   0   0   0   0
0   1   0   0   0   0
0   0   1   0   0   0
0   0   0   1   0   0
0   0   0   0   1   0
0   0   0   0   0   1
Table 3.1. Truth table for the length decoder.

Controller: This block maintains control of the decoding operation. Based on the length decoder output, one of the six LUTs is selected for tracing out the code word. The controller receives a reset signal when the status signal goes high; the status signal is generated by the symbol decoder when a code-word match occurs. The design approaches of the existing Huffman coding system and the fast-decoding Huffman coding system are discussed next.
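The one-hot selection of Table 3.1 can be modeled in a few lines (an illustrative Python model, not the paper's VHDL):

```python
def length_onehot(length, width=6):
    # Map a decoded code-word length (1..6) to the one-hot
    # select lines L1..L6 of Table 3.1.
    return [1 if i == length - 1 else 0 for i in range(width)]

# A code word of length 3 asserts L3 and deasserts all other lines.
length_onehot(3)  # [0, 0, 1, 0, 0, 0]
```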

IV. RESULTS AND DISCUSSION

The proposed design is tested on a 12 × 32 memory of data bits. The data used for the experiments in this section are dynamically compacted test cubes. This section illustrates the statistical encoding of the given data set. To encode it, the data are divided into 4-bit blocks; the data set consists of 15 unique 4-bit words. The reason for partitioning the data into blocks is to keep the complexity of the decompression circuit and the decompression delay low. Each block pattern is mapped to a variable-length code word whose length depends on the probability with which the pattern occurs in the data set.

Table 4.1 Huffman coding scheme
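The mapping from block probability to code-word length described above is the standard Huffman construction. A sketch (in Python, with hypothetical 4-bit block frequencies, since the contents of Table 4.1 are not reproduced here):

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    # Standard Huffman construction: repeatedly merge the two
    # least-frequent entries; a unique id breaks frequency ties.
    heap = [[f, uid, {s: ""}] for uid, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], uid, merged])
        uid += 1
    return heap[0][2]

# Hypothetical 4-bit block frequencies for illustration:
blocks = Counter({"0000": 10, "1111": 6, "0101": 3, "1010": 1})
codes = huffman_codes(blocks)
# The most frequent block receives the shortest code word.
```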




Fig 4.1 Simulation waveform for the Huffman system under the reset condition

The figure shows the simulation results for the implemented design under the reset condition. The signal 's_out1' indicates the serial output generated by the encoder module, the signal 'u_pat1' shows the unique word passed to the encoder unit, and the signal 'code1' shows the corresponding code word for that unique word.

Fig 4.2 Simulation waveform showing the code words generated

The figure shows the code words generated for the given test set. The variable-length codes generated for the test set are illustrated by the signal 'code'. From the simulation result it can be observed that 14 code words are generated for the given test set.

Fig 4.3 Simulation result for the final data block, decoded at 83.15 μs



The above figure shows the simulation result obtained for the existing Huffman coding system. From the simulation result it is observed that a total of 83.15 μs is taken to decode the complete encoded data set. The signal 'Test_set' shows the decoded data set retrieved from the encoded data bits.

Fig 4.4: Implementation of the Huffman decoder design on the targeted FPGA (Cyclone EP1C20F324C6), generated with the Chip Planner of the Quartus II tool

The figure shows the Chip Planner resource utilization of the targeted FPGA (Cyclone EP1C20F324C6) for the implemented Huffman coding system. It is observed that about 62% of the resources (12,968 of the 20,600 available logic cells) are utilized for the implementation of the existing Huffman coding system on the targeted FPGA.

V. CONCLUSION

This research work implemented variable-length symbol decoders, integrated together for decoding the symbols. From the obtained simulation results it can be clearly seen that the proposed high-speed decoding system takes about 43.75 μs to decode a 12 × 32 test data set, whereas the existing tree-based Huffman decoder takes about 83.15 μs for the same data set. The implemented high-speed decoder thus takes about 400 clock cycles less than the existing system. From these observations it can be concluded that the proposed system gives a higher decoding rate than the existing Huffman coding system.

REFERENCES

[1] J. Aerts and E. J. Marinissen, "Scan chain design for test time reduction in core-based ICs," in Proc. Int. Test Conf., 1998, pp. 448–457.
[2] I. Bayraktaroglu and A. Orailoglu, "Test volume and application time reduction through scan chain concealment," in Proc. Design Automation Conf., 2001, pp. 151–155.
[3] F. Brglez, D. Bryan, and K. Kozminski, "Combinational profiles of sequential benchmark circuits," in Proc. Int. Symp. Circuits Syst., 1989, pp. 1929–1934.
[4] M. L. Bushnell and V. D. Agrawal, Essentials of Electronic Testing for Digital, Memory, and Mixed-Signal VLSI Circuits. Norwell, MA: Kluwer, 2000.
[5] A. Chandra and K. Chakrabarty, "Test data compression for system-on-a-chip using Golomb codes," in Proc. VLSI Test Symp., 2000, pp. 113–120.
[6] A. Chandra and K. Chakrabarty, "Efficient test data compression and decompression for system-on-a-chip using internal scan chains and Golomb coding," in Proc. Design, Automation, Test Eur., 2001.
[7] A. Chandra and K. Chakrabarty, "Frequency-directed run-length codes with application to system-on-a-chip test data compression," in Proc. VLSI Test Symp., 2001, pp. 42–47.
[8] A. Chandra and K. Chakrabarty, "Reduction of SOC test data volume, scan power and testing time using alternating run-length codes," in Proc. Design Automation Conf., 2002, pp. 673–678.
[9] R. Chandramouli and S. Pateras, "Testing systems on a chip," IEEE Spectrum, pp. 42–47, Nov. 1996.
[10] D. Das and N. A. Touba, "Reducing test data volume using external/LBIST hybrid test patterns," in Proc. Int. Test Conf., 2000, pp. 115–122.
[11] R. Dorsch and H.-J. Wunderlich, "Reusing scan chains for test pattern decompression," in Proc. European Test Workshop, 2001, pp. 124–132.
[12] R. Dorsch and H.-J. Wunderlich, "Tailoring ATPG for embedded testing," in Proc. Int. Test Conf., 2001, pp. 530–537.
[13] A. El-Maleh, S. Al-Zahir, and E. Khan, "A geometric-primitives-based compression scheme for testing systems-on-a-chip," in Proc. VLSI Test Symp., 2001, pp. 54–59.
[14] P. Gonciari, B. M. Al-Hashimi, and N. Nicolici, "Improving compression ratio, area overhead, and test application time for systems-on-a-chip test data compression/decompression," in Proc. Design Automation Test Eur., 2002, pp. 604–611.
[15] I. Hamzaoglu and J. H. Patel, "Test set compaction algorithms for combinational circuits," in Proc. Int. Conf. Computer-Aided Design, 1998, pp. 283–289.
[16] X. Kavousianos, E. Kalligeros, and D. Nikolos, "Multilevel-Huffman test-data compression for IP cores with multiple scan chains," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 16, no. 7, pp. 926–931, Jul. 2008.
