A High-Level Hardware Architecture of the Binarizer for the H.264/AVC CABAC Encoder

N. Jarray, S. Dhahri, M. Elhaji and A. Zitouni
Electronic and Micro-Electronic Laboratory, Faculty of Sciences of Monastir, Monastir 5000, Tunisia
[jarray.nedra, dahrisalah, abdelkrim_zit]@yahoo.fr

Abstract — The H.264/AVC standard represents an enormous step forward in video compression technology. It achieves better compression efficiency thanks to more precise prediction functions and better error resilience, and it enables video encoders that deliver higher-quality streams at higher frame rates and resolutions. H.264 is distinguished from earlier video standards by offering two entropy coders: CAVLC (Context-Adaptive Variable-Length Coding) and CABAC (Context-Adaptive Binary Arithmetic Coding). In this paper, we present the main blocks of the CABAC encoder (binarization, context modeling and binary arithmetic coding). In addition, we propose a new hardware architecture for the CABAC binarization process that supports all binarization techniques. The design was simulated with ModelSim and synthesized with the Xilinx ISE tool targeting FPGA technology. The proposed design consumes about 400 slices and runs at a frequency of 145.15 MHz.

I. INTRODUCTION

H.264/AVC is the newest international video coding standard. It has been adopted for videophone, HDTV and digital multimedia broadcasting. It provides much higher coding performance than previous standards, combining many advanced compression techniques such as the integer DCT, quantization, intra/inter-frame prediction and entropy coding [5]. H.264/AVC employs two kinds of entropy coding: Context-Adaptive Variable-Length Coding (CAVLC) and Context-Adaptive Binary Arithmetic Coding (CABAC).
CABAC is a form of entropy coding used in H.264/AVC video coding. It is a lossless compression technique; it is supported only in the Main and higher profiles, and it requires a large amount of processing to decode compared with similar algorithms. CAVLC [4] is also a form of lossless entropy coding used in H.264/AVC. Unlike CABAC, CAVLC is supported in all H.264 profiles, but it is used to encode only the residual coefficients. Compared with CAVLC, CABAC is more effective, achieving an average bit-rate saving of 9%-14%.

Much work can be found in the literature describing CABAC architectures. However, most of it does not describe the processing stages of CABAC in sufficient detail. Nonetheless, some hardware designs do address the binarization process. The architecture proposed in [1] supports all binarization techniques defined in H.264/AVC. The architecture in [2] implements the binarization function and computes the context index for each bin in parallel.

The rest of this paper is organized as follows: Section 2 presents the CABAC algorithm, Section 3 introduces the proposed architecture, and Section 4 presents the implementation results and comparisons.

II. THE CABAC ALGORITHM

The CABAC encoder block diagram is presented in Figure 1. It consists of three steps: binarization, context modeling and arithmetic coding.

Figure 1: CABAC encoder block diagram (syntax element → binarization → context modeling → arithmetic coding).

BINARIZATION

The binarization [3] process translates a non-binary syntax element (SE), as defined in the H.264/AVC standard, into a bin string. CABAC defines four basic methods to binarize most SEs: the unary code, the truncated unary code, the fixed-length code and the Exp-Golomb code. In addition, some SEs are binarized with concatenations of these basic schemes, and the mb_type and sub_mb_type SEs are binarized through look-up tables.
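To make the four basic binarization methods concrete, the following is an illustrative software model in Python (the function names are ours, not from the paper; the paper's own design is a hardware implementation). The unary code emits N '1' bins and a terminating '0'; the truncated unary code drops the terminator at the cut-off value cMax; the fixed-length code is a plain unsigned binary string; and the k-th order Exp-Golomb routine follows the prefix/suffix loop used for the UEGk suffixes in the standard.

```python
def unary(x):
    """Unary (U) binarization: x '1' bins followed by a terminating '0'."""
    return "1" * x + "0"

def truncated_unary(x, c_max):
    """Truncated unary (TU) binarization with cut-off c_max: like unary,
    but the terminating '0' is omitted when x equals c_max."""
    return "1" * x if x == c_max else "1" * x + "0"

def fixed_length(x, n_bits):
    """Fixed-length (FL) binarization: n_bits-bit unsigned binary string."""
    return format(x, "0{}b".format(n_bits))

def exp_golomb_k(x, k):
    """k-th order Exp-Golomb (EGk) binarization, following the
    prefix/suffix construction used for CABAC's UEGk suffixes:
    emit '1' and halve the remaining range (doubling the suffix order)
    while x fits, then emit '0' followed by k suffix bits."""
    bins = []
    while True:
        if x >= (1 << k):
            bins.append("1")
            x -= 1 << k
            k += 1
        else:
            bins.append("0")
            while k > 0:
                k -= 1
                bins.append(str((x >> k) & 1))
            return "".join(bins)
```

For example, unary(3) gives "1110", truncated_unary(3, 3) gives "111", and fixed_length(5, 3) gives "101". A concatenated scheme such as UEGk simply joins a truncated-unary prefix with an EGk suffix for values above the cut-off.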
CONTEXT MODELING

The context modeler [3] reads in the bin string and generates a context value according to the neighboring data of the top and left macroblocks. The context value is an index into the context table, which is built at the beginning of the processing of each new slice. One of the most important properties of arithmetic coding is the possibility of a clean interface between modeling and coding: in the modeling stage, a probability model is assigned to the given symbols, which then, in the coding stage, drives the
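The neighbor-based context selection described above can be sketched as follows. This is a hedged illustration, not the paper's design: for many flag-type SEs (e.g. mb_skip_flag), the context index increment is simply the sum of condition terms from the left and top neighboring macroblocks, and the resulting index selects an entry (probability state, most-probable-symbol) in the context table. The table size and entry layout here are placeholders.

```python
def ctx_idx_inc(left_flag, top_flag):
    """Context index increment from the left and top neighboring
    macroblocks. This sum-of-neighbor-conditions pattern is used for
    several flag-type SEs; other SEs have their own derivation rules."""
    return int(left_flag) + int(top_flag)

# Hypothetical context table: each entry is [probability state, MPS bit].
# Real CABAC tables hold several hundred contexts, initialized per slice.
context_table = [[0, 0] for _ in range(4)]

def select_context(base_idx, left_flag, top_flag):
    """Select the context model for a bin from a base index plus the
    neighbor-derived increment."""
    return context_table[base_idx + ctx_idx_inc(left_flag, top_flag)]
```

The selected entry supplies the probability estimate that the arithmetic coding stage consumes, and its state is updated after each coded bin.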