International Conference & Workshop on Recent Trends in Technology, (TCET) 2012
Proceedings published in International Journal of Computer Applications® (IJCA)

Video Compression Using MPEG

Sangeeta Mishra
TCET, Mumbai
Kandivli (E)

Sudhir Sawarkar
DMCE
Airoli

ABSTRACT
In this paper the MPEG compression technique is implemented, and the encoding and decoding times are measured. After observing runs with different numbers of frames per iteration, it was found that encoding takes a few minutes, whereas decoding finishes in a few seconds; that is, once a sequence has been encoded, decoding it is very fast.

General Terms
Communication

Keywords
Video Compression, MPEG, Frames

1. INTRODUCTION
In multimedia applications, the transfer of video and audio data is troublesome: the data is so large that the QoS of the transfer becomes very poor. Data compression can resolve this problem under small-bandwidth conditions, so multimedia data compression has become an important issue.

Block Matching (BM) is a very important stage in video compression, and it provides an effective way to estimate an object's motion from time-varying image sequences. In this algorithm, each image frame is divided into non-overlapping blocks, and for each block the best displacement vector between two consecutive frames is searched. In the past two decades there has been extensive research into motion estimation techniques. Block-based matching has been widely adopted by international standards such as H.261, H.263, MPEG-2 and MPEG-4 [1][2] due to its effectiveness and robustness; therefore, most research work has concentrated on optimizing the block-based motion estimation technique. Video images can be regarded as a three-dimensional generalization of still images, where the third dimension is time.
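As an illustration of the block-matching stage described above, the following sketch performs an exhaustive full search minimising the sum of absolute differences (SAD). The 8×8 block size and ±3-pixel search range are illustrative choices, not values taken from this paper.

```python
# Illustrative full-search block matching with a SAD cost.
# Block size and search range are assumptions for the example.
import numpy as np

def block_match(ref, cur, block=8, search=3):
    """For each non-overlapping block of `cur`, find the displacement
    (dy, dx) into `ref` that minimises the SAD. Returns {(y, x): (dy, dx)}."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(target - cand).sum())
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# A frame shifted by one row and two columns: the interior block's
# motion vector should be recovered exactly as (-1, -2).
np.random.seed(0)
ref = np.random.randint(0, 256, (16, 16)).astype(np.uint8)
cur = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)
mv = block_match(ref, cur)
```

The exhaustive search is O(block count × search-window area); the fast search strategies surveyed in the motion-estimation literature reduce this cost by probing only a subset of candidate displacements.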
Each frame of a video sequence can be compressed by any image compression algorithm. A method in which the images are separately coded by JPEG is sometimes referred to as Motion JPEG (M-JPEG). A more sophisticated approach takes advantage of temporal correlations, i.e. the fact that subsequent images resemble each other very closely. This is the approach taken in the video compression standard MPEG (Moving Picture Experts Group).

2. MPEG
The MPEG standard covers both video and audio compression, and it also includes many technical specifications such as image resolution, video and audio synchronization, multiplexing of the data packets, network protocol, and so on. Here we consider only video compression at the algorithmic level. The MPEG algorithm relies on two basic techniques:
- Block-based motion compensation
- DCT-based compression

MPEG itself does not specify the encoder at all, but only the structure of the decoder and the kind of bit stream the encoder should produce. Temporal prediction techniques with motion compensation are used to exploit the strong temporal correlation of video signals. The motion is estimated by predicting the current frame on the basis of certain previous and/or following frames. The information sent to the decoder consists of the compressed DCT coefficients of the residual block together with the motion vector.

There are three types of pictures in MPEG:
- Intra-coded pictures (I)
- Predicted pictures (P)
- Bidirectionally predicted pictures (B)

An I-frame is an 'intra-coded picture': in effect a fully specified picture, like a conventional static image file. P-frames and B-frames hold only part of the image information, so they need less storage space than an I-frame and thus improve video compression rates. A P-frame ('predicted picture') holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded.
The encoder does not need to store the unchanging background pixels in the P-frame, thus saving space. P-frames are also known as delta-frames. A B-frame ('bi-predictive picture') saves even more space by using the differences between the current frame and both the preceding and following frames to specify its content.

Typically, pictures (frames) are segmented into macroblocks, and individual prediction types can be selected on a macroblock basis rather than being the same for the entire picture, as follows:
- I-frames can contain only intra macroblocks
- P-frames can contain either intra macroblocks or predicted macroblocks
- B-frames can contain intra, predicted, or bi-predicted macroblocks

Furthermore, in the video codec H.264 a frame can be segmented into sequences of macroblocks called slices, and instead of using I-, B- and P-frame type selections, the encoder can choose the prediction style distinctly for each individual slice. Several additional types of frames/slices are also found in H.264: