Calibration Method for an Augmented Reality System

S. Malek, N. Zenati-Henda, M. Belhocine and S. Benbelkacem

Abstract—In geometrical camera calibration, the objective is to determine a set of camera parameters that describe the mapping between 3D reference coordinates and 2D image coordinates. In this paper, a calibration and tracking technique is presented, based on a least squares method and a correlation technique, developed as part of an augmented reality system. This approach is fast and can be used in a real-time system.

Keywords—Camera calibration, pinhole model, least squares method, augmented reality, strong calibration.

I. INTRODUCTION

The term Augmented Reality (AR) is used to describe systems that blend computer-generated virtual objects with real environments [1], [2]. AR is defined as a technology in which a user's view of the real world is enhanced or augmented with additional information generated by a computer [3]. This augmentation may include labels (text), 3D rendered models, or shading and illumination changes. AR allows a user to work with and examine the physical world [4], while receiving additional information about the objects in it.

In order for AR to be effective, the real and computer-generated objects must be accurately positioned relative to each other, and the properties of certain devices must be accurately specified. This implies that certain measurements or calibrations need to be made at the start of the system [5]. Calibration is the first step in an AR system.

Camera calibration, in the context of three-dimensional computer vision, is the process of determining the internal geometric and optical characteristics of the camera (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) [6]. In many cases, the overall performance of the computer vision system strongly depends on the accuracy of the camera calibration [7].
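The intrinsic/extrinsic split described above can be illustrated with the standard pinhole model, where the 3x4 projection matrix factors as P = K [R | t]. This is a minimal sketch with hypothetical numerical values (the paper gives no parameters in this excerpt); the zero-skew intrinsic matrix is an assumption.

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths (fx, fy) in pixels
# and principal point (cx, cy); the skew term is assumed to be zero.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsic parameters: identity rotation and a small translation.
R = np.eye(3)
t = np.array([[0.1], [0.0], [2.0]])

# Full 3x4 projection matrix of the pinhole model: P = K [R | t].
P = K @ np.hstack([R, t])

# Project a 3D world point (in homogeneous coordinates) onto the image plane.
X = np.array([0.0, 0.0, 1.0, 1.0])   # a point 1 m in front of the camera
u, v, w = P @ X
print(u / w, v / w)                  # pixel coordinates (u, v)
```

Calibration runs this mapping in reverse: given known 3D points and their measured pixel coordinates, it recovers K, R and t (or the combined matrix P).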
Salim Malek is an attached researcher with the vision team, Robotics Division, Advanced Technologies Development Center, CDTA (corresponding author; e-mail: s_malek@cdta.dz). Nadia Zenati-Henda is a Master researcher with the vision team, Robotics Division, CDTA (e-mail: nzenati@cdta.dz). Mahmoud Belhocine is a Master researcher with the vision team, Robotics Division, CDTA (e-mail: mbelhocine@cdta.dz). Samir Benbelkacem is an engineer with the vision team, Robotics Division, CDTA (e-mail: sbenbelkacem@cdta.dz).

There are different methods for estimating the parameters of the camera model. They can be classified into three groups:

- Nonlinear optimization techniques: the camera parameters are obtained through iteration, with the constraint of minimizing a determined function. This approach is used in many works [8], [9], [10].
- Linear techniques that compute the transformation matrix: because of the slowness and computational burden of the first approach, closed-form solutions have also been suggested. These techniques use the least squares method to obtain a transformation matrix that relates 3D points to their 2D projections. This approach is fast and can be used in a real-time application, but it ignores the nonlinear radial and tangential distortion components. It has been revisited in several works [11], [12].
- Two-step techniques: these approaches [13], [14] estimate some parameters linearly, while the others are estimated iteratively.

The technique described in this paper has been developed as part of an AR system, but it can also be used in other applications. Here, the least squares method is used to calibrate the camera, and a correlation technique is used to track the virtual object in the image sequence.

This paper is organized as follows: in Section II, a brief survey of the camera model is given.
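The linear, least-squares family of techniques above is commonly realized as the Direct Linear Transform (DLT): each 3D–2D correspondence yields two homogeneous linear equations in the twelve entries of the projection matrix, and the least-squares solution is the right singular vector of the smallest singular value. This is an illustrative sketch, not the paper's exact formulation; the function name and the synthetic test data are hypothetical.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix by homogeneous least squares (DLT).

    world_pts: (n, 3) array of 3D points; image_pts: (n, 2) array of their
    2D projections. At least six non-coplanar points are required.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The least-squares solution of A p = 0 subject to ||p|| = 1 is the
    # right singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Synthetic check with a hypothetical known matrix: project 3D points,
# then recover the matrix (defined only up to scale) from the projections.
P_true = np.array([[800.0,   0.0, 320.0, 80.0],
                   [  0.0, 800.0, 240.0,  0.0],
                   [  0.0,   0.0,   1.0,  2.0]])
rng = np.random.default_rng(0)
world = rng.uniform(-1.0, 1.0, (10, 3))
world[:, 2] += 4.0                                # keep points in front of the camera
proj = np.hstack([world, np.ones((10, 1))]) @ P_true.T
img = proj[:, :2] / proj[:, 2:3]

P_est = estimate_projection_matrix(world, img)
P_est *= P_true[2, 3] / P_est[2, 3]               # fix the arbitrary scale and sign
```

Because the solution is linear, this runs fast enough for online use, which is why such closed-form methods suit real-time AR; the trade-off, as noted above, is that lens distortion is not modeled.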
In Section III, the details of the camera calibration approach are presented, followed by a description of the tracking technique in Section IV. A description and discussion of experimental results are presented in Section V. Finally, conclusions are given in Section VI.

II. CAMERA MODEL

A model is a mathematical formulation that approximates the behavior of a physical device, in this case a camera. In such a model, the internal geometry and the position and orientation of the camera in the scene are represented. In an AR system, there are both real entities in the user's environment and virtual entities. Calibration is the process of estimating the parameters of the camera in order to match the virtual objects with their physical counterparts. These parameters may be the optical characteristics of a physical camera, as well as the position and orientation of various entities such as the camera and the various objects.

S. Malek, N. Zenati-Henda, M. Belhocine and S. Benbelkacem, "Calibration Method for an Augmented Reality System," World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol. 2, No. 9, 2008.