Journal of Mathematical Imaging and Vision 21: 27–41, 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.

A General Framework for Trajectory Triangulation*

JEREMY YIRMEYAHU KAMINSKI AND MINA TEICHER
Department of Mathematics and Statistics, Bar-Ilan University, Ramat-Gan, Israel
kaminsj@math.biu.ac.il
teicher@math.biu.ac.il

Abstract. The multiple view geometry of static scenes is now well understood. Recently, attention has turned to dynamic scenes, in which scene points may move while the cameras move. The triangulation of linear trajectories is now well handled, and the case of quadratic trajectories has also received some attention. We present a complete generalization and address the problem of general trajectory triangulation of moving points from non-synchronized cameras. Two cases are considered: (i) the motion is captured in the images by tracking the moving point itself; (ii) only the tangents of the motion are extracted from the images. The first case is based on a representation of curves (trajectories), new to computer vision, in which a curve is represented by a family of hypersurfaces in the projective space P^5. The second case is handled by considering the dual curve of the curve generated by the trajectory. In both cases these representations of curves allow: (i) the triangulation of the trajectory of a moving point from non-synchronized sequences, (ii) the recovery of a more standard representation of the whole trajectory, and (iii) the computation of the set of positions of the moving point at each time instant at which an image was made. Furthermore, theoretical considerations lead to a general theorem stipulating how many independent constraints a camera provides on the motion of the point. This number of constraints is a function of the camera motion. On the computational front, in both cases the triangulation leads to equations in which the unknowns appear linearly.
Therefore the problem reduces to estimating a high-dimensional parameter in the presence of heteroscedastic noise. Several methods are tested.

Keywords: structure from motion, trajectory triangulation, mathematical methods in 3D reconstruction

1. Introduction

The theory and practice of multiple view geometry is well understood when the scene consists of static point and line features. A summary of the past decade of work in this area can be found in [9, 14]. However, a new body of research has recently appeared which considers configurations of independently moving points, first with the pioneering work of Avidan and Shashua [1], and then in other contributions [6, 11, 12, 19, 22–24]. A common assumption of these works is that the motion must occur along a straight line or a conic section. When the motion is linear and at constant velocity, the recovery of the trajectory is done linearly. However, for quadratic trajectories, the computations are nonlinear. Some authors have also considered the case where the motion is captured by tangential measurements within the images [21], but remained limited to linear or quadratic trajectories. We present a complete generalization and address the question of general trajectory triangulation, from both point measurements and tangential measurements. More precisely, we address the two following problems.

* This work was partially supported by the Emmy Noether Research Institute for Mathematics (center of the Minerva Foundation of Germany), the Excellency Center "Group Theoretic Methods in the Study of Algebraic Varieties" of the Israel Science Foundation, and EAGER (EU network, HPRN-CT-2009-00099).
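The introduction notes that for linear, constant-velocity motion the trajectory is recovered linearly. As a minimal illustration of that baseline case (a sketch, not the algorithm developed in this paper), the following code assumes calibrated 3x4 projection matrices and known, possibly non-synchronized, time stamps; a point moving as P(t) = X0 + t*V then yields, from each observation, two equations that are linear in the six unknowns (X0, V). The function name and interface are hypothetical.

```python
import numpy as np

def triangulate_linear_trajectory(cameras, times, points2d):
    """Recover X0 (position at t = 0) and velocity V of a point moving as
    P(t) = X0 + t*V, from image observations (u, v) taken by cameras with
    known 3x4 projection matrices M and known time stamps t.

    Each observation contributes two equations linear in (X0, V):
        u * (m3 . Ph) = m1 . Ph   and   v * (m3 . Ph) = m2 . Ph,
    where m1, m2, m3 are the rows of M and Ph = [X0 + t*V; 1]."""
    A, b = [], []
    for M, t, (u, v) in zip(cameras, times, points2d):
        m1, m2, m3 = M
        for mi, w in ((m1, u), (m2, v)):
            c = w * m3[:3] - mi[:3]               # coefficients of X0
            A.append(np.concatenate([c, t * c]))  # [X0 coeffs | V coeffs]
            b.append(mi[3] - w * m3[3])           # constant term to the RHS
    p, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return p[:3], p[3:]  # X0, V
```

Since the unknowns appear linearly, any number of views (at least three, with distinct time stamps and non-degenerate camera placement) can be stacked into one least-squares system; this plain least-squares solve ignores the heteroscedastic noise structure discussed in the abstract.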