FIAS Frankfurt Institute for Advanced Studies

Ontogenesis of Invariance Transformations

Urs Bergmann and Christoph von der Malsburg
MailTo: ubergmann@fias.uni-frankfurt.de
Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe Universität Frankfurt

Motivation

We describe a model for the prenatal self-organization of the invariance transformations necessary for correspondence-based vision [1, 2]. The emerging circuits overcome several problems of previous versions:
- speed of recognition [3, 4]
- smaller search space [5]
- reduced number of connections [6]

Figure 1: Schematic of correspondence-based recognition. Left: One control unit per link [5]. Right: A more parsimonious architecture uses single units to control several links belonging to the same transformation.

Why prenatal?
- evidence from precocial animals
- a phylogenetically old problem
- simplifies postnatal learning significantly
- external stimulation is not necessary
- no disturbance from external stimuli

Map formation mechanisms

Crucial ingredients of (activity-based) map-formation mechanisms:
- Competition in columns and rows, to enforce a 1-1 mapping
- Cooperation of neighboring weights, to favor topology-preserving mappings

Figure 2: Effective (after adiabatic approximation) weight interactions for a model of activity-based retino-tectal wiring.

Häussler equations [7]:

  \dot{W}_{\tau\rho} = \alpha + F_{\tau\rho} W_{\tau\rho} - W_{\tau\rho} B_{\tau\rho}(\alpha + F W)

  B_{\tau\rho}(X) = \left( \sum_{\tau'} X_{\tau'\rho} + \sum_{\rho'} X_{\tau\rho'} \right) / 2N

where F_{\tau\rho} mediates the cooperation via a low-pass filtered version of the weights themselves, and B_{\tau\rho} implements the competition of synaptic growth.

Multimap

Definition: A multimap is a set of maps with different transformation parameters.

Three interactions suffice to generate a multimap:
1. Competition in columns and rows, to enforce a 1-1 mapping within a single map
2. Cooperation of neighboring weights, to favor topology within a single map
3. Competition of weights controlling identical links, to enforce different transformation parameters between maps

Figure 3: Schematic for the interactions of three maps.

  \dot{W}^m_{\tau\rho} = \alpha + F^m_{\tau\rho} W^m_{\tau\rho} - W^m_{\tau\rho} B^m_{\tau\rho}(\alpha + F W^m)

  B^m_{\tau\rho}(X) = \left( \sum_{\tau'} X^m_{\tau'\rho} + \sum_{\rho'} X^m_{\tau\rho'} + \sum_{m' \neq m} X^{m'}_{\tau\rho} \right) / (2N + M - 1)

Note that competition at the same τ and ρ but different m is a local process and can therefore easily be implemented by, e.g., a growth-regulating transmitter.

Unstructured initial connectivity

Assume all-to-all connectivity between two layers with random initial weights. In this case the symmetry has to be broken slightly in the beginning to guarantee convergence of the control transformations to the same orientation.

Figure 4: Left: Random initial conditions with slightly broken initial symmetry. Each color channel (r, g, b) of a link codes for its association with one of the three corresponding control units. Right: Final weight association.

Connectivity pre-structured as routing circuit

The unstructured all-to-all two-layer system needs an unrealistic number of connections. We therefore simulated the proposed algorithm on a multilayer routing circuit with minimized connection number [6]. The circuit turns out to be a viable basis for the proposed algorithm and exemplifies the flexibility of the process.

Figure 5: Left: A multilayer routing circuit with minimized connection number but all-to-all connectivity [6] can emerge before multimap formation [8]. Middle: The initial receptive fields (RFs) of the three control units. Right: The RFs after self-organization.
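The multimap dynamics can be sketched numerically. The following Python fragment is an illustrative assumption, not the poster's actual simulation code: the cooperation term F is modeled here as a Gaussian neighborhood smoothing of the weights, the equations are integrated with plain Euler steps, and all parameters (alpha, the smoothing width sigma, the step size dt, layer size N = 16, and M = 3 maps) are hypothetical choices for a 1-D toy setting.

```python
import numpy as np

def cooperation(W, sigma=1.0):
    # F: low-pass filtered version of the weights. Neighboring links
    # cooperate, which favors topology-preserving maps. Here: Gaussian
    # smoothing along both layer coordinates (an assumed choice of F).
    N = W.shape[0]
    idx = np.arange(N)
    G = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * sigma ** 2))
    G /= G.sum(axis=1, keepdims=True)
    return G @ W @ G.T

def multimap_step(W, alpha=0.01, dt=0.05):
    # W has shape (M, N, N): M candidate maps between two 1-D layers.
    M, N, _ = W.shape
    # Growth term: alpha + F_{tau,rho} * W_{tau,rho}, per map.
    growth = alpha + np.stack([cooperation(Wm) for Wm in W]) * W
    # Competition B: rows and columns within a map (1-1 mapping) plus
    # the same link (tau, rho) in the other maps (different parameters).
    rows = growth.sum(axis=2, keepdims=True)            # sum over rho'
    cols = growth.sum(axis=1, keepdims=True)            # sum over tau'
    cross = growth.sum(axis=0, keepdims=True) - growth  # other maps m'
    B = (rows + cols + cross) / (2 * N + M - 1)
    # Euler step of dW/dt = growth - W * B, clipped to nonnegative weights.
    return np.clip(W + dt * (growth - W * B), 0.0, None)

rng = np.random.default_rng(0)
# All-to-all connectivity with slightly broken initial symmetry.
W = 1.0 + 0.1 * rng.random((3, 16, 16))
for _ in range(2000):
    W = multimap_step(W)
```

Without the `cross` term in B, nothing would prevent all maps from converging to the same transformation; the between-map competition at identical links is what forces them onto different parameters.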
Summary

We showed that:
- translation circuits for invariant recognition can be self-organized prenatally
- they emerge from an extended map-formation mechanism

Outlook
- add other transformations (scalings and rotations)
- integrate feature space organization
- postnatal adaptation / learning

Acknowledgements

Supported by the EU projects "Daisy" and "Seco", the Hertie Foundation and the Volkswagen Foundation.

References

[1] B. A. Olshausen, C. H. Anderson, and D. C. Van Essen. A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J Neurosci, 13(11):4700–4719, Nov 1993.
[2] Junmei Zhu and Christoph von der Malsburg. Maplets for correspondence-based object recognition. Neural Networks, 17:1311–1326, 2004.
[3] L. Wiskott and C. von der Malsburg. Recognizing faces by dynamic link matching. Neuroimage, 4(3 Pt 2):S14–S18, Dec 1996.
[4] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381(6582):520–522, Jun 1996.
[5] Jörg Lücke, Christian Keck, and Christoph von der Malsburg. Rapid convergence to feature layer correspondences. Neural Computation, accepted, 2007.
[6] Philipp Wolfrum and Christoph von der Malsburg. What is the optimal architecture for visual information routing? Neural Computation, 19(12):3293–3309, 2007.
[7] A. F. Häussler and Christoph von der Malsburg. Development of retinotopic projections: An analytic treatment. J. Theor. Neurobiol., 2(47), 1983.
[8] Philipp Wolfrum and Christoph von der Malsburg. A marker-based model for the ontogenesis of routing circuits. In Artificial Neural Networks – ICANN 2007, volume 4669 of LNCS, pages 1–8. Springer, 2007.