TarzaNN: A General Purpose Neural Network Simulator for Visual Attention Modeling

Albert L. Rothenstein, Andrei Zaharescu, John K. Tsotsos
Dept. of Computer Science and Centre for Vision Research
York University, Toronto
{albertlr, andreiz, tsotsos}@cs.yorku.ca

Abstract

A number of computational models of visual attention exist, but making comparisons is difficult due to their incompatible implementations and the differing levels at which the simulations are conducted. To address this issue, we have developed a general-purpose neural network simulator that allows all of these models to be implemented in a unified framework. The simulator allows for the distributed execution of models in a heterogeneous environment. Graphical tools are provided for the development of models by non-programmers, and a common model description format facilitates the exchange of models. In this paper we present the design of the simulator and results that demonstrate its generality.

1 Introduction

Even though attention is a pervasive phenomenon in primate vision, surprisingly little agreement exists on its definition, role and mechanisms, due at least in part to the wide variety of investigative methods. As elsewhere in neuroscience, computational modeling has an important role to play: it is the only technique that can bridge the gap between these methods [1] and provide answers to questions that are beyond the reach of current direct investigative methods. A number of computational models of primate visual attention have appeared over the past two decades (see [2] for a review). While all models share several fundamental assumptions, each is based on a unique hypothesis and method, and each seems to provide a satisfactory explanation for several experimental observations.