A Dynamical Field Model for Joint Action in Autonomous Robots

E. Bicho 1, N. Hipólito 1, L. Louro 1, S. Cambon 1, W. Erlhagen 2
1 {estela,nhipolito,llouro,scambon}@dei.uminho.pt; 2 wolfram.erlhagen@mct.uminho.pt

Abstract

To interact efficiently with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. We propose a control architecture that is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. The architecture implements goal-directed behaviour in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. It is validated here in a task in which two mobile robots first have to search for objects in a cluttered environment and subsequently transport them to a common construction place.

Context: the JAST project

Construction task: requirements and constraints of the robot team.
• First, the robots have to search for, select and transport particular objects to the construction area.
• Then, the robots have to assemble a robot platform.
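The "coupled system of dynamic neural fields" mentioned in the abstract builds on Amari-type field dynamics. As a rough illustration, a single such field can be simulated with an explicit Euler scheme. The function below is a sketch only: the name `simulate_field`, the Gaussian input, the "Mexican-hat" interaction kernel and every parameter value are illustrative assumptions, not the configuration used on the robots.

```python
import numpy as np

def simulate_field(steps=2000, n=360, tau=10.0, h=-2.0, dt=0.2):
    """Euler integration of a single dynamic neural field over the
    direction phi (degrees, 1-degree sampling, so the interaction
    integral reduces to a matrix product). All parameters are
    illustrative assumptions."""
    phi = np.arange(n, dtype=float)
    # circular distance between every pair of field sites
    d = np.abs(phi[:, None] - phi[None, :])
    d = np.minimum(d, n - d)
    # interaction kernel w: local excitation, broader inhibition
    w = 2.0 * np.exp(-d**2 / (2.0 * 10.0**2)) - 1.0 * np.exp(-d**2 / (2.0 * 30.0**2))
    # localized external input S centred at phi = 90 degrees
    d90 = np.minimum(np.abs(phi - 90.0), n - np.abs(phi - 90.0))
    S = 6.0 * np.exp(-d90**2 / (2.0 * 8.0**2))
    f = lambda u: 1.0 / (1.0 + np.exp(-u))   # sigmoidal output function
    u = np.full(n, h)
    u_on = None
    for t in range(steps):
        inp = S if t < steps // 2 else 0.0   # input switched off halfway
        # right-hand side of the field equation: -u + h + S + integral term
        u = u + (dt / tau) * (-u + h + inp + w @ f(u))
        if t == steps // 2 - 1:
            u_on = u.copy()                  # field state while input is present
    return phi, u_on, u
```

While the input is on, a localized peak of activation forms at the input direction; after the input is removed, the recurrent interaction may keep the peak alive (memory) or let it decay back to rest (forgetting), depending on kernel strength and resting level.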
• The robots have no prior knowledge of the environment.
• The workspace is cluttered with obstacles.
• There is no explicit communication between the robots.

Cognitive functions
• Memory
• Forgetting
• Predictive perception
• Anticipation
• Elementary action understanding
• Decision making

The Dynamical Field Model

The neural activation u_i(φ, t) in each layer (i = STS, WM, Goal) evolves continuously in time according to the field dynamics

    τ_i ∂u_i(φ, t)/∂t = −u_i(φ, t) + h_i(t) + S_i(φ, t) + ∫₀³⁶⁰ w_i(φ − φ′) f(u_i(φ′, t)) dφ′

where h_i(t) is the resting level, S_i(φ, t) is the summed external input to layer i, w_i is the interaction kernel and f is the (sigmoidal) output function.

[Figure: field activation as a function of direction φ. A flat, subthreshold pattern corresponds to the absence of information; localized peaks of activation specify graded information about a direction.]

Layers of the model:
• STS (Superior Temporal Sulcus): visual description of the motion displayed by the partner robot.
• PFC (Prefrontal Cortex): a Working Memory (WM) layer holding information about the objects (e.g. location, properties), and a Goal layer whose internal goal representation guides the forthcoming action sequence.
• PreMC (Premotor Cortex): goal-directed action sequence.

Results

Two joint-search scenarios illustrate the model: (i) the importance of working memory, and (ii) prediction and anticipation based on an initial estimate, in which a hidden target is selected (occluder paradigm, robots Dumbo and Jumbo; the partner robot is hidden behind a wall and an anticipatory wave propagates in STS).

Joint search: the importance of Working Memory

[Figure: snapshots of the STS, WM and Goal fields of robot R1 over direction and time, together with the inputs to each layer, the visual information about the objects (targets T1 and T2, with T1 later hidden) and about the partner robot, and the excitatory and inhibitory connections between the layers.]

Snapshots at different points in time show the activity (red line) of the three dynamic fields STS, WM and Goal, together with their inputs (pink line), in the control architecture of robot R1. At time t0, R1 can see both targets T1 and T2; their locations are represented in WM, and R1 selects the nearer target, T2 (Goal layer). At t1, object T1 goes out of sight.
Nevertheless, its position is still represented in WM by a peak of activation (although of decreased amplitude). The moving pulse in STS, representing the motion of the partner robot, ends up inhibiting the decision to move toward object T2 (from t1 to t2), and a switch in the decision arises: R1 now moves toward the occluded object (t3 to t5), while R2 maintains the decision to grasp object T2. From t5 to t7, both robots move to the construction area.

Joint search: prediction and anticipation based on an initial estimate

[Figure: snapshots at t0 to t7 of the STS, WM and Goal fields of robot R1; the partner robot is hidden behind a wall, and an anticipatory wave propagates in STS.]

Here the capacity to represent motion information in STS about the partner robot even when it is hidden from view (occluder paradigm) is illustrated. Initially, robot R1 moves toward T1, as represented in its Goal layer. From t1 to t2, robot R2, which is also moving toward object T1, disappears behind a wall. The peak of activation in STS nevertheless persists; this self-stabilized wave, representing the motion of the (hidden) partner, ends up inhibiting the decision to grasp object T1, and robot R1 switches its decision to object T2 (represented by a peak of activation in the Goal field centred at the direction of T2 at time t2). After picking up the objects, both robots bring them to the construction area.

Joint action sub-tasks: joint search and transport; assembling the toy.

[Figure: a stable localized firing pattern of neurons encodes the goal direction φp, which serves as an attractor for the heading direction of the robot.]

References
• W. Erlhagen, A. Mukovskiy, E. Bicho, "A dynamic model for action understanding and goal-directed imitation", Brain Research, 1083:174-188, 2006.
• W. Erlhagen, A. Mukovskiy, E. Bicho, G. Panin, C. Kiss, A. Knoll, H. van Schie, H. Bekkering, "Goal-directed imitation in robots: a bio-inspired approach to action understanding and skill learning", Robotics and Autonomous Systems, in press.
• W. Erlhagen, G. Schöner, "Dynamic field theory of motor preparation", Psychological Review, 109:545-572, 2002.
• E. Bicho, P. Mallet, G. Schöner, "Target representation on an autonomous vehicle with low-level sensors", The International Journal of Robotics Research, 19:424-447, 2000.
• L. Fogassi, P.F. Ferrari, B. Gesierich, S. Rozzi, F. Chersi, G. Rizzolatti, "Parietal lobe: from action organization to intention understanding", Science, 308:662-667, 2005.
• V. Gallese, A. Goldman, "Mirror neurons and the simulation theory of mind-reading", Trends in Cognitive Sciences, 2:493-501, 1998.
• T. Jellema, D.I. Perrett, "Coding visible and hidden actions", in W. Prinz and B. Hommel (eds.), Attention and Performance XIX: Common Mechanisms in Perception and Action, pp. 267-290, Oxford University Press, 2002.
• E.K. Miller, "The prefrontal cortex and cognitive control", Nature Reviews Neuroscience, 1:59-65, 2000.
• G. Schöner, M. Dose, C. Engels, "Dynamics of behaviour: theory and applications for autonomous robot architectures", Robotics and Autonomous Systems, 16:213-245, 1995.

Robot Architecture

Each module in the control architecture consists of several interconnected layers which represent the basic functionality of neural populations.

[Block diagram of the control architecture:
• Visual modules: one with an omnidirectional camera and a PTU-mounted camera, and one with a PTU-mounted camera and a hand-mounted wireless camera.
• STS (partner motion): memory, forgetting, prediction, anticipation.
• PFC: the WM layer (objects' locations and properties) and the Goal layer (decision level), with high-level goals G1 (search and transport objects) and G2 (assembling the toy) and sub-goals SubW1, SubP, SubB1, SubW&P (e.g. joint transportation vs. transporting alone).
• PreMC: goal-directed sequence of motor primitives, motion planning, and the motor command.]
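At the motion-planning level of the architecture, goal and obstacle directions act on the robot's heading direction as attractors and repellers of a dynamical system (cf. Schöner, Dose & Engels 1995 and Bicho et al. 2000 in the references). The function below is a minimal sketch of that idea; the sinusoidal target term, the Gaussian-windowed obstacle term and all gains are illustrative assumptions.

```python
import math

def heading_rate(phi, psi_tar, psi_obs=None,
                 lam_tar=1.0, lam_obs=2.0, sigma=0.4):
    """Rate of change dphi/dt of the robot's heading direction phi
    (radians): an attractor at the target direction psi_tar and,
    optionally, a repeller at an obstacle direction psi_obs.
    Gains lam_tar, lam_obs and the angular range sigma are
    illustrative assumptions."""
    # attractive contribution: vanishes at psi_tar, turns the heading toward it
    dphi = -lam_tar * math.sin(phi - psi_tar)
    if psi_obs is not None:
        d = phi - psi_obs
        # repulsive contribution, limited in angular range by sigma
        dphi += lam_obs * d * math.exp(-d * d / (2.0 * sigma * sigma))
    return dphi
```

Euler-integrating `phi += dt * heading_rate(phi, ...)` steers the heading into the goal direction while turning it away from obstacle directions that lie close to the current heading.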