Neural Networks 20 (2007) 156–171
www.elsevier.com/locate/neunet

The emergence of goals in a self-organizing network: A non-mentalist model of intentional actions

Yoram Louzoun a,∗, Henri Atlan b,c

a Math Department, Bar Ilan University, Ramat Gan, 52900, Israel
b Hadassah University Hospital, Jerusalem, Israel
c EHESS, Paris, France

Received 17 November 2005; accepted 8 May 2006

Abstract

A model of intentional actions is presented through the operation of two connected neural networks. A deterministic causal recurrent network relates a random initial state to an ordered final state. A perceptron-like, feed-forward network provides a memory mechanism that links the final states to the original initial states. A non-supervised learning mechanism selects which final states are defined as goals, to be retrieved together with the initial states leading to them. Causal sequences of states are thereby transformed into procedures directed towards the achievement of goals. We propose a mechanism through which goals and their achievement in goal-directed actions can be emergent properties of self-organizing networks not initially endowed with intentionality. This allows for a monist, non-mentalist description which does not need to resort to intentional mental states as causes of intentional actions. Cognitive, neurophysiological and philosophical implications are discussed.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Goal-directed; Intentions; Mental states; Neural networks

1. Introduction

Mental states are described by their semantic contents, in psychological and linguistic terms. Brain states are described in terms of neural network activity patterns. The two languages, so far, cannot be reduced to one another. The relation between mental states and brain states is the root of the mind–body problem.
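The two-network architecture described in the abstract can be illustrated with a minimal sketch. This is our own construction, not the paper's implementation: a one-pattern Hopfield-style network stands in for the deterministic causal recurrent network, a Hebbian outer-product map stands in for the perceptron-like feed-forward memory, and all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# N is odd so that the dot product of two +/-1 vectors is never zero,
# which keeps the sign dynamics deterministic.
N = 15

# --- Causal network: a one-pattern Hopfield-style recurrent net. ---
# Its single stored pattern plays the role of an "ordered final state".
pattern = rng.choice([-1.0, 1.0], size=N)
W = np.outer(pattern, pattern) / N   # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def relax(state, steps=20):
    """Deterministic synchronous dynamics: iterate to a fixed point."""
    s = state.copy()
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1.0          # break ties deterministically
        if np.array_equal(new, s):
            break
        s = new
    return s

# Causal sequences: random initial states deterministically relax
# to ordered final states (here, +pattern or -pattern).
initials = rng.choice([-1.0, 1.0], size=(50, N))
finals = np.array([relax(x) for x in initials])

# --- Memory network: a feed-forward Hebbian associative map that ---
# --- links each final state back to the initial state producing it. ---
M = initials.T @ finals / len(finals)

# Retrieval: given a final state treated as a "goal", recall an initial
# state, then rerun the causal dynamics; the goal is reached again,
# turning a causal sequence into a goal-directed procedure.
goal = finals[0]
recalled_initial = np.sign(M @ goal)
recalled_initial[recalled_initial == 0] = 1.0
reached = relax(recalled_initial)
```

In this toy setting the recalled initial state is a superposition of the stored initial states weighted by how their final states match the goal, so rerunning the causal dynamics from it recovers the goal; the paper's unsupervised selection of which final states count as goals is not modeled here.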
If one does not accept the dualist division of the world between two substances, extension (or matter) and thought, or body and mind, with the difficult question of the nature of their interactions, one is led to the idea, widely accepted in the cognitive literature, that mental states are a subset of brain states endowed with meaningful properties which can be described only in psycho-linguistic terms. The relationship between these states and "ordinary" brain states describable in physical terms of neural activity is often viewed as one of "supervenience" (Davidson, 1970), which adheres to the ontological materialist monism implied by modern science while allowing a functional dualism in which the same function can be implemented by different material substrates. However, this view is not devoid of circularity when it is used to explain intentional, or any other meaningful conscious, behavior as being caused by intentions or meanings attributed to such mental states.

The aim of this article is to get out of this circle and to provide a generic model for the emergence of goal-directed behaviors in neural networks not initially endowed with intentionality. Thus, no special "intentional" mental activity is required as the cause of brain states and physical movements in order to explain what we spontaneously attribute to conscious decisions. This view is akin to the description of intentionality as a property of intentional actions rather than of mental representations, which Elizabeth Anscombe, in her seminal book Intention (Anscombe, 1957), defends against "mentalist" descriptions that rely on vague notions of mental causes, often confused with motives and desires. The question of free will, as well as other more general psychological, metaphysical and ethical questions related to human intentionality and responsibility, is left outside the scope of this paper.

∗ Corresponding author. Tel.: +972 3 5317610; fax: +972 3 5317610.
E-mail address: louzouy@math.biu.ac.il (Y. Louzoun).
The model is indeed compatible with a worldview where free will is seen as an illusion based in

0893-6080/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.neunet.2006.05.032