Volume 8(3) 124-126 (2015) - 124
J Comput Sci Syst Biol
ISSN: 0974-7230 JCSB, an open access journal
Research Article Open Access
Licata, J Comput Sci Syst Biol 2015, 8:3
DOI: 10.4172/jcsb.1000179
Keywords: Artificial neural networks; Feed-forward networks;
Recurrent networks; Brain; Mind
Introduction
Nowadays we are free of the illusion that computers can be good
models for the human mind. The human mind is the result of the bio-
physical structure of a nervous system in a body which evolved to
survive in the environment, in communication with other individuals
of the same species and in relationship with other species of the ecosystem:
its power is due to a very long and hard evolution, and we are not able
to understand its complexity [1].
The goal that A.I. should attain is the emulation, through
a computer, of some processes of the mind in relationship with the
environment (the world and other individuals).
With respect to this objective I want to underline two obstacles in
the neural network strategy:
1) Neural networks are a strategy to emulate directly the behavior
of the brain, not the behavior of the mind. Thus an important problem
that the neural network strategy misses is the gap between brain and mind.
This is the problem of translating states of neuronal activation
into concrete mental activity. The mind/brain translation problem will
not be overcome until we have a clear theory of thought,
consciousness, perception and action as cerebral phenomena.
Moreover, if this theory is to be useful to the neural network strategy,
it must be conceived following the philosophy and
language of neural networks. A theory that speaks the language of neural networks should
consider thought (i.e. mental representations, planning, consciousness,
memory and so on), perception and action not as "states" but as fluxes
of states which pass through the network (ordered and structured sets of
states which pass through the network). About these fluxes, which we, as
thinking brains, perceive in ourselves, we have unclear ideas of their
beginning, their development and their ending, but we know that
perception can generate them.
2) Artificial neural networks are very poor imitations of the brain.
The human brain is a "network" of 100 billion neurons in which each
neuron is connected to many thousands of other neurons, so in a
brain there are millions of billions of connections. There are many
kinds of neural network structure, but the architecture of the most
common neural networks consists of a simple three-layer structure of
artificial neurons, like the three-layer "perceptron" of Figure 1, which
henceforth I will call TLP.
Discussion
Neural networks can be feed-forward or feedback networks. In
feed-forward neural networks like the TLP, information propagates
in only one direction, from the input layer to the output layer through the
hidden layer (there can be more than one), and there are no cycles.
Each unit is connected to every unit of the following layer; there
are no connections between units of the same layer or to units
of a previous layer, and there are no connections which skip one (or
more) layer(s). A feed-forward network simply computes a function
of the input values which depends on the distribution of weights (w) of the
incoming connections and on the activation functions of the
units. It has no internal state apart from the
weights of the connections.
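The forward computation just described can be sketched in plain Python. All weights and the sigmoid activation below are illustrative choices, not taken from the article: the point is only that the output is a fixed function of the input and the weights, with no internal state.

```python
import math

def sigmoid(x):
    # a common activation function for perceptron-style units
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    # hidden layer: each unit sums its weighted inputs and applies the activation
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    # output layer: the same rule applied to the hidden activations
    return [sigmoid(sum(wi * hi for wi, hi in zip(row, h))) for row in w_out]

# a TLP with 2 inputs, 3 hidden units, 1 output (arbitrary example weights)
w_hidden = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]
w_out = [[1.0, -1.0, 0.5]]
y = forward([1.0, 0.0], w_hidden, w_out)
```

Running `forward` twice on the same input always yields the same output, since the network has no state apart from the weights.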
In feedback networks (also called 'recurrent networks') the
connections are arbitrary. The Hopfield network (Figure 2) is a fully
connected graph, typically represented as a matrix of weights; it has
bi-directional connections and symmetrical weights [2]. There are no
specific input or output layers; all neurons are both input and output units, and
activation levels are only +1 or -1. These kinds of network, with their very
high redundancy of connections, produce associative memory and
permit the recovery of missing information.
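The recovery of missing information can be sketched with a minimal Hopfield network. The Hebbian training rule and the stored pattern below are standard illustrative choices, not taken from the article: a pattern is stored in symmetric weights, and a corrupted copy settles back onto it.

```python
def train(patterns):
    # Hebbian rule: w[i][j] accumulates p[i]*p[j]; symmetric, zero diagonal
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    # repeatedly update each unit to the sign of its weighted input
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern])
corrupted = [1, -1, 1, -1, -1, -1]  # one unit flipped
restored = recall(w, corrupted)
```

Here `restored` equals the stored pattern: the redundant symmetric connections pull the corrupted state back to the memorized one, which is the associative-memory behavior described above.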
Sometimes the human brain behaves as a feed-forward network
with layers, but it also has many connections that carry information
backward to neurons of a "preceding layer", i.e. the brain is a feedback
network in which there can be many cycles of neurons. Given that
the activation sometimes goes back to the neurons which caused it,
feedback networks (and the brain) have an internal state memorized
as the activation levels of units. In recurrent networks the computation
is much less ordered than in feed-forward networks. Artificial
feedback networks can become unstable or chaotic, or can fluctuate, and
it can be very hard to obtain a stable output from a given input; so it is
a mystery how our brain, as a feedback network, is able to produce its (so
good) computation.
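This internal state can be sketched with a single recurrent update step in the spirit of the Elman networks mentioned in the abstract. All weights below are arbitrary illustrative values: the hidden state at each step depends on the previous hidden state, so two different input histories leave the network in different states even after receiving the same final input.

```python
import math

def recurrent_step(x, h_prev, w_in, w_rec):
    # each hidden unit combines the current input with the previous
    # hidden activations: the network's internal state is h itself
    n = len(h_prev)
    return [math.tanh(w_in[i] * x +
                      sum(w_rec[i][j] * h_prev[j] for j in range(n)))
            for i in range(n)]

w_in = [0.5, -0.4]
w_rec = [[0.3, 0.1], [-0.2, 0.4]]

# two input sequences with the SAME final input but different prefixes
ha = [0.0, 0.0]
for x in [1.0, 0.0]:
    ha = recurrent_step(x, ha, w_in, w_rec)

hb = [0.0, 0.0]
for x in [0.0, 0.0]:
    hb = recurrent_step(x, hb, w_in, w_rec)
```

After both runs the final input was 0.0, yet `ha` differs from `hb`: the earlier input survives in the activation levels, a minimal artificial analogue of the short-term memory discussed here.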
The learning process, in a neural network, is commonly understood
*Corresponding author: Gaetano Licata, Chair of Logic and Philosophy of Science,
Dipartimento di Scienze Umanistiche, University of Palermo, Viale delle Scienze Ed.
12, 90128, Palermo, Italy, Tel: 339-456-8136; E-mail: ninnilicata@yahoo.it
Received February 23, 2015; Accepted March 11, 2015; Published March 13,
2015
Citation: Licata G (2015) Are Neural Networks Imitations of Mind? J Comput Sci
Syst Biol 8: 124-126. doi:10.4172/jcsb.1000179
Copyright: © 2015 Licata G. This is an open-access article distributed under the
terms of the Creative Commons Attribution License, which permits unrestricted
use, distribution, and reproduction in any medium, provided the original author and
source are credited.
Abstract
Artificial neural networks are often understood as a good way to imitate the mind through the web structure of
neurons in the brain, but the very high complexity of the human brain prevents us from considering neural networks as good models
of the human mind; nevertheless, neural networks are good devices for parallel computation. The difference between
feed-forward and feedback neural networks is introduced; the Hopfield network and the multi-layer perceptron are
discussed. On the basis of a very weak isomorphism (not a similarity) between the brain and neural networks, an artificial form of short-
term memory and of recognition in Elman neural networks is proposed.
Are Neural Networks Imitations of Mind?
Gaetano Licata*
Gaetano Licata, Chair of Logic and Philosophy of Science, Dipartimento di Scienze Umanistiche, University of Palermo, Italy
Journal of
Computer Science & Systems Biology