Chapter 13
Perceptron Learning Rule
13.1 Single Layer Perceptron
A single layer perceptron is the simplest form of neural network. This type of neural
network is used for pattern classification problems that are linearly separable. A single
layer perceptron consists of one input layer with one or more input units and one output
layer with one or more output units. The input neurons are connected to the output
neurons by weights, and a bias is applied to the output (Gkanogiannis and Kalamboukis (2009, 2010)).
13.2 Architecture of Single Layer Perceptron
The structure or architecture of the single layer perceptron is shown in Fig. 13.1. In
Fig. 13.1, X1, X2, …, Xn are the input neurons of the input layer. There exists a
common bias b with input value 1. w1, w2, …, wn are the weights connecting the input
nodes to the output node. Y is the output neuron in the output layer (Zurada 1994).
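In this architecture, the output neuron computes a net input equal to the bias plus the weighted sum of the inputs, and then applies an activation function. The following sketch illustrates this forward pass; the bipolar step activation with threshold theta is one common choice for the perceptron, and the function and variable names here are illustrative, not from the text:

```python
def net_input(x, w, b):
    # y_in = b + sum_i x_i * w_i  (bias b has a fixed input of 1)
    return b + sum(xi * wi for xi, wi in zip(x, w))

def activation(y_in, theta=0.0):
    # Bipolar step function with threshold theta: outputs +1, 0, or -1.
    if y_in > theta:
        return 1
    elif y_in < -theta:
        return -1
    return 0

# Example: two inputs, assumed weights and bias.
y = activation(net_input([1, -1], [0.5, 0.3], 0.1))
```

Here the net input is 0.1 + (1)(0.5) + (-1)(0.3) = 0.3, which exceeds the threshold 0, so the output neuron fires +1.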
13.3 Algorithm of Single Layer Perceptron
The step-by-step algorithm is given below (Sivanandam 2006):
Step 1: Initialize all weights and the bias to zero, i.e., wi = 0 for i = 1, …, n, and b = 0,
where n is the number of input neurons. Choose a learning rate α in the range (0, 1].
Step 2: For each input training vector and target output pair, S : t, do Steps 3–5.
Step 3: Set the activations of the input units: xi = Si, i = 1, …, n.
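The steps listed so far, together with the standard perceptron weight update that completes the rule (when the output y differs from the target t, set wi ← wi + α·t·xi and b ← b + α·t, stopping when an epoch produces no weight change), can be sketched as follows. The default values for alpha, theta, and max_epochs are illustrative assumptions:

```python
def train_perceptron(samples, targets, alpha=0.1, theta=0.0, max_epochs=100):
    """Train a single layer perceptron with one output neuron.

    samples: list of input vectors S; targets: list of bipolar targets t.
    """
    n = len(samples[0])
    w = [0.0] * n              # Step 1: all weights initialized to zero
    b = 0.0                    # Step 1: bias initialized to zero
    for _ in range(max_epochs):
        changed = False
        for s, t in zip(samples, targets):   # Step 2: each pair S : t
            x = list(s)                      # Step 3: x_i = S_i
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                       # update only on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b = b + alpha * t
                changed = True
        if not changed:        # stop: no weight changed in a full epoch
            break
    return w, b
```

For example, training on the bipolar AND function (inputs ±1, target +1 only for (1, 1)) converges in a few epochs, since the problem is linearly separable.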
© Springer Nature Singapore Pte Ltd. 2019
S. Chakraverty et al., Concepts of Soft Computing,
https://doi.org/10.1007/978-981-13-7430-2_13