Multimed Tools Appl
DOI 10.1007/s11042-017-4796-5
Prediction of visual attention with deep CNN
on artificially degraded videos for studies of attention
of patients with Dementia
Souad Chaabouni 1,2 · Jenny Benois-Pineau 1 · François Tison 3 · Chokri Ben Amar 2 · Akka Zemmari 1
Received: 30 September 2016 / Revised: 5 April 2017 / Accepted: 2 May 2017
© Springer Science+Business Media New York 2017
Abstract Studies of the visual attention of patients with dementia, such as Parkinson's Dis-
ease Dementia and Alzheimer's Disease, are a promising avenue for non-invasive diagnostics.
Past research showed that people suffering from dementia are not reactive to
degradations in still images. Attempts are being made to study their visual attention
relative to video content, where delays in their reactions to novelty and "unusual"
novelty of the visual scene are expected. Nevertheless, large-scale screening of the population
is possible only if sufficiently robust automatic prediction models can be built. In medical
protocols, the detection of dementia-related behavior during visual content observation is always
performed in comparison with healthy, "normal control" subjects. Hence, it is a research
question per se to develop automatic prediction models for the specific visual content
used in psycho-visual experiments involving Patients with Dementia (PwD). The difficulty of
such a prediction resides in the very small amount of training data. In this paper, the reactions
of healthy normal control subjects to degraded areas in videos were studied. Furthermore,
in order to build an automatic prediction model for salient areas in intentionally degraded
Souad Chaabouni
souad.chaabouni@u-bordeaux.fr
Jenny Benois-Pineau
benois-p@labri.fr
François Tison
francois.tison@chu-bordeaux.fr
Chokri Ben Amar
chokri.benamar@ieee.org
Akka Zemmari
zemmari@labri.fr
1 LaBRI UMR 5800, University of Bordeaux, 33400 Talence, France
2 REGIM-Lab LR11ES48, University of Sfax, 3029 Sfax, Tunisia
3 CHU de Bordeaux-GH Pellegrin, Bordeaux, France