Sofien Chtourou, Mohamed Chtourou, and Omar Hammami

Abstract—Embedded systems must respect stringent real-time constraints. Hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor may result in a cache hit, where the data is available, or a cache miss, where the data must be fetched with an additional delay from an external memory. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to capture the non-linear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor them over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions, and therefore millions of addresses must be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.

Keywords—Address, data set, memory, prediction, recurrent neural network.

I. INTRODUCTION

EMBEDDED systems are widespread and support increasingly complex applications. These embedded software applications are diverse and exhibit varying behavior. This variability comes from the nature of the applications and the data they process, but also from the hardware components composing embedded systems.
This variability runs against stringent real-time constraints and should therefore be smoothed out or removed when possible. Cache memories are small memories used by microprocessors to temporarily store data and code in order to avoid accessing the central memory. A cache memory access from an embedded microprocessor may result in a cache hit, where the data is available, or a cache miss, where the data must be fetched with an additional delay from an external memory. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses.

This paper is organized as follows. In the next section, we present related work on prediction based on artificial neural networks, recurrent neural networks, and on optimizing prediction results. In Section III, we briefly introduce recurrent neural networks applied to predicting time series. Section IV shows prediction results obtained with a single recurrent neural network. In Section V, we propose a hybrid prediction scheme. Finally, in Section VI we conclude.

S. Chtourou is with the National Engineering School of Sfax, Sfax, 3038 Tunisia. He is now with the Ecole Nationale Supérieure de Techniques Avancées (corresponding author; phone: 33-(0)1-45525425; fax: 33-(0)1-45528327; e-mail: chtourou@ensta.fr).
M. Chtourou is with the National Engineering School of Sfax, Sfax, 3038 Tunisia (phone: 216.74.274.088; fax: 216.74.275.595; e-mail: mohamed.chtourou@enis.rnu.tn).
O. Hammami is with the Ecole Nationale Supérieure de Techniques Avancées, Paris, 75739 France (phone: 33-(0)1-45525424; fax: 33-(0)1-45528327; e-mail: hammami@ensta.fr).
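The hit/miss cost asymmetry described above is what makes address prediction worthwhile: a correct prediction turns a slow miss into a fast hit. A minimal sketch of this effect, assuming illustrative latencies, a toy direct-mapped cache, and a hypothetical oracle next-address predictor (none of these figures come from the paper):

```python
# Toy model (not from the paper): how a correct address prediction
# hides miss latency.  Latencies and cache geometry are assumptions.

HIT_CYCLES, MISS_CYCLES = 1, 50   # assumed hit/miss latencies
NUM_LINES = 64                    # assumed direct-mapped cache size

def access_time(trace, prefetch=None):
    """Total cycles for an address trace; `prefetch` maps each accessed
    address to the address predicted next (filled without stalling)."""
    cache = {}   # line index -> stored tag
    total = 0

    def touch(addr, count):
        nonlocal total
        line, tag = addr % NUM_LINES, addr // NUM_LINES
        if cache.get(line) == tag:
            if count:
                total += HIT_CYCLES
        else:
            cache[line] = tag           # fill the line
            if count:
                total += MISS_CYCLES
    for addr in trace:
        touch(addr, count=True)
        if prefetch is not None:
            touch(prefetch(addr), count=False)  # prefetch fills, no stall
    return total

trace = list(range(0, 1024, 4))                 # sequential fetches
no_pref = access_time(trace)                    # every access misses
perfect = access_time(trace, prefetch=lambda a: a + 4)  # oracle predictor
```

With a perfect predictor only the first compulsory miss remains; every later access hits on prefetched data, which is the gap a learned predictor tries to close.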
II. RELATED WORK

Data prefetching is an important research topic in traditional computer architecture [1-9][12], especially for multimedia workloads. However, most proposed techniques follow very simple schemes with very limited adaptivity, and data prefetching remains an open issue in the general case. Our work raises several issues: (1) which neural-network-based technique is the most appropriate for large (millions of elements) time series? (2) What is the maximum number of elements that can be predicted at any time step? (3) What history should be used to predict future memory accesses? To the best of our knowledge, no previous work has addressed neural-network-based prediction on large time series of memory addresses. In our work, we focus on optimizing neural network performance to predict the next instruction addresses.

In order to improve prediction results, many works propose novel architectures for neural-network-based prediction. Parlos et al. [11] propose a novel recurrent architecture based on the multilayer perceptron model with a modified learning algorithm. Owens et al. [28] present a comparative study of several neural prediction architectures, notably the feedforward NAR (Nonlinear AutoRegressive) model and the fully recurrent architecture. Optimizing the neural network architecture alone is a limited solution for learning large and multivariate data sets. In [13], the authors introduce a novel algorithm that trains a neural network to identify chaotic dynamics from a single measured time series.
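The NAR formulation mentioned above treats the address trace as a time series and predicts the next value from a fixed history window, which is exactly the "history" question raised earlier. A minimal sketch of that data preparation step, assuming a window length of 4 and an address-delta encoding (both are our illustrative choices, not the paper's):

```python
# Sketch of NAR-style training-pair construction from an address trace.
# Window length and delta encoding are illustrative assumptions.

def make_nar_pairs(trace, history=4):
    """Return (input_window, target) pairs: predict the next address
    delta from the `history` most recent deltas."""
    deltas = [b - a for a, b in zip(trace, trace[1:])]
    pairs = []
    for t in range(history, len(deltas)):
        pairs.append((deltas[t - history:t], deltas[t]))
    return pairs

# A loop re-executing the same instructions yields a repeating delta
# pattern -- the regularity a recurrent network can learn.
trace = [0, 4, 8, 12, 0, 4, 8, 12, 0, 4, 8, 12]
pairs = make_nar_pairs(trace, history=4)
```

Encoding deltas rather than raw addresses keeps the input range small; a recurrent model additionally carries state across windows instead of relying only on the fixed history.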
In that work, the tested time series present chaotic dynamics, but the length of the different time series is

Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications
World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol:1, No:12, 2007. scholar.waset.org/1307-6892/4048