A rule-based machine vision system for fire detection in aircraft dry bays and engine compartments

Simon Y. Foo*
Department of Electrical Engineering, FAMU-FSU College of Engineering, Florida A and M University, Tallahassee, FL 32310, USA

Abstract

In this paper, a rule-based machine vision approach is applied to detect and categorize hydrocarbon fires in aircraft dry bays and engine compartments. Images for computer analysis are provided by charge-coupled device (CCD) imaging sensors placed inside dry bays and engine compartments. Using a set of heuristics based on statistical measures derived from the histogram and image subtraction analyses of successive image frames, we show that it is possible to detect life-threatening fires and distinguish them accurately from non-fire and non-lethal events in sub-millisecond response time. Specifically, the median, standard deviation, and first-order moment of the histogram data of each image frame are used to confirm the presence or absence of fire. Concurrently, the mean, median, and standard deviation of the image subtraction of two successive frames are used to determine the growth of a fire and subsequently reaffirm its existence. The approach is also tested against false-alarm sources such as flashlights and high-power halogen lights. © 1997 Elsevier Science B.V.

Keywords: Machine vision; Fire detection; Rule-based expert system

1. Introduction

The harsh environment of an aircraft dry bay or engine compartment makes accurate fire detection and response a difficult task. Fires onboard aircraft may be caused by projectile impacts, electrical sparks, etc. The combination of heat, fumes, and oil from the hydraulics, fuel lines, and other equipment inside an aircraft dry bay can account for most false alarms or failures of conventional smoke/heat sensors.
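The per-frame histogram test in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold constants `MEDIAN_MIN` and `STDEV_MIN` are hypothetical values chosen for demonstration, and 8-bit grayscale frames (pixel values 0 to 255) are assumed.

```python
# Sketch of a histogram-based fire test on a single frame.
# MEDIAN_MIN and STDEV_MIN are illustrative assumptions, not the paper's values.
from statistics import median, pstdev

MEDIAN_MIN = 180      # hypothetical: a flame-filled scene pushes the median up
STDEV_MIN = 40        # hypothetical: flames widen the intensity spread

def histogram(pixels, levels=256):
    """Count occurrences of each grey level in an 8-bit frame."""
    h = [0] * levels
    for p in pixels:
        h[p] += 1
    return h

def first_moment(hist):
    """First-order moment of the histogram: the mean grey level."""
    n = sum(hist)
    return sum(level * count for level, count in enumerate(hist)) / n

def looks_like_fire(pixels):
    """Rule-based check on per-frame statistics (illustrative thresholds)."""
    return median(pixels) > MEDIAN_MIN and pstdev(pixels) > STDEV_MIN

frame = [30] * 900 + [250] * 100       # mostly dark frame with a bright patch
print(first_moment(histogram(frame)))  # 52.0
print(looks_like_fire(frame))          # False: the median stays low
```

In this sketch a small bright patch alone does not trigger the rule; the median only rises once bright pixels dominate the frame, which is the intuition behind combining several order statistics rather than a single brightness threshold.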
For example, due to the heat generated by running engines and the lack of proper ventilation, the temperature of blackbodies inside an engine compartment can soar to over 350°F, making it impossible for heat sensors such as infrared sensors to distinguish a normal operating environment from an actual fire. Other conventional fire-detection methods, such as smoke sensors, have response times that are too slow.

Recently, visible-spectrum machine vision systems have been developed to detect and characterize hydrocarbon fires [1]. Visible characteristics of fires such as brightness, color, spectral texture, spectral flicker, and stationarity are used to discriminate them from other visible stimuli. (Note, however, that non-hydrocarbon flames, such as those from alcohol-based agents, will not show up on a visible-spectrum CCD imaging device.) Using a CCD imaging system, the questions are: can we differentiate a hydrocarbon fire from harmless but very bright light sources such as halogen light and sunlight, and, if a fire is detected, can we categorize the fire in terms of its growth pattern? Although not crucial, the latter is desirable. A constant small flickering flame is less likely to be life-threatening than a fast-growing fire, which may engulf the whole engine compartment or dry bay in a few seconds. Furthermore, proper fire characterization would enable an appropriate response in terms of the right amount of ozone-depleting halon or other chlorofluorocarbon (CFC) fire suppressants released into the environment. A real-time, foolproof fire detection and characterization mechanism is therefore an essential part of a fire alarm system, especially in critical missions.

Frame processes, which use information from two or more frames, can be utilized to determine whether a fire is present in an image and the extent of the fire. Frame processes as part of a machine vision system have been used in many industrial applications such as security and quality control [2].
In security applications, frame processes can be used to detect motion and therefore intruders [3,4]. If each frame from a video camera is compared to the previous frame, any movement or change within the field of view of the camera can be detected. The common method of comparison between two sequential video images is subtraction of the perfectly aligned pixels, followed by a thresholding process, and finally a pixel-difference tally. If the tally of pixels that

* Fax: +1 904 487 6479; e-mail: foo@eng.fsu.edu
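The subtraction, thresholding, and tally steps described above can be sketched as follows. This is a minimal illustration assuming two aligned 8-bit grayscale frames stored as flat pixel lists; the constants `DIFF_THRESHOLD` and `MOTION_TALLY_MIN` are hypothetical values, not ones given in the paper.

```python
# Sketch of two-frame comparison: pixel-wise subtraction of perfectly
# aligned pixels, thresholding of the absolute difference, then a tally
# of changed pixels. Threshold values are illustrative assumptions.
DIFF_THRESHOLD = 25     # hypothetical: minimum grey-level change per pixel
MOTION_TALLY_MIN = 50   # hypothetical: changed-pixel count that flags an event

def changed_pixel_tally(prev, curr, thresh=DIFF_THRESHOLD):
    """Count pixels whose absolute difference between frames exceeds thresh."""
    assert len(prev) == len(curr), "frames must be aligned and equal in size"
    return sum(1 for a, b in zip(prev, curr) if abs(a - b) > thresh)

def motion_detected(prev, curr):
    """Flag an event when enough pixels changed between successive frames."""
    return changed_pixel_tally(prev, curr) >= MOTION_TALLY_MIN

prev = [10] * 1000
curr = [10] * 900 + [200] * 100    # a bright region appears in 100 pixels
print(changed_pixel_tally(prev, curr))  # 100
print(motion_detected(prev, curr))      # True
```

Thresholding the per-pixel difference before tallying makes the comparison tolerant of sensor noise: small fluctuations below `DIFF_THRESHOLD` never contribute to the count.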