
IJIRST International Journal for Innovative Research in Science & Technology| Volume 3 | Issue 04 | September 2016

ISSN (online): 2349-6010

Real Time Fire and Smoke Detection using Multi-Expert System for Video-Surveillance Applications
Jesny Antony
PG Student
Department of Computer Science & Information Systems
Federal Institute of Science and Technology, Mookkannoor PO, Angamaly, Ernakulam, Kerala 683577, India

Prasad J. C.
Associate Professor
Department of Computer Science & Engineering
Federal Institute of Science and Technology, Mookkannoor PO, Angamaly, Ernakulam, Kerala 683577, India

Abstract
Fire and smoke detection systems play an important role in surveillance and security systems. These systems are primarily
designed to warn the occupants of a fire so that they may safely evacuate the premises. The detection of smoke enables earlier
detection of fire and helps to reduce the losses caused by such accidents. In this paper, we propose a method that detects fire and
smoke areas by analyzing, in real time, the videos acquired by surveillance cameras. This is achieved using a Multi-Expert
System, in which multiple experts based on color, movement and shape features are evaluated separately and then combined. The
combined result shows a high performance compared with other existing systems. The method was tested using a large number of
fire and smoke videos taken from different datasets as well as from the web, and it gives a high accuracy compared with other
systems.
Keywords: Fire Detection, SIFT Matching, RGB & YUV Color Space, Multi-Expert System
_______________________________________________________________________________________________________

I. INTRODUCTION

Fire detection systems play an important role in security systems. Fire detection systems are primarily designed to warn
occupants of a fire so that they may safely evacuate the premises. Correctly maintained and operating systems are effective and
proven life-saving devices. Failure to take advantage of this early warning, due to poor performance of an automatic fire
detection system, has cost people their lives. Care should therefore be taken with fire detection and other remedial measures. This
is especially important in large buildings and in metropolitan cities, where there are large crowds of people. Systems that are not
properly installed or maintained may cause unwanted detection activations. This has a negative effect on occupants' responses to
genuine detections and as a result downgrades their effectiveness.
A number of detection systems are available to detect fire and smoke. Fire detection by analyzing videos
acquired by surveillance cameras gives better efficiency in the detection process. This scientific effort has focused on improving
the robustness and performance of the proposed approaches, so as to make commercial exploitation possible.
Fire and smoke detection is performed by analyzing several features such as color, shape, movement and others, so a strict
classification of these methods is not possible. Two main classes can be distinguished, depending on the analyzed features: color
based and motion based. The methods using the first kind of features are based on the consideration that a flame, under the
assumption that it is generated by common combustibles such as wood, plastic, paper, or others, can be reliably characterized by
its color, so that the evaluation of the color components in RGB (Red, Green, Blue), YUV (Luminance, Chrominance) or any
other color space is adequately robust to identify the presence of flames. This simple idea inspires several recent methods: for
instance, in some algorithms fire pixels are recognized by an advanced background subtraction technique and a statistical RGB
color model. A set of images is used and a region of the color space is experimentally identified, so that if a pixel belongs to this
particular region, then it can be classified as fire. The main advantage of such algorithms lies in their low computational cost,
allowing the processing of more than 30 frames/s at Quarter Common Intermediate Format (176 × 144) image resolution.
The main problem with the RGB and HSV based approaches is that they are particularly sensitive to changes in brightness,
thus causing a high number of false positives due to the presence of shadows or to different tonalities of red. This problem can
be mitigated by switching to the YUV color space. Here, a set of rules in the YUV space has been experimentally defined to
separate the luminance from the chrominance more effectively than in RGB, so as to reduce the number of false positives
detected by the system. In another algorithm using the YUV color space, the information coming from the YUV components is
combined using a fuzzy logic approach to account for the implicit uncertainties of the rules introduced for thresholding the image.
A probabilistic approach based on YUV has also been exploited, where the thresholding of potential fire pixels is not based on a
simple heuristic but instead on a support vector machine (SVM), able to provide a good generalization without requiring problem
domain knowledge. Although this algorithm is less sensitive to variations in the luminance of the environment, its main
drawback compared with other color based approaches lies in the high computational cost required as soon as the dimensions of
the support vectors increase.
A video is a sequence of frames. Frame-to-frame changes are analyzed, and the evolution of a set of features
based on color, area size, surface coarseness, boundary roughness, and skewness is evaluated by a Bayesian classifier. The wide
set of considered features allows the system to consider several aspects of fire, related to both color and appearance variations,
thus increasing the reliability of the detection. The thresholding on the color, performed in the RGB space, is improved by a
multiresolution 2-D wavelet analysis, which evaluates both the energy and the shape variations to further decrease the number of
false positive events. In particular, the shape variation is computed by evaluating the ratio between the perimeter and the area of
the minimum bounding box enclosing the candidate fire pixels. This last strategy is as simple and intuitive as it is promising if the
scene is populated by rigid objects, such as vehicles. On the other hand, it is worth pointing out that the shape associated with
nonrigid objects, such as people, is highly variable in consecutive frames; consider, for instance, the human arms, which may
contribute to significantly modify the size of the minimum bounding box enclosing the whole person. This implies that
the disordered shape of a person may be confused with the disordered shape of a fire, thus consistently increasing the number
of false positives detected by the system.
Another important idea, combining several classifiers to obtain a more reliable decision, has been generalized and gives
good performance. Fire-colored pixels are identified using a hidden Markov model, temporal wavelet analysis is used for
detecting the pixel flicker, spatial wavelet analysis is used for the non-uniform texture of flames, and finally, wavelet analysis of
the object contours is used to detect the irregular shape of the fire. The decisions taken by the above-mentioned algorithms are
linearly combined by a set of weights that are updated with an LMS strategy each time a ground-truth value is available. This
method has the advantage that, during its operation, it can exploit occasional feedback from the user to improve the weights of the
combination function. However, a drawback is the need to properly choose the learning rate parameter to ensure that the update
of the weights converges and that it does so in a reasonable time.
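As a rough illustration of this kind of LMS weight update (the learning rate, the number of sub-detectors and the decision scores below are hypothetical, not taken from the cited work), a minimal sketch in Python could look like this:

```python
import numpy as np

def lms_update(weights, decisions, ground_truth, eta=0.05):
    """One LMS step: nudge the combination weights toward the ground truth.

    weights      : current weight vector, one weight per sub-detector
    decisions    : scores emitted by the sub-detectors for the current frame (0..1)
    ground_truth : 1.0 if fire was actually present, 0.0 otherwise
    eta          : learning rate; too large a value may prevent convergence
    """
    decisions = np.asarray(decisions, dtype=float)
    combined = float(np.dot(weights, decisions))   # linear combination of the experts
    error = ground_truth - combined                # deviation from the ground truth
    return weights + eta * error * decisions       # classic LMS correction

# Hypothetical example: four detectors (color HMM, temporal flicker,
# spatial texture, contour irregularity) each vote between 0 and 1.
w = np.full(4, 0.25)
w = lms_update(w, decisions=[0.9, 0.7, 0.4, 0.6], ground_truth=1.0)
print(w)
```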

II. RELATED WORKS

Color based detection is done by comparing the color of fire pixels with the color of candidate pixels. In [1], T. Celik, H. Demirel, H.
Ozkaramanli, and M. Uyguroglu propose a real-time fire detector that combines foreground object information with
color pixel statistics of fire. Foreground information is extracted using an adaptive background subtraction algorithm which
segments the fire candidate pixels from the background. A simple adaptive background model of the scene is generated using
three Gaussian distributions, where each distribution corresponds to the pixel statistics in the respective color channel. The
candidates are then verified with a statistical color model, generated by statistical analysis of sample images, to detect the fire.
The algorithm has a 98.89% detection rate and a very low computational cost, but the system is more complex and smoke is not
detected. It is also sensitive to changes in brightness, causing a high number of false positives due to the presence of shadows or
to different tonalities of red.
In [2], Y.-H. Kim, A. Kim, and H.-Y. Jeong propose a method in which wireless sensor networks are used for monitoring forest
fires, and they developed the FireBug system. These networks use mobile agents to find the fire source, which gives the network
greater flexibility. The main steps of the system include the detection of moving pixels or regions in the current frame of a video,
color detection of the moving pixels, and blob analysis. An application layer gateway delivers the sensor data from the link layer
of the wireless sensor network protocol to the network layer of the IP protocol over the Internet. The system has a low
computational cost, but fire detection on wireless sensor networks is still at the laboratory stage. These methods are also sensitive
to changes in brightness, causing a high number of false positives due to the presence of shadows or to different tonalities of red.
In [3], T. Çelik and H. Demirel propose a rule based generic color model for flame pixel classification. The proposed
algorithm uses the YCbCr color space to separate the luminance from the chrominance more effectively than color spaces such as
RGB or normalized rgb. The performance of the proposed algorithm is tested on two sets of images, one of which contains fire
and the other containing fire-like regions. The method uses the YCbCr color space to construct a generic chrominance model for
flame pixel classification. The steps in the system are: initially pick sample images and segment their fire pixels with
green color; then calculate the mean values of the R, G, and B planes in the segmented fire regions of the original images. It is
clear that, on average, fire pixels show the characteristic that their R intensity value is greater than G and their G intensity value is
greater than B. The proposed method achieves up to a 99% fire detection rate with a lower false alarm rate. The
disadvantage is illumination dependence, meaning that if the illumination of the image changes, the fire pixel classification rules
do not perform well.
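A minimal sketch of checking this channel-ordering observation on hand-segmented fire regions might look like the following; the image and mask arguments are placeholders, and this is only an illustration of the property described above, not the authors' code:

```python
import numpy as np

def fire_channel_ordering(image_rgb, fire_mask):
    """Check the R > G > B ordering on manually segmented fire regions.

    image_rgb : H x W x 3 uint8 array
    fire_mask : H x W boolean array marking the hand-segmented fire pixels
    Returns the per-channel means and whether mean(R) > mean(G) > mean(B).
    """
    fire_pixels = image_rgb[fire_mask].astype(float)   # N x 3 matrix of fire pixels
    mean_r, mean_g, mean_b = fire_pixels.mean(axis=0)
    return (mean_r, mean_g, mean_b), mean_r > mean_g > mean_b

# Hypothetical usage:
# means, ordered = fire_channel_ordering(sample_image, segmented_fire_mask)
```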
In [4], T. Çelik, H. Ozkaramanli, and H. Demirel propose a fuzzy logic enhanced generic color model for fire pixel classification.
The model uses the YCbCr color space to separate the luminance from the chrominance more effectively than color spaces such as
RGB or normalized rgb. Concepts from fuzzy logic are used to replace existing heuristic rules and make the classification more
robust in effectively discriminating fire and fire-like colored objects. Further discrimination between fire and non-fire pixels is
achieved by a statistically derived chrominance model expressed as a region in the chrominance plane. The performance of the
model is tested on two large sets of images, one of which contains fire while the other contains no fire but has regions similar in
color to fire. The model achieves up to a 99.00% correct fire detection rate with a 9.50% false alarm rate. The classification is
more robust in effectively discriminating fire and fire-like colored objects, but the system is more complex and computationally
expensive.
In [5], Y. Habiboglu, O. Günay, and A. E. Çetin propose a novel descriptor based on spatio-temporal properties.
First, a set of 3-D blocks is built by dividing the image into 16 × 16 squares and considering each square for a number of frames
corresponding to the frame rate. The blocks are quickly filtered using a simple color model of the flame pixels. Then, on the
remaining blocks, a feature vector is computed using the covariance matrix of 10 properties related to color and to spatial and
temporal derivatives of the intensity. Finally, an SVM classifier is applied to these vectors to distinguish fire from non-fire
blocks. The main advantage of this choice is that the method does not require background subtraction and can thus be
applied also to moving cameras. However, since the motion information is only considered through the temporal
derivatives of pixels, without an estimation of the motion direction, the system, when working in non-sterile areas, may generate
false positives due to flashing red lights.
In [6], B. C. Ko, K.-H. Cheong, and J.-Y. Nam propose a new vision sensor-based fire-detection method for an early-warning
fire-monitoring system. First, candidate fire regions are detected using methods like the detection of
moving regions and fire-colored pixels. Next, since fire regions generally have a higher luminance contrast than neighboring
regions, a luminance map is made and used to remove non-fire pixels. Thereafter, a temporal fire model with wavelet
coefficients is created and applied to a two-class support vector machine (SVM) classifier with a radial basis function (RBF)
kernel. The SVM classifier is then used for the final fire-pixel verification. To detect fire pixels, the proposed method uses the red
channel threshold, which is the major component in an RGB image of fire flames, together with saturation values. After
determining the candidate fire pixels, the non-fire pixels are removed by analyzing the frame difference between two consecutive
images. Moving objects that have a similar color to a real fire region are removed using temporal luminance variations. The
remaining pixels are classified as fire regions or non-fire regions using the SVM. The system uses relatively low-cost equipment
and has a fast response time, with fast confirmation through the surveillance monitor. Problems due to the temporal variation of
pixels are also eliminated, which makes it more robust to noise, such as smoke, and to subtle differences between consecutive
frames. However, fire regions are missed in some cases and the computation time is high.
In [7], C. Yu, Z. Mei, and X. Zhang propose an algorithm based on foreground image accumulation and the optical flow of the
video. Accumulation images are calculated from the foreground images, which are extracted using a frame differential method.
The flame regions are recognized by a statistical model built from the foreground accumulation image, while the optical flow is
calculated and a motion feature discriminating model is used to recognize smoke regions. The algorithm mainly handles three
cases: fire with flame and no smoke, fire with smoke and no flame, and fire with both flame and smoke. In this system, foreground
image accumulation helps to suppress the noise in the video. Most disturbances, like lights and other fire-colored objects,
can be differentiated from flames efficiently using the foreground accumulation image, except when lights are turned on and off
at a frequency similar to that of flames.
Color based evaluation helps to identify fire pixels, but it is not very accurate in highly populated areas, because the pixels are
selected only on the basis of their color and there can be objects that match the color of fire. If the movement feature is added
along with the color based evaluation, the detection becomes more accurate. This is based on the assumption that fire pixels are of
a continuously moving nature, so each frame is compared with its previous and next frames for the analysis, making the fire
detection a little more accurate.
In [8], Xiaojun Qi and Jessica Ebert propose an algorithm that not only uses the color and movement attributes of fire, but also
analyzes the temporal variation of fire intensity, the spatial color variation of fire, and the tendency of fire to be grouped around a
central point. A cumulative time derivative matrix is used to detect areas with a high-frequency luminance flicker. The fire color
of each frame is aggregated in a cumulative fire color matrix using a new color model which considers both the pigmentation
values of the RGB color and the saturation and intensity properties of the HSV color space. A region merging algorithm is then
applied to merge the nearby fire-colored moving regions to eliminate false positives. The spatial and temporal color
variations are finally applied to detect fires. The proposed system is effective in detecting all types of uncontrolled fire in various
situations, lighting conditions, and environments. It also performs better than the peer system, with higher true positives and true
negatives and lower false positives and false negatives. However, because the system does not consider shape variations, its
efficiency is reduced.
In [9], A. Rahman and M. Murshed propose a system for detecting the presence of multiple dynamic textures in an image
sequence by establishing a correspondence between the feature space of dynamic textures and that of their mixture in an image
sequence. Image sequences of smoke, fire, etc. are known as dynamic textures. The accuracy of the proposed technique is both
analytically and empirically established, with detection experiments yielding 92.5% average accuracy on a diverse set of dynamic
texture mixtures in synthetically generated as well as real-world image sequences. The method is computationally inexpensive.
This feature-based detection method, when coupled with an efficient segmentation method, will facilitate the deployment of the
recognition process in real time. The proposed technique does not prohibit using global motion compensation to work with
sequences captured by a moving camera.
In [10], B. U. Töreyin, Y. Dedeoglu, U. Güdükbay, and A. E. Çetin propose a novel method to detect fire and/or flames in real time
by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color
clues, flame and fire flicker are detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame
boundaries is detected by performing a temporal wavelet transform. Color variations in flame regions are detected by computing
the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity
of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. This drastically reduces
the false alarms issued for ordinary fire-colored moving objects compared to methods using only motion and color clues. The
algorithm not only uses color and temporal variation information, but also checks flicker in flames using a 1-D temporal wavelet
transform and color variation in fire-colored moving regions using a 2-D spatial wavelet transform. Methods based only on
color information and ordinary motion detection may produce false alarms in real scenes where no fires are taking place.
In [11], Pasquale Foggia, Alessia Saggese, and Mario Vento propose real time fire detection by analyzing videos acquired
by surveillance cameras. Two main novelties are introduced. First, complementary information, based on color, shape
variation, and motion analysis, is combined by a multi-expert system. Second, a novel descriptor based on a bag-of-words
approach is proposed for representing motion. The system includes different stages: moving object
detection, background subtraction, connected component labeling analysis, blob evaluation and the MES classifier (decision
making). Blob evaluation is done using different features such as color, shape and movement. The main advantages of the
proposed approach are real time fire detection, better performance and low computational cost. However, smoke detection is
not performed in this work.

III. PROPOSED WORK

An overview of the proposed method is shown in Fig. 1. Moving object detection is the first step and is done using an
adaptive background subtraction algorithm. Once the moving objects are detected in the scene, a model of the background is
maintained and an updated model is created at each step. Background updating is performed so as to deal with the changes
occurring in the scene. Finally, the blobs are detected and finalized. Blobs are portions of the frame, each associated with at least
one object, and are obtained by connected component labeling analysis.
Once the blobs are obtained, they are the input to the different experts used in the detection process. The three
experts are evaluated separately and their results are given to the multi-expert system. The main features of fire as well as
smoke are color, shape and movement, and these features are used for the analysis. The experts are: 1. Color Evaluation (CE),
2. Movement Evaluation (ME), 3. Shape Evaluation (SE). The final decision is taken by the multi-expert system by analyzing
the outputs of the different experts.

Fig. 1: Block diagram of the proposed method

Moving Object Detection


Moving object detection is performed by an adaptive background subtraction algorithm [12]. Moving objects are detected first,
based on the fact that fire and smoke are continuously moving in nature, so only the portions of the scene that change need to be
analyzed. Background subtraction mainly aims to detect the changes in the image sequence, that is, detecting the foreground to
separate the changes taking place in the foreground from the background.
The continuously moving, dynamic nature of fire and smoke is used here to detect the portions of fire. So, in the first step, the
adaptive background subtraction method is used to obtain the regions of moving objects, which are candidates for the multi-expert
evaluation. According to this method, a pixel is moving if:
|I(x, y, n) - I(x, y, n-1)| > Th(x, y, n)    (1)
Where I(x, y, n) is the intensity of pixel (x, y) in the nth frame and I(x, y, n-1) is the intensity of pixel (x, y) in the (n-1)th frame.
Th(x, y, n) is an adaptive threshold updated according to:
Th(x, y, n+1) = u · Th(x, y, n) + (1 - u) · (r · |I(x, y, n) - B(x, y, n)|),  if pixel (x, y) is stationary
Th(x, y, n+1) = Th(x, y, n),  if pixel (x, y) is moving    (2)
Where r is a real number greater than one, the update parameter u is a positive number close to one, and B(x, y, n) is the
background image. The value of r is set to 1.2 and u is initially set to 0.5. The initial threshold Th(x, y, 0) is set to a pre-determined
non-zero value; in our system, it is set to 0.5 for the entire scene.
An adaptive background is estimated as:
B(x, y, n+1) = u · B(x, y, n) + (1 - u) · I(x, y, n),  if pixel (x, y) is stationary
B(x, y, n+1) = B(x, y, n),  if pixel (x, y) is moving    (3)
Where B(x, y, n) is the intensity of the estimated background at position (x, y) for frame n. The update parameter u is a positive
real number close to one and is set to 0.5 in our system. Initially, B(x, y, 0) is set to the first image frame I(x, y, 0). Finally, the
estimated background image is subtracted from the current image to detect moving regions:
|I(x, y, n) - B(x, y, n)| > Th(x, y, n)    (4)
In this way, the candidate pixels for further evaluation are found. All the non-stationary pixels are taken as candidates for the
evaluation; they form the blobs that are analyzed by the multi-expert system.
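A minimal sketch of one update step of this adaptive background subtraction, written in Python with NumPy, is given below. It assumes grayscale frames normalized to [0, 1] and combines conditions (1) and (4) with a logical AND, which is one reasonable reading of the text; the parameter values follow the ones stated above (r = 1.2, u = 0.5, initial threshold 0.5).

```python
import numpy as np

def update_background_model(frame, prev_frame, background, threshold, u=0.5, r=1.2):
    """One step of the adaptive background subtraction of equations (1)-(4).

    frame, prev_frame : current and previous grayscale frames, float arrays in [0, 1]
    background        : current background estimate B(x, y, n)
    threshold         : current adaptive threshold Th(x, y, n)
    Returns the moving-pixel mask and the updated background and threshold.
    """
    # Eq. (1): a pixel changed significantly since the previous frame, and
    # Eq. (4): it also differs significantly from the estimated background.
    moving = (np.abs(frame - prev_frame) > threshold) & \
             (np.abs(frame - background) > threshold)
    stationary = ~moving

    # Eq. (3): update the background only at stationary pixels.
    new_background = np.where(stationary,
                              u * background + (1.0 - u) * frame,
                              background)

    # Eq. (2): update the threshold only at stationary pixels.
    new_threshold = np.where(stationary,
                             u * threshold + (1.0 - u) * (r * np.abs(frame - background)),
                             threshold)
    return moving, new_background, new_threshold

# Initialization as described in the text: B(x, y, 0) = I(x, y, 0), Th = 0.5 everywhere.
# first_frame = ...                       # placeholder for actual video I/O
# background = first_frame.copy()
# threshold = np.full_like(first_frame, 0.5)
```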
Color Evaluation
Color evaluation plays an important role in the detection of both fire and smoke, since both have characteristic colors that can be
analyzed. For fire, this expert evaluates the color in the YUV color space. YUV has been widely used in this kind of
analysis, since it separates the luminance from the chrominance and so is less sensitive to changes in brightness.
This expert analyses the blobs based on six different rules defined in the YUV color space. In the RGB color space, a fire pixel
should satisfy the condition R(x, y) > G(x, y) > B(x, y). In the YUV color space, the rules check the values of the chrominance and
luma components: chrominance is the signal used to convey the color information of the picture, and luma represents the
brightness of the image.
The six rules used for the color evaluation are:
r1: Y(x, y) > U(x, y)
r2: V(x, y) > U(x, y)
r3: Y(x, y) > (1/N) · Σ_{k=1..N} Y(x_k, y_k)
r4: U(x, y) < (1/N) · Σ_{k=1..N} U(x_k, y_k)
r5: V(x, y) > (1/N) · Σ_{k=1..N} V(x_k, y_k)
r6: |V(x, y) - U(x, y)| ≥ c
where N is the total number of pixels in the image.
All the pixels that satisfy all of the above rules are shortlisted as fire pixels. The rules are combined with an AND operation, and
the resulting blobs are given to the multi-expert system.
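The following sketch applies these six rules to a whole frame. The RGB-to-YUV conversion uses the standard BT.601 coefficients, the offset of 128 on U and V and the threshold c are illustrative choices, and the direction of each inequality follows the reconstruction of the rules given above rather than any code from the paper.

```python
import numpy as np

def fire_color_mask(frame_rgb, c=40.0):
    """Apply the six YUV color rules to every pixel and return a boolean fire mask.

    frame_rgb : H x W x 3 uint8 frame.
    c         : chrominance-difference threshold for rule r6 (illustrative value).
    """
    rgb = frame_rgb.astype(float)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Standard BT.601 RGB -> YUV conversion (U and V shifted to be non-negative).
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    U = -0.147 * R - 0.289 * G + 0.436 * B + 128.0
    V = 0.615 * R - 0.515 * G - 0.100 * B + 128.0

    y_mean, u_mean, v_mean = Y.mean(), U.mean(), V.mean()

    mask = (Y > U)                    # r1
    mask &= (V > U)                   # r2
    mask &= (Y > y_mean)              # r3: brighter than the frame average
    mask &= (U < u_mean)              # r4: lower U than the frame average
    mask &= (V > v_mean)              # r5: higher V than the frame average
    mask &= (np.abs(V - U) >= c)      # r6: strong chrominance separation
    return mask
```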
Movement Evaluation
Movement based evaluation is performed using SIFT matching. The algorithm takes two adjacent frames and performs matching
between them. The matching is controlled by the parameter distRatio: it keeps only the matches for which the ratio of vector
angles from the nearest to the second nearest neighbor is less than distRatio. Our algorithm uses 0.6 as distRatio, i.e., a match is
accepted if its distance is less than distRatio times the distance to the second closest match. The algorithm draws lines
connecting the matched key points.
The frequently changing nature of fire and smoke behavior is exploited here. An example of how the SIFT matching looks
is shown in Fig. 2.
After performing SIFT matching, we obtain a number of matched key points. These points are given to the classifier.
The main assumption used for designing the classifier is that the obtained feature vector is different for the two
classes: in the presence of fire, the movement is disordered, so the occurrences of the words are rather homogeneously
distributed.
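A minimal sketch of this SIFT matching step with the 0.6 ratio test, using OpenCV (which exposes SIFT as cv2.SIFT_create in versions 4.4 and later), could look like the following; the function name and the visualisation hint are mine, not the paper's:

```python
import cv2

def match_adjacent_frames(prev_gray, curr_gray, dist_ratio=0.6):
    """Match SIFT key points between two adjacent grayscale frames.

    A match is accepted only if its distance is less than dist_ratio times the
    distance to the second closest match (Lowe's ratio test), as in the text.
    Returns the accepted matches together with the key points of both frames.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)   # two nearest neighbours per descriptor

    good = [pair[0] for pair in candidates
            if len(pair) == 2 and pair[0].distance < dist_ratio * pair[1].distance]
    return good, kp1, kp2

# The matched key points can be visualised with cv2.drawMatches, which draws the
# lines connecting matched key points as in Fig. 2.
```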

Fig. 2: SIFT matching

The measure of the homogeneity hm of the histogram H is given as:
hm = 1 - max(H) / Σ_{k=1..|H|} h_k

If this homogeneity value is greater than a threshold, then the input is classified as fire and those pixels are given
to the multi-expert system for further evaluation. The shape and movement evaluations are the same for fire and for smoke,
because the features based on movement and shape are shared by both.
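A direct transcription of this homogeneity measure, assuming the bag-of-words histogram is available as an array of word counts (the threshold in the comment is illustrative):

```python
import numpy as np

def histogram_homogeneity(hist):
    """Compute hm = 1 - max(H) / sum(H) for a bag-of-words histogram H.

    A disordered (fire-like) movement spreads the occurrences across many words,
    so no single bin dominates and hm stays close to 1.
    """
    hist = np.asarray(hist, dtype=float)
    total = hist.sum()
    if total == 0:
        return 0.0                      # no matched key points -> no evidence of fire
    return 1.0 - hist.max() / total

# Illustrative use with a hypothetical threshold:
# is_fire = histogram_homogeneity(word_counts) > 0.7
```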
Shape Evaluation
Shape based evaluation is based on the assumption that the shape of the flame, like its movement, changes continuously. It
analyzes the change in the shape of a blob between two consecutive frames.
For each blob, the algorithm computes the perimeter-area ratio, which is an indicator of shape complexity:
r_t = P_t / A_t
Where P_t is the perimeter of the blob and A_t is the area of the minimum bounding box enclosing it.
The shape variation sv_t is evaluated by comparing the shape measure computed in frame t with the one computed at the
previous frame (t-1):
sv_t = |r_t - r_(t-1)| / r_t

The score sv_t is analyzed, and if its value is greater than a given threshold, the class fire is assigned to that particular blob,
which is then given to the multi-expert system for further analysis.
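A sketch of these two shape measures using OpenCV contour utilities is shown below; keeping only the largest external contour of each blob mask and using the axis-aligned bounding box are my simplifying assumptions, not details stated in the paper.

```python
import cv2
import numpy as np

def shape_ratio(blob_mask):
    """Perimeter-to-area ratio r_t = P_t / A_t for a single blob.

    P_t is the perimeter of the blob contour; A_t is the area of the (axis-aligned)
    bounding box that encloses it, as an approximation of the text's description.
    """
    contours, _ = cv2.findContours(blob_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)    # keep the dominant contour
    perimeter = cv2.arcLength(contour, True)
    x, y, w, h = cv2.boundingRect(contour)
    return perimeter / float(w * h) if w * h > 0 else 0.0

def shape_variation(r_curr, r_prev):
    """sv_t = |r_t - r_(t-1)| / r_t; large values indicate a rapidly changing shape."""
    return abs(r_curr - r_prev) / r_curr if r_curr > 0 else 0.0
```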
Multi-Expert System
The multi-expert system is the decision maker of the complete system. It makes the decision about the detection of fire
and/or smoke in the video by combining and analyzing the results of all the experts: color, movement and shape.
Among these, the color evaluation has the highest accuracy compared with the other two experts. In some
videos, shape evaluation may give a very low accuracy; in that case the movement and color evaluators are analyzed to take
the decision.
Analyzing the different methods available for fire detection and other detection processes, the most robust is the use of
combined classifiers with a weighted voting rule, in which each expert gets the chance to express its vote. Suppose both the CE
and the ME classify the blob b as fire and that the percentage of fires correctly detected on the training set is 0.8 and 0.7 for the
two experts, respectively. Then, the votes of the two experts for the class fire will be weighted 0.8 and 0.7.
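A minimal sketch of such a weighted voting rule is given below; the weight of 0.6 assigned to the shape expert and the dictionary-based interface are hypothetical, and only the 0.8 and 0.7 weights come from the example above.

```python
from collections import defaultdict

def weighted_vote(votes, reliabilities):
    """Combine per-expert class votes using the weighted voting rule.

    votes         : mapping expert name -> predicted class ("fire" or "no_fire")
    reliabilities : mapping expert name -> detection rate measured on the training set
    Returns the class whose accumulated weight is largest.
    """
    score = defaultdict(float)
    for expert, predicted_class in votes.items():
        score[predicted_class] += reliabilities[expert]   # each vote counts as its weight
    return max(score, key=score.get)

# Example from the text: CE and ME both say "fire" with weights 0.8 and 0.7,
# while a hypothetically weighted SE (0.6) disagrees -> the blob is classified as fire.
decision = weighted_vote(
    votes={"CE": "fire", "ME": "fire", "SE": "no_fire"},
    reliabilities={"CE": 0.8, "ME": 0.7, "SE": 0.6},
)
print(decision)   # -> "fire"
```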

IV. EXPERIMENTAL RESULTS AND ANALYSIS

The fire and smoke detection method was tested on a number of videos taken from the MIVIA dataset, which is publicly
available, as well as on videos collected from the web and manually created ones. The MIVIA dataset contains videos of fire and
smoke under different conditions and in different places. The complete system was implemented as a MATLAB 2014 prototype.
Experimental Results
Most of the methods available for fire and smoke detection work on images rather than on video, and some of the methods
available for video use only the color feature for the detection process. Pasquale Foggia, Alessia Saggese, and Mario Vento [11]
implemented a method for real time fire detection in video-surveillance applications using a combination of experts based on
color, shape, and motion. That method has a comparatively high performance with respect to all the other methods. It uses three
kinds of experts for deciding whether fire is present, and it takes the decision using a weighted voting rule, in which the decision
from each expert is given to a multi-expert system and the combined decision is taken by the multi-expert system.
In our system, the detection of fire and smoke is performed using multiple experts based on color, movement and shape.
The system produces different outputs: presence of fire, presence of smoke, and presence of both fire and smoke.
The system is tested using a number of fire and smoke videos as well as videos that contain neither. Figure 3(a)
shows a single frame of a video that has both smoke and fire in the same frame. Figure 3(b) shows the frame in the YUV color
space; that is, for fire detection we convert our RGB frames to the YUV color space. YUV has been widely used in this kind of
analysis, since it separates the luminance from the chrominance and so is less sensitive to changes in brightness.
Initially the system detects the moving regions in the given video. Moving region detection is done using the adaptive
background subtraction algorithm, which detects the regions that will further undergo the different expert evaluations. From
Fig. 3(b), the moving objects are detected; these are termed blobs and are obtained by the adaptive background subtraction
algorithm. The blobs that are detected are shown in Fig. 4.

Fig. 3(a): A frame in the video; (b): the frame in the YUV color space

Fig. 4: Blobs detected after adaptive background subtraction algorithm.

By this step, a number of blobs are identified. Each blob, containing at least one object, is obtained by
connected component labeling analysis. Figures 5(a), (b) and (c) show the blobs detected in a frame of the given
video after CE, ME and SE respectively. After all the experts are evaluated individually, the combination of all the experts is
performed; among them, the result of the color evaluation gives the best accuracy. Figure 5(d) shows the blobs detected in a
frame by combining the results from the different experts.

Fig. 5(a): Blobs after color evaluation; (b): blobs after movement evaluation; (c): blobs after shape evaluation; (d): combined blobs after multi-expert evaluation.

After obtaining the final blobs, the portions of the frame that contain blobs indicate the presence of fire in those areas. Those
portions are boxed for better visibility to indicate the presence of fire, so that security measures can be taken. The combined
output after fire detection is shown in Fig. 6(a).
Similarly, after obtaining the final blobs, the portions of the frame containing blobs that match the smoke features indicate the
presence of smoke in those areas. Those portions are shown in Fig. 6(b). In both conditions, smoke with fire or smoke alone,
security measures can be taken.
Up to Fig. 6, the figures show the different stages of the detection of fire and smoke in a single video. Fig. 7 shows a frame of a
video in which there is only fire.

Fig. 6(a): Combined output of fire detection; (b): Combined output of smoke detection

Fig. 7: Combined output when there is no smoke to detect

Experimental Analysis
The analysis is done on different videos from the MIVIA dataset and a stock footage dataset, and fire and non-fire videos that
were collected are also used. The videos include smoke-only videos, fire videos and fire-and-smoke videos.
Among the three experts considered (CE, ME and SE), the best one is the CE, which achieves a very promising performance on
the considered dataset (accuracy = 83.87% and false positives = 29.41%). In some other systems, the number of false
positives is about 31%. On the other hand, we can also note that the expert ME, introduced for the first time in this paper for
identifying the disordered movement of fire, proves to be very effective: we obtain 71.43% accuracy and 53.33% false
positives.
The PSNR block computes the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a quality
measurement between the original and a compressed image: the higher the PSNR, the better the quality of the compressed or
reconstructed image. The Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR) are the two error metrics used to
compare image compression quality. The MSE represents the cumulative squared error between the compressed and the original
image, whereas the PSNR represents a measure of the peak error. The lower the value of the MSE, the lower the error.
To compute the PSNR, the block first calculates the mean-squared error using the following equation:
MSE = (1 / (M · N)) · Σ_{M,N} [I1(m, n) - I2(m, n)]²
In the above equation, M and N are the number of rows and columns in the input images, respectively. Then the block
computes the PSNR using the following equation:
PSNR = 10 · log10(R² / MSE)
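The two formulas above translate directly into the following short sketch, where the peak value R is taken to be 255 under the assumption of 8-bit images (the original text does not state R explicitly):

```python
import numpy as np

def mse(img1, img2):
    """Mean squared error between two equally sized images (equation above)."""
    diff = img1.astype(float) - img2.astype(float)
    return np.mean(diff ** 2)           # averages over the M*N pixels

def psnr(img1, img2, peak=255.0):
    """PSNR = 10 * log10(R^2 / MSE); peak (R) is 255 for 8-bit images."""
    error = mse(img1, img2)
    if error == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10((peak ** 2) / error)
```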
The PSNR values computed for the videos we tested are shown in Table 1 below.
Table 1
PSNR Value Computation

The time taken for the execution of each video differs from case to case, depending on the frame size, the number of frames and
the frame rate. A description of each video and its execution details is given in Table 2 below.
Table 2
Description of videos and its execution details


V. CONCLUSION

Fire and smoke detection systems are primarily designed to warn occupants of a fire so that they may safely evacuate the premises.
Well maintained and correctly operating systems are effective and can reduce the losses due to fire. The proposed
method detects fires by analyzing videos acquired by surveillance cameras. The system combines complementary information,
based on color, shape variation, and motion analysis, through a multi-expert system. The performance and efficiency of this
method are high, because it combines different aspects of fire, such as color, shape and movement, and uses a multi-expert
system to take the decision. This plays an important role in industrial areas and in large buildings, where fire cannot be
easily detected. Even when flames are not visible, the system can detect the smoke, so the required prevention can still be taken.
So the combined real time automatic detection of fire and smoke by analyzing surveillance videos plays an important role in the
area of security.

REFERENCES
[1] T. Celik, H. Demirel, H. Ozkaramanli, and M. Uyguroglu, "Fire detection using statistical color model in video sequences", J. Vis. Commun. Image
Represent., vol. 18, no. 2, pp. 176-185, Apr. 2007.
[2] Y.-H. Kim, A. Kim, and H.-Y. Jeong, "RGB color model based the fire detection algorithm in video sequences on wireless sensor network", J. Distrib.
Sensor Netw., vol. 2014, Apr. 2014, Art. ID 923609.
[3] T. Çelik and H. Demirel, "Fire detection in video sequences using a generic color model", Fire Safety J., vol. 44, no. 2, pp. 147-158, Feb. 2009.
[4] T. Çelik, H. Ozkaramanli, and H. Demirel, "Fire pixel classification using fuzzy logic and statistical color model", in Proc. IEEE Int. Conf. Acoust., Speech
Signal Process. (ICASSP), vol. 1, Apr. 2007, pp. I-1205 to I-1208.
[5] Y. Habiboglu, O. Günay, and A. E. Çetin, "Covariance matrix-based fire and flame detection method in video", Mach. Vis. Appl., vol. 23, no. 6, pp.
1103-1113, Nov. 2012.
[6] B. C. Ko, K.-H. Cheong, and J.-Y. Nam, "Fire detection based on vision sensor and support vector machines", Fire Safety J., vol. 44, no. 3, pp. 322-329,
Apr. 2009.
[7] C. Yu, Z. Mei, and X. Zhang, "A real-time video fire flame and smoke detection algorithm", Proc. Eng., vol. 62, pp. 891-898, 2013.
[8] X. Qi and J. Ebert, "A computer vision based method for fire detection in color videos", Int. J. Imag., vol. 2, no. S09, pp. 22-34, 2009.
[9] A. Rahman and M. Murshed, "Detection of multiple dynamic textures using feature space mapping", IEEE Trans. Circuits Syst. Video Technol., vol. 19,
no. 5, pp. 766-771, May 2009.
[10] B. U. Töreyin, Y. Dedeoglu, U. Güdükbay, and A. E. Çetin, "Computer vision based method for real-time fire and flame detection", Pattern Recognit. Lett., vol.
27, no. 1, pp. 49-58, Jan. 2006.
[11] P. Foggia, A. Saggese, and M. Vento, "Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts
Based on Color, Shape, and Motion", IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 9, Sep. 2015.
[12] A. A. Narwade and V. A. Chakkarwar, "Smoke detection in video for early warning using static and dynamic features", IJRET: International Journal
of Research in Engineering and Technology, vol. 02, issue 11, Nov. 2013.
