One, introduction

Abstract: To narrow the suspected flame area as much as possible and to improve both the accuracy and the real-time performance of fire detection, this paper applies image-based moving-target detection to fire detection. First, the background subtraction method is used to extract the moving target, and an area threshold based on connected regions is then used to accurately extract the suspected area and its contour. Next, according to the visual features of an early fire, four feature quantities are extracted: the average growth rate of the red proportion, the rate of area change, the average shape similarity between adjacent frames, and the roundness of the contour. Finally, an SVM integrates these features for comprehensive discrimination. Experimental results show that the method is computationally fast, detects well, has a low misjudgment rate and good anti-interference ability, and provides a basis for image-target optimization control.

1 Introduction
To prevent fires, many fire detection technologies have been developed, and in some settings mature methods already exist, such as temperature and smoke detectors. In large-space buildings and outdoor environments, however, such detectors struggle to play their role: they trigger only when the temperature or smoke concentration reaches a certain level, so the alarm necessarily lags, which is not conducive to detecting a fire early. Image-based fire detection can make up for these shortcomings: it captures fire information immediately and then performs rapid detection. Early image fire detection focused mainly on identifying flame color characteristics and could not distinguish a real flame from objects of similar color. Recent studies have uncovered more features of the flame. Jin Huabiao et al. identified the occurrence of fire from the changes in area and similarity as the flame spreads. Zhang Zhengrong et al. used the number of sharp flame corners and shape similarity to judge whether a fire has occurred. Töreyin et al. used time-domain and spatial-domain wavelet transforms to analyze flicker and color changes inside flames. Wu Zheng et al. used a three-state Markov model to describe the spatio-temporal characteristics of flame and non-flame pixels, distinguishing flames from moving objects of similar color through the transitions between states. These methods can normally detect flames but are prone to misjudgment: traditional image fire detection considers only some characteristics of the flame rather than treating them comprehensively as a whole, so it is liable to missed and false detections, and detection is slow.
In view of this, this paper uses a moving-target detection method that exploits the changing shape and area of the early fire flame to extract the suspected flame region, then extracts feature values such as the flame's color distribution, the gradual growth of the flame area, and the changing edge, and finally uses an SVM to discriminate them synthetically.

2 Extraction of the suspected fire area and contour
The extraction of the suspected fire area and its contour is the basis of flame feature extraction and recognition, and extracting the flame target well has an important influence on the recognition accuracy and detection speed of the system. Traditional fire detection methods mostly extract the flame target area by gray-threshold segmentation. Although such methods can extract the target effectively, they also bring in a lot of noise, which adds computational work when the noise must later be eliminated and raises the risk of misjudgment. In view of this, this paper first uses background subtraction to roughly locate the suspicious area in the fire image, exploiting the fact that in the early stage of a fire the flame appears from nothing and keeps moving. The area threshold method based on connected regions is then used to accurately extract the suspected area and its contour. This method is not only simple but also fast, and it can effectively filter out noise and static objects that share the flame's color characteristics.

2.1 Location of the suspected fire area
After a fire breaks out, the flame keeps expanding as the fire grows, so the flame region shows a trend of continuous expansion in the image, and background subtraction can quickly segment the suspicious flame area from the fire image. Let I_t(x, y) be the current frame, B_t(x, y) the background image, and ΔI_t(x, y) the difference image; the background subtraction formula is:

ΔI_t(x, y) = |I_t(x, y) − B_t(x, y)|,  M_t(x, y) = 1 if ΔI_t(x, y) > T, else 0

where T is an adaptive threshold, chosen by maximum-entropy thresholding of the gray image, so that the target is separated from the surrounding background points as far as possible and interference regions are eliminated to the maximum extent.
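The background-subtraction rule above reduces to a per-pixel comparison. A minimal sketch, using plain Python lists for frames and a fixed threshold T (in the paper T is chosen adaptively by maximum-entropy thresholding):

```python
def background_subtract(current, background, T):
    """Return the binary motion mask M_t: 1 where |I_t - B_t| > T, else 0."""
    return [[1 if abs(c - b) > T else 0
             for c, b in zip(crow, brow)]
            for crow, brow in zip(current, background)]

# Toy example: a 3x3 frame where one pixel brightened sharply.
background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
current    = [[10, 10, 10], [10, 200, 10], [12, 10, 10]]
mask = background_subtract(current, background, T=30)
```

Small environment-induced changes (the pixel that moved from 10 to 12) fall under the threshold and stay in the background, which is exactly the behaviour the adaptive threshold is meant to guarantee.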
If the largest gray difference between the current image and the reference image is less than the threshold T, no flame target is considered present, because the background image may have changed slightly under environmental influences, such as small changes in the weather, and such changes should be treated as background. The motion image M_t(x, y) obtained by background subtraction with threshold segmentation may still contain a lot of noise, holes and other non-flame moving objects, which affect the detection result and performance; to reduce noise and improve detection efficiency, these areas should be excluded as far as possible. M_t(x, y) is therefore processed by median filtering and by the dilation and erosion operations of mathematical morphology, which removes fine details in the image, fills holes, connects short gaps, and reduces the number of regions to be examined. Figure 1 shows the suspicious flame regions obtained by background subtraction and by traditional threshold segmentation, respectively.

2.2 Extraction of the suspected area and contour
To facilitate flame feature extraction, the target area and contour must be extracted accurately. Traditional contour extraction mainly uses edge detection operators such as the Roberts Cross, Prewitt and Sobel operators. These are simple and fast, but sensitive to noise and poor at resisting interference: the extracted contours vary in thickness and break apart, requiring considerable time to repair and thin. To extract the target contour effectively while eliminating interference regions to the greatest extent, this paper uses a contour extraction algorithm based on a connected-area threshold. Specifically, the binary image M_t(x, y) is scanned from left to right and from top to bottom.
For every scanned pixel p(x, y) with gray value 255, the following processing is applied:
1) Check whether p(x−1, y) belongs to a connected region A_i. If it does, add p(x, y) to A_i and increase the area of A_i by 1; then check whether p(x+1, y−1) lies in another connected region A_j. If p(x+1, y−1) ∈ A_j, merge A_i and A_j into one region whose area is the sum of the areas of A_i and A_j; otherwise continue scanning the next pixel.
2) If p(x−1, y) ∉ A_i, check whether p(x−1, y−1) belongs to a connected region A_i. If so, add p(x, y) to A_i and increase the area of A_i by 1, then check whether p(x+1, y−1) is in another connected region A_j; if p(x+1, y−1) ∈ A_j, merge A_i and A_j, the merged area being the sum of the two; otherwise continue scanning the next pixel.
3) If p(x−1, y) ∉ A_i and p(x−1, y−1) ∉ A_i, check whether p(x, y−1) belongs to a connected region A_i. If so, add p(x, y) to A_i and increase its area by 1; otherwise create a new region A_{i+1}, add p(x, y) to it, and set its area to 1.
After this processing, every connected region and its area are obtained. An appropriate area value, selected from the statistics of the region areas, is then used as a threshold to filter the image, and only the connected regions whose area exceeds the threshold are retained. The contours of the retained regions are then obtained by hollowing out. This method not only removes noise but also yields non-intersecting contours of one-pixel width. Figure 2 shows the contour renderings of the Roberts Cross operator and of the method in this paper.
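A compact sketch of the area-threshold filtering and hollowing described above. For clarity it labels regions with a flood-fill pass rather than the paper's single-scan merge rules (the resulting regions and areas are the same); the toy mask and `min_area` value are illustrative only.

```python
def label_regions(mask):
    """8-connected component labeling via flood fill; returns label grid and areas."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    areas, next_label = {}, 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                stack, area = [(y, x)], 0
                labels[y][x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                    and not labels[ny][nx]:
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
                areas[next_label] = area
    return labels, areas

def area_filter_contour(mask, min_area):
    """Keep regions with area >= min_area, then hollow them out to 1-pixel contours."""
    labels, areas = label_regions(mask)
    h, w = len(mask), len(mask[0])
    keep = [[1 if labels[y][x] and areas[labels[y][x]] >= min_area else 0
             for x in range(w)] for y in range(h)]
    contour = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if keep[y][x]:
                # a kept pixel is on the contour if any 4-neighbour is background
                nb = [keep[ny][nx] if 0 <= ny < h and 0 <= nx < w else 0
                      for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))]
                if min(nb) == 0:
                    contour[y][x] = 1
    return contour

# Toy mask: a 3x3 blob plus one isolated noise pixel.
mask = [[0] * 6 for _ in range(6)]
for yy in range(1, 4):
    for xx in range(1, 4):
        mask[yy][xx] = 1
mask[0][5] = 1
contour = area_filter_contour(mask, min_area=4)
```

The noise pixel (area 1) is filtered out by the area threshold, and hollowing keeps only the blob's 8 border pixels, giving a one-pixel-wide contour.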
3 Extraction of flame image features
According to the characteristic information of the flame in the early stage of a fire, the average growth rate of the red proportion of the flame, the growth rate of the flame area, and the average similarity and roundness of the flame shape are selected as the basis for flame recognition.

3.1 Extraction of flame color features
Flame color is an important characteristic of flame. Analysis of a large number of flame images shows that flame color obeys a certain rule: in RGB color space, R channel > G channel > B channel, and the color shifts from white at the core to red at the outer flame. Flame color is easily affected by light, however, and appears different in different environments. Turgay et al. calculate the average of the R component in RGB space and then detect and judge pixel by pixel; the algorithm is simple, but not robust, being vulnerable to lighting and environment. To reduce the sensitivity to color change, the growth rate of the red proportion is used here to describe the color distribution of the flame. The red proportion of region k in frame i is calculated as:

R_{i,k} = Σ_{p∈A_k} R(p(x, y)) / Σ_{p∈A_k} [R(p(x, y)) + G(p(x, y)) + B(p(x, y))]

where R(p(x, y)), G(p(x, y)) and B(p(x, y)) are the R, G and B values of pixel p(x, y) in region A_k. The growth rate of the red proportion between two adjacent frames is then:

ΔR_k = R_{i+1,k} − R_{i,k}

To improve the accuracy of the calculation, the average growth rate of the red proportion over 5 consecutive frames is taken as the criterion.

3.2 Extraction of flame area features
After a fire breaks out, the burning area grows continuously and expansively, which appears in the image as the expansion of the target region. The extent of the expansion can be described by the area growth rate ΔA_k (k = 1, 2, …, M). Let S_i be the area of target region A_k in frame i and S_{i+j} its area in frame i + j (in video images, time is usually measured by frame count).
The area growth rate is then calculated as:

ΔA_k = (S_{i+j} − S_i) / S_i

In the experiments j is set to 5. The area growth rate ΔA_k can eliminate the interference of some fixed light sources and of fast-moving objects, and improves the recognition rate of the system.
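The two rates of Sections 3.1 and 3.2 reduce to a few arithmetic steps. The sketch below assumes the red proportion is ΣR / Σ(R+G+B) over a region's pixels, as reconstructed above, and averages frame-to-frame growth over a short window (5 frames in the paper); all sample values are made up.

```python
def red_proportion(pixels):
    """Sum(R) / Sum(R+G+B) over a region's (R, G, B) pixel tuples."""
    red = sum(p[0] for p in pixels)
    total = sum(p[0] + p[1] + p[2] for p in pixels)
    return red / total

def average_growth_rate(values):
    """Mean frame-to-frame increase over a window of consecutive frames."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    return sum(diffs) / len(diffs)

def area_growth_rate(area_i, area_i_plus_j):
    """(S_{i+j} - S_i) / S_i, with j = 5 frames in the experiments."""
    return (area_i_plus_j - area_i) / area_i

# Example: red proportions over 5 frames, and an area that grew 50% in j frames.
r_vals = [0.30, 0.32, 0.35, 0.36, 0.40]
avg_r_growth = average_growth_rate(r_vals)
area_growth = area_growth_rate(100, 150)
```

A spreading flame pushes both quantities positive, while a static red object (a lantern, a signboard) leaves them near zero.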

3.3 Extraction of flame shape similarity features
Observed in a single frame, the shape of a flame is irregular; observed across an image sequence, however, especially over a run of frames at short intervals, the flame shape has a certain similarity and varies only within a certain range, which clearly distinguishes it from fast-moving light sources and other moving objects of flame-like color. It can therefore serve as a basis for flame discrimination. The calculation is as follows. Let the video image sequence be f_i(x, y), i = 1, 2, …, N, where i is the frame index and (x, y) is the position of each pixel in the image, and let A_k, k = 1, 2, …, M, be the suspected regions of the image. The similarity of each suspected region between adjacent frames is then computed, and, to reduce computational complexity while improving accuracy, the average similarity over 5 consecutive frames is taken as the criterion.
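The paper's exact similarity formula is not legible in this copy; the sketch below assumes one common choice, the overlap ratio (intersection over union) of the suspected region's pixel sets in adjacent frames, averaged over a window of frames.

```python
def region_similarity(region_a, region_b):
    """Overlap ratio (IoU) of two pixel-coordinate sets; 1.0 means identical."""
    a, b = set(region_a), set(region_b)
    return len(a & b) / len(a | b)

def average_similarity(regions):
    """Average adjacent-frame similarity over a run of frames (5 in the paper)."""
    sims = [region_similarity(r1, r2) for r1, r2 in zip(regions, regions[1:])]
    return sum(sims) / len(sims)

# A flame region drifts slightly between frames, so its IoU stays high;
# a fast-moving headlight region barely overlaps itself and scores near 0.
frame1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
frame2 = {(0, 1), (1, 1), (0, 2), (1, 2)}
sim = region_similarity(frame1, frame2)
```

With this measure, a slowly deforming flame keeps the average well above the near-zero values produced by fast-moving flame-colored objects.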

3.4 Extraction of flame roundness features
Because of external influences and the fire-plume phenomenon, the edge of an early fire flame is constantly changing and irregular, while the edge contours of incandescent lamps, candles and neon lamps with similar color characteristics are relatively regular. The shape of the edge contour can therefore be used as a basis for fire flame identification. In computer vision systems, circularity (roundness) is usually used to represent the complexity of an object's edge contour; it is calculated as:

doc = 4πA_{i,k} / C_{i,k}²

where A_{i,k} is the area of region k in frame i, i.e. the number of all pixels in the region, and C_{i,k} is the perimeter of the region's contour.
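The roundness formula is easy to sanity-check: it equals 1 for a perfect circle and decreases as the contour grows more ragged, as a spreading flame edge does. A minimal sketch:

```python
import math

def roundness(area, perimeter):
    """Circularity 4*pi*A / C^2: 1.0 for a circle, smaller for irregular shapes."""
    return 4 * math.pi * area / (perimeter ** 2)

# A circle of radius 5 (area pi*r^2, perimeter 2*pi*r) scores exactly 1;
# a square of side 3 (area 9, perimeter 12) scores pi/4.
circle_score = roundness(math.pi * 25, 2 * math.pi * 5)
square_score = roundness(9, 12)
```

A regular lamp contour stays near 1 while a flame contour, with its long ragged perimeter relative to its area, drops well below it.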

4 Fire detection based on SVM
The SVM is a machine learning algorithm proposed by Cortes and Vapnik on the basis of the VC theory of statistical learning and the structural risk minimization principle; it seeks the optimal classification result under the condition of limited sample information. Its basic idea is to construct an optimal classification function for a two-class problem, so that the two classes are separated with as few mistakes as possible while the margin between them is maximized. Let (x_1, y_1), (x_2, y_2), …, (x_n, y_n) be the training samples, where x_i ∈ R^d is the i-th training sample and y_i ∈ {−1, 1} is its class label. The SVM seeks the optimal classification plane

w^T φ(x) + b = 0

that separates the two classes while making the distance between them, 2/‖w‖, the largest. Finding the optimal classification surface is therefore transformed into the optimization problem

min (1/2)‖w‖²  subject to  y_i (w^T φ(x_i) + b) ≥ 1, i = 1, …, n

This is a quadratic convex programming problem, usually solved by transforming it into its dual:

max Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j (φ(x_i)·φ(x_j))  subject to  Σ_i α_i y_i = 0, α_i ≥ 0

Solving for the values of α, the weight vector w and threshold b of the optimal classification surface can be calculated, giving the optimal classification function

f(x) = sgn(w^T φ(x) + b) = sgn(Σ_i α_i y_i (φ(x_i)·φ(x)) + b)

For the nonlinear case, a kernel function maps the original samples into a high-dimensional space where the problem becomes linearly separable, and the optimal classification function is

f(x) = sgn(Σ_i α_i y_i K(x_i, x) + b)

where K(x_i, x) is the kernel function. Common kernel functions are the polynomial kernel, the radial basis kernel and the Sigmoid kernel.
Fire is a kind of uncontrolled combustion. The combustion process is typically nonlinear: its state changes under the influence and constraints of the combustible material and the surrounding environment, and it has strong randomness. Using an SVM for fire detection can overcome the limitations of manually chosen thresholds and improve the real-time performance and accuracy of fire recognition. The specific procedure is as follows: ① extract the suspected flame area from the image using background subtraction and the connected-region area threshold method; ② compute the four characteristic parameters of the flame, i.e. the average growth rate of the red proportion, the area growth rate between adjacent frames, the roundness of the edge contour and the average shape similarity of adjacent frames, to construct the training sample data; ③ train the SVM classifier; ④ use the SVM classifier for detection. The specific process is shown in Figure 3.
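As a dependency-free illustration of step ③, a minimal linear SVM trained by sub-gradient descent on the hinge loss (Pegasos-style) is sketched below. The two features and labels are made-up toy values; a real system would train a library SVM with an RBF kernel on all four features, as Section 4 describes.

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=300):
    """X: feature vectors, y: labels in {-1, +1}; returns weights w and bias b."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # sample violates the margin: hinge-loss gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # otherwise only the regularizer shrinks w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function w.x + b."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy training data: illustrative (red-proportion growth, area growth) pairs
# for fire-like (+1) and non-fire (-1) samples.
X = [[0.8, 0.6], [0.9, 0.7], [0.7, 0.8], [0.1, 0.0], [0.2, 0.1], [0.0, 0.2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

The maximum-margin objective is what lets the detector tolerate the randomness of combustion: samples well inside either cluster are classified with slack rather than sitting on a hand-tuned threshold.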

Two, some source code

clear all; close all; clc;
% From a video frame sequence, subtract two adjacent frames to obtain the changing
% area; then, via image filtering, histogram equalization, binarization and
% dilation, use Canny to complete the extraction of the current flame contour.
Img = imread('1.png');   % read the original image
% Img2 = imread('2.png');
% Img_b = Img - Img2;
[M,N,K] = size(Img);
gray_R=Img(:,:,1);
gray_G=Img(:,:,2);
gray_B=Img(:,:,3);
% if K > 2
%     gray = rgb2gray(Img);
% else
%     gray = Img;
% end
% create a Gaussian filter
W = fspecial('gaussian', [5 5], 1);
G = imfilter(gray_G, W, 'replicate');
figure;
subplot(121); imshow(gray_G); title('Original image');
subplot(122); imshow(G); title('Filtered image');
% histogram equalization
[J,L] = histeq(G);
figure; subplot(121); imhist(G); subplot(122); imhist(J);
bw = im2bw(G, graythresh(G));
se1 = strel('disk', 5);           % create a disk structuring element of radius 5
Ifilt = imdilate(bw, se1);
figure, imshow(Ifilt);
Ifilt2 = bwareaopen(Ifilt, 150);  % binarize first, then clean up via connected regions
% boundary detection
contour = edge(Ifilt2, 'canny');
figure(3) 
imshow(contour); title('border')  
contour=double(contour);
t=1;
for i=1:M
    for j=1:N
        if(contour(i,j)==1)
            image_points_x(t,1)=i;   % row coordinates of contour points
            image_points_y(t,1)=j;   % column coordinates of contour points
            t=t+1;
        end
    end
end
figure,imshow(G); hold on;
scatter(image_points_y,image_points_x,'r','+')
%% Feature extraction (flame color, flame area, average similarity and roundness)
% Flame color satisfies R > G > B; the growth rate of the red proportion is used
% to describe the flame color distribution law.
Ifilt2=double(Ifilt2);
t1=1;
for i=1:M
    for j=1:N
        if(Ifilt2(i,j)==1)
            image_ROI_x(t1,1)=i;   % region points: their count is the flame area
            image_ROI_y(t1,1)=j;   % (number of pixels inside the extracted contour)
            t1=t1+1;
        end
    end
end

num_point=size(image_ROI_x,1);
R_S=0;
G_S=0;
B_S=0;

for i=1:num_point
    R_S=R_S+double(gray_R(image_ROI_x(i,1),image_ROI_y(i,1)));  % sum the red values
    G_S=G_S+double(gray_G(image_ROI_x(i,1),image_ROI_y(i,1)));
    B_S=B_S+double(gray_B(image_ROI_x(i,1),image_ROI_y(i,1)));
end
Img_S = R_S + G_S + B_S;
r_Aver = R_S / Img_S;                 % red proportion
cir = size(image_points_x, 1);        % perimeter: number of contour points
Area = num_point;                     % area: number of region points
doc = (4*pi*num_point)/(cir*cir);     % roundness
features = [r_Aver, num_point, doc];
% features = [r_Aver, num_point, simil, doc];
save('features', 'features');
load('features.mat');   % the extracted features
ttt = 1;
for ii = 1:size(path1,1)-2
    if(ii <= (size(path1,1)-2)/2)
        labels(ii,1) = ttt;       % first half: positive class (+1)
    else
        labels(ii,1) = ttt - 2;   % second half: negative class (-1)
    end
end

desc_new=[];
for i = 1:size(path1,1)-2
    desc_new = [desc_new; features{i}'];
end
%% SVM
dataset = features;
lableset = labels;
train_set = [dataset(1:15,:); dataset(26:40,:)];
train_set_labels = [lableset(1:15); lableset(26:40)];
test_set = [dataset(16:25,:); dataset(41:50,:)];
test_set_labels = [lableset(16:25,:); lableset(41:50,:)];
% data preprocessing: normalize the training and test sets to [0,1] with mapminmax
test_dataset = [train_set; test_set];
[dataset_scale, ps] = mapminmax(test_dataset', 0, 1);
dataset_scale = dataset_scale';
 
train_set = dataset_scale(1:mtrain,:);
test_set = dataset_scale((mtrain+1):(mtrain+mtest), :);
%% SVM network training
%% Result analysis
% Actual and predicted classification of the test set; as can be seen from the
% figure, only one test sample is misclassified.
figure; hold on;
plot(test_set_labels,'o');
plot(predict_label,'r*');
xlabel('Test set sample', 'FontSize', 12);
ylabel('Category label', 'FontSize', 12);
legend('Actual Test Set Classification', 'Predictive Test Set Classification');
title('Actual and predicted classification of test sets', 'FontSize', 12);
grid on;




3. Operation results

Four, Matlab version and references

1 Matlab version: 2014a

2 References
[1] Cai Limei. MATLAB Image Processing: Theory, Algorithms and Case Analysis [M]. Tsinghua University Press, 2020.
[2] Yang Dan, Zhao Haibin, Long Zhe. Detailed Examples of MATLAB Image Processing [M]. Tsinghua University Press, 2013.
[3] Zhou Pin. MATLAB Image Processing and Graphical User Interface Design [M]. Tsinghua University Press, 2013.
[4] Liu Chenglong. Proficient in MATLAB Image Processing [M]. Tsinghua University Press, 2015.
[5] Wang Wenhao. Fire Detection Based on Connected Region and SVM Feature Fusion [J].