
(Thesis) Some methods of feature extraction and fire detection from image data





MINISTRY OF NATIONAL DEFENCE
MILITARY TECHNICAL ACADEMY

HA DAI DUONG

APPROACHES TO VISUAL FEATURE EXTRACTION AND FIRE DETECTION BASED ON DIGITAL IMAGES

Major: Mathematical Foundations for Informatics
Code: 62 46 01 10

ABSTRACT OF PHD THESIS IN MATHEMATICS

HA NOI - 2014

THIS THESIS WAS COMPLETED AT MILITARY TECHNICAL ACADEMY - MINISTRY OF NATIONAL DEFENCE

Scientific supervisor: Assoc. Prof. Dr. Dao Thanh Tinh
Reviewer 1: Assoc. Prof. Dr. Nguyen Duc Nghia
Reviewer 2: Assoc. Prof. Dr. Dang Van Duc
Reviewer 3: Assoc. Prof. Dr. Nguyen Xuan Hoai

The thesis was evaluated by the examination board of the academy, established by decision number .../... of the Rector of Military Technical Academy, meeting at Military Technical Academy on .../.../...

This thesis can be found at:
- Library of Le Quy Don Technical University
- National Library of Vietnam

ABSTRACT

Automatic fire detection has attracted interest for a long time because fire causes large-scale damage to people and property. To date, several kinds of automatic detection devices have been invented, such as smoke detectors, flame or radiation detectors, and gas detectors. Although these traditional fire detection devices have proven useful, they have some limitations: they are generally restricted to indoor use, require close proximity to the fire, and most of them cannot provide additional information about the fire's circumstances. Recently, a new approach to automatic fire detection based on computer vision has attracted growing interest from researchers; it offers several advantages over traditional detectors and can be used as a complement to existing systems. This technique can detect fire from a distance in large open spaces and can give more useful information about the fire, such as its size, location and growth rate; in particular, it has the potential to raise an alarm early.

This research concentrates on early fire detection based on computer vision. First, some techniques used in the literature on automatic fire detection are reviewed. Second, several visual features of fire regions for early fire detection are examined in detail, including a model of fire-color pixels, a model of temporal change detection, a model of textural analysis, a model of flickering verification, and a novel model of the spatial structure of fire regions. Finally, three models of vision-based fire detection at the early stage of fire are presented: a model of early fire detection in the general use-case (EVFD), a model of early fire detection in weak-light environments (EVFD_WLE), and a model of early fire detection in the general use-case using SVM (EVFD_SVM).

CHAPTER 1. INTRODUCTION

1.1 The automatic fire detection problem

Automatic fire detection has attracted interest for a long time because fire causes large-scale damage to people and property. Heat or thermal detectors are the oldest type of automatic detection device, originating in the mid-19th century, with several types still in production today. Since then, other kinds of automatic detection devices have been invented, such as smoke detectors, flame or radiation detectors, and gas detectors. Although these traditional fire detection devices have proven useful, they have some limitations. Despite the advances in traditional fire alarm technology over the last century, losses caused by fire, such as deaths, permanent injuries, and damage to property and the environment, continue to increase. To reduce these losses, timely detection, early fire localization and detection of fire propagation are essential.

The problem of fire detection based on computer vision was initiated in the early 1990s by Healey G. et al.; since then, various approaches to this issue have been proposed. However, vision-based fire detection is not a completely solved problem, as is true of most computer vision problems. The visual features of the flames and smoke of an uncontrolled fire depend on distance, illumination and burning materials. In addition, cameras are not color and/or spectral
measurement devices: they have sensors with different algorithms for color and illumination balancing, and therefore they may produce different images and video for the same scene. Consequently, most proposed methods in vision-based fire detection return good results under some use-case conditions and may give bad results under others. In particular, existing vision-based fire detection methods do not pay adequate attention to early alarms.

1.2 Research objective

For all the above reasons, the author has studied the topic "Approaches to visual feature extraction and fire detection based on digital images", with the main interest in the problem of vision-based fire detection at the early stage of fire. The main question, and also the motivation for this research, is: can vision-based fire detection raise a fire alarm as soon as possible at the early stage of fire? This thesis seeks answers to that question in several different use-cases, such as general conditions, weak-light environments, and a dynamic camera. The objectives of this research include the following:
1) First, some techniques that have been used for fire detection based on computer vision are reviewed.
2) Second, several visual features of fire regions, such as color, texture, temporal change, flicker and spatial structure, are examined in detail in order to reduce the computational complexity of the algorithms.
3) Third, several models of early fire detection based on computer vision are developed. The development of each model relies on an analysis of the use-case, such as building and office surveillance or warehouses with weak-light environments; intelligent classification is also applied to make the models more suitable and accurate.

1.3 Contributions

This thesis makes the following contributions:
- It develops and proposes several methods for extracting visual features of fire regions: four new methods of pixel or fire-region segmentation, comprising a method of fire-color pixel detection based on Bayes classification in RGB space, a method of temporal change detection, a method of textural analysis, and a method of flickering verification; and it proposes a novel approach to the spatial structure of fire regions using top and ring features.
- It proposes a model of vision-based early fire detection for the general use-case, EVFD. This model is a combination of temporal change analysis, pixel classification based on a fire-color process, and flickering verification.
- It proposes a model of vision-based early fire detection for weak-light environments, EVFD_WLE. This proposal combines pixel classification based on a fire-color process with analysis of the spatial structure of the fire region; these processes are carried out when the environmental light is weak.
- It proposes a model of vision-based early fire detection for the general use-case using SVM, EVFD_SVM. In this model, the algorithm consists of three main tasks: pixel-based processing using the fire-color process for pixel classification, temporal change detection, and recovery of missing pixels; extraction of textural features from potential fire regions; and SVM classification to distinguish a potential fire region as a fire or non-fire object.

1.4 Thesis outline

This thesis is organized as follows. Chapter 1, Introduction, presents the need for fire detection based on computer vision, the disadvantages of traditional fire detection systems, and the advantages of vision-based fire detection. It also describes the research problem, the research question, the main contributions and the structure of the thesis. Chapter 2, Fire detection techniques based on computer vision: A review, reviews some techniques that have been used for fire detection based on computer vision. Chapter 3, Visual feature extraction for fire detection, examines in detail several visual features of fire regions for early fire detection; it then develops four new models of pixel or fire-region segmentation and proposes
a novel model of the spatial structure of fire regions. Chapter 4, Early fire detection based on computer vision, presents three models of vision-based fire detection: early fire detection in the general use-case, early fire detection in weak-light environments, and early fire detection in the general use-case using SVM. Chapter 5, Conclusions and Discussions, states the conclusions, summarizes the contributions and results obtained throughout the thesis, and gives recommendations for future research.

CHAPTER 2. FIRE DETECTION BASED ON COMPUTER VISION: A REVIEW

2.1 Introduction

Automatic fire detection has long attracted interest owing to the large-scale damage fire causes to people and property. Heat or thermal detectors are the oldest type of automatic detection device, originating in the mid-19th century. Since then, other kinds of automatic detection devices, for example smoke detectors, flame or radiation detectors, and gas detectors, have been developed. Although these devices have proven useful under some conditions, they have limitations: they are generally restricted to indoor use, require close proximity to the fire, and most of them cannot provide additional information about the fire's circumstances and may take a long time to raise an alarm.

Fire detection based on computer vision can be dated to the research of Healey G. et al. in the early 1990s. Since then, various approaches to this issue have been proposed. The general scheme of vision-based fire detection is a combination of two components: the analysis of visual features and classification techniques. The visual features include color, temporal changes, spatial variance, texture and flickering. The classification techniques are used to classify a pixel as fire or non-fire, or to distinguish a potential fire region as a fire or non-fire object; these techniques include the Gaussian Mixture Model (GMM), Bayes classification, the Support Vector Machine (SVM), Markov models, neural networks, etc.

2.2 Visual feature analysis

2.2.1 The chromatic color

Color is one of the most important and earliest features used in vision-based fire detection. The majority of color-based approaches make use of the RGB color space, sometimes in combination with the HSI/HSV color space. Fire-color models often used in the literature on vision-based fire detection include statistically generated color models and Gaussian Mixture Models (GMM). Based on an analysis of the color of flame in the red-yellow range, a common type of flame in the real world, a fire-color model for segmenting a pixel is given by the rules

$RC_1: R > R_T$, $\quad RC_2: R \ge G \text{ and } G > B$, $\quad RC_3: S \ge (255 - R) \cdot S_T / R_T$,

and the fire-color model is defined as

$FireC(x, y) = \begin{cases} 1 & \text{if } RC_1, RC_2 \text{ and } RC_3 \text{ all hold} \\ 0 & \text{otherwise} \end{cases}$

where R, G and B are the red, green and blue components of the pixel at (x, y), S is the saturation component in the HSI color space, and $S_T$ and $R_T$ are two experimental factors. Several other works detect fire-color pixels using more complex models such as the Gaussian mixture model; in this model, a given pixel is considered a fire-color pixel if its color value falls inside one of the distributions. Denote by $d(r_1, g_1, b_1, r_2, g_2, b_2)$ the distance from $(r_1, g_1, b_1)$ to $(r_2, g_2, b_2)$ in three-dimensional RGB space. The fire-color model based on the GMM is described as

$FireTr(x, y) = \begin{cases} 1 & \text{if } \exists i : d(R, G, B, R_i, G_i, B_i) \le 2 v_i, \; i \le 10 \\ 0 & \text{otherwise} \end{cases}$

in which $R_i$, $G_i$ and $B_i$ are the means of the red, green and blue components of the i-th Gaussian distribution, and $v_i$ is its standard deviation.

2.2.2 The temporal changes

A color model alone is not enough to identify fire pixels or fire regions; many objects share the same color as fire. An important visual feature for distinguishing between fire and fire-like objects is the temporal change of fire. To analyze the temporal changes that a flame may cause, most proposals assume that the camera is stationary.
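As an illustrative sketch (not the thesis's implementation), the rule-based chromatic model $RC_1$-$RC_3$ of Section 2.2.1 can be coded directly; the threshold values `r_t` and `s_t` below are assumptions standing in for the experimental factors $R_T$ and $S_T$:

```python
def hsi_saturation(r, g, b):
    """Saturation component of the HSI color model, in [0, 1]."""
    total = r + g + b
    if total == 0:
        return 0.0
    return 1.0 - 3.0 * min(r, g, b) / total

def is_fire_color(r, g, b, r_t=115.0, s_t=0.45):
    """Apply the three chromatic rules RC1-RC3 to one RGB pixel.

    r_t (R_T) and s_t (S_T) are illustrative values, not the tuned ones.
    """
    s = hsi_saturation(r, g, b)
    rc1 = r > r_t                        # RC1: red component above a threshold
    rc2 = r >= g and g > b               # RC2: R >= G > B ordering
    rc3 = s >= (255.0 - r) * s_t / r_t   # RC3: saturation lower bound
    return rc1 and rc2 and rc3
```

Applied per pixel, this yields the mask $FireC(x, y)$; a practical implementation would vectorize the test over the whole frame.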
A simple approach to estimating the background is to average the observed image frames of the video. Let $I(x, y, n)$ be the intensity value of the pixel at location (x, y) in the n-th frame. The background intensity value $B(x, y, n+1)$ at the same pixel position is calculated as

$B(x, y, n+1) = \begin{cases} a B(x, y, n) + (1 - a) I(x, y, n) & \text{if } (x, y) \text{ is stationary} \\ B(x, y, n) & \text{if } (x, y) \text{ is moving} \end{cases}$

where $B(x, y, n)$ is the previous estimate of the background intensity at the same pixel position, and the update parameter $a$ is a positive real number close to one. Initially, $B(x, y, 0)$ is set to the first frame, $I(x, y, 0)$. The pixel at (x, y) is assumed to be moving if

$|I(x, y, n) - I(x, y, n-1)| > T(x, y, n)$

where $I(x, y, n-1)$ is the intensity value of the pixel at (x, y) in the (n-1)-th frame, and $T(x, y, n)$ is a recursively updated threshold at (x, y) for frame n. Another method commonly used to analyze temporal changes is frame differencing.

2.2.3 The textural and spatial difference

The flames of an uncontrolled fire have varying colors even within a small area, and spatial color difference analysis focuses on this characteristic. Using range filters, variance/histogram analysis, or spatial wavelet analysis, the spatial variation in pixel values is analyzed to distinguish between fire and fire-like objects. Using wavelet analysis, Toreyin et al. compute a value $v$ to estimate spatial variations as

$v = \frac{1}{MN} \sum_{x, y} \left( s_{lh}(x, y)^2 + s_{hl}(x, y)^2 + s_{hh}(x, y)^2 \right)$

where $s_{lh}(x, y)$, $s_{hl}(x, y)$ and $s_{hh}(x, y)$ are the low-high, high-low and high-high sub-images of the wavelet transform, respectively, and $MN$ is the number of pixels in the potential fire region. If the decision parameter $v$ exceeds a threshold, the region under investigation is likely a fire region. Alternatively, Borges et al. use a well-known metric, the variance, to indicate the amount of coarseness in the pixel values. For a potential fire region R, the variance of the pixels is computed as

$c = \sum_{(x, y) \in R} (I(x, y) - \bar{I})^2 \, p(I(x, y))$

in which $I(x, y)$ is the intensity of the pixel at (x, y), $p(\cdot)$ is the normalized histogram, and $\bar{I}$ is the mean intensity in R. Fire is then assumed if the region has variance $c > \lambda_\sigma$, where $\lambda_\sigma$ is determined from a set of experimental analyses.

2.3 Classification techniques

Popular approaches to classifying the multidimensional feature vectors obtained from each candidate flame region are Bayes classification and SVM classification. Other classification methods used in the literature on vision-based fire detection include neural networks, Markov models, etc. This section introduces the two classification methods used in this research: Bayes and SVM classification.

2.4 Conclusion

The development of computer-vision applications for fire detection that can raise an alarm quickly and accurately is essential. However, vision-based fire detection is not a completely solved problem, as is true of most computer vision problems. The visual features of the flames of an uncontrolled fire depend on distance, illumination and burning materials. In addition, cameras are not color and/or spectral measurement devices: they have sensors with different algorithms for color and illumination balancing, and therefore may produce different images and video for the same scene. In general, most proposed methods in vision-based fire detection return good results under some use-case conditions and may give bad results under others; in particular, current vision-based fire detection methods do not pay adequate attention to early alarms. For these reasons, research on vision-based fire detection, and on using this technique for early fire detection, remains an important issue.

CHAPTER 3. VISUAL FEATURE EXTRACTION FOR FIRE DETECTION

This chapter examines in detail several visual features of fire regions for early fire detection; it then develops four new models of pixel or
fire-region segmentation, comprising a model of fire-color pixels, a model of temporal change detection, a model of textural analysis and a model of flickering verification; and it proposes a novel model of the spatial structure of fire regions.

3.1 A new approach to color extraction

3.1.1 Chromatic analysis

The fire-color model is usually used in the first step of the process and is crucial to the final result. The general idea of most proposals in the VFD literature is to determine a fire-color model $Fire(x, y)$ for the pixel at (x, y) and then use it to build the potential fire mask

$PFM(x, y) = \begin{cases} 1 & \text{if } Fire(x, y) \\ 0 & \text{otherwise} \end{cases}$

The mask PFM is then used to analyze the other characteristics of fire, such as temporal changes, deformation of the boundary, surface statistical parameters, etc. The main drawback of existing fire-color detection models is that they are fixed: a model returns good results in some situations and bad results in others. For more flexibility, this study proposes a color model for pixels in fire regions using Bayesian classification; relying on the red (R), green (G) and blue (B) components, a fire-color model that classifies a pixel into two classes, fire or non-fire, is developed.

3.1.2 Classification based on Bayes

For the pixel p at (x, y), the vector $v = [R, G, B]^T$ is taken as the sample for the classification problem, in which R, G and B are the red, green and blue components of p. Let $g_1(v)$ and $g_2(v)$ be two discriminant functions based on Bayesian classification for the fire and non-fire classes of pixel p; if $g_1(v) > g_2(v)$ then p belongs to the fire class, otherwise p belongs to the non-fire class. Denote by $\omega_1$ the set of fire-class samples and by $\omega_2$ the set of non-fire-class samples. The Bayesian discriminant functions are defined as

$g_1(v) = v^T W_1 v + w_1^T v + c_1, \qquad g_2(v) = v^T W_2 v + w_2^T v + c_2,$

in which, for $i = 1, 2$,

$W_i = -\tfrac{1}{2} C_i^{-1}, \qquad w_i = C_i^{-1} m_i, \qquad c_i = -\tfrac{1}{2} m_i^T C_i^{-1} m_i - \tfrac{1}{2} \ln |C_i| + \ln P(\omega_i),$

where $m_1$ and $C_1$ are the mean and covariance matrix of $\omega_1$, and $m_2$ and $C_2$ are the mean and covariance matrix of $\omega_2$.
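A sketch of the quadratic Bayes discriminant above, using NumPy; in practice the class statistics ($m_i$, $C_i$, $P(\omega_i)$) are estimated from training samples, and the values used in the usage example below are toy assumptions:

```python
import numpy as np

def discriminant(v, mean, cov, prior):
    """g(v) = v^T W v + w^T v + c with W = -C^{-1}/2, w = C^{-1} m,
    c = -m^T C^{-1} m / 2 - ln|C| / 2 + ln P(omega)."""
    cinv = np.linalg.inv(cov)
    W = -0.5 * cinv
    w = cinv @ mean
    c = (-0.5 * mean @ cinv @ mean
         - 0.5 * np.log(np.linalg.det(cov))
         + np.log(prior))
    return float(v @ W @ v + w @ v + c)

def is_fire_pixel(rgb, fire, nonfire):
    """fire / nonfire: (mean, covariance, prior) tuples for omega_1, omega_2."""
    v = np.asarray(rgb, dtype=float)
    return discriminant(v, *fire) > discriminant(v, *nonfire)
```

With equal priors and equal covariances, the rule reduces to a nearest-mean decision in Mahalanobis distance, which makes its behavior easy to sanity-check.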
The fire-color model defined by these discriminant functions is denoted ColorF.

The time performance of the proposed temporal change detection model (CMCC), in comparison with frame differencing and background subtraction, is shown in Table 1, and the quality of temporal change detection is shown in the figures.

Figure: The scheme of partitioning two frames for temporal analysis

Table 1. The comparison of time performance
Method | Time performance per frame (ms)
Frames difference | 23.7
Background subtraction | 38.8
CMCC | 24.7

Figure: Example results of three temporal change detection techniques

Figure: The ROC curve of temporal change detection

The ROC (Receiver Operating Characteristic) curve of temporal change detection is evaluated as a function of the threshold T. Based on this evaluation, when the threshold T = 0.025, the true positive fraction equals 95% and the false positive fraction is 6%.

3.2.2 Textural analysis

Intuitively, fire has unique visual signatures such as color and texture. The textural features of a fire region used here include the average values of the red, green and blue components, the skewness of the red-component histogram, and the surface coarseness. Denote by PFR a potential fire region containing K pixels. The textural features on PFR are computed as follows. The average values of the red, green and blue components are

$x_1 = \frac{1}{K} \sum_{(x,y) \in PFR} R(x, y), \quad x_2 = \frac{1}{K} \sum_{(x,y) \in PFR} G(x, y), \quad x_3 = \frac{1}{K} \sum_{(x,y) \in PFR} B(x, y).$

Let $p'(r)$ be the normalized histogram of the red component in PFR, and let $m_2$ and $m_3$ be the variance and third moment of $p'(r)$; the skewness of $p'(r)$, $x_4 = m_3 / m_2^2$, is considered a textural feature. Let $p(r)$ be the normalized histogram of gray levels in PFR; the variance $x_5 = \sum_{r=0}^{L-1} p(r) (r - m)^2$ and the third moment $x_6 = \sum_{r=0}^{L-1} p(r) (r - m)^3$ of $p(r)$ are two further features, where L is the number of gray levels in the image and m is the mean gray level. For each candidate region, the eight features mentioned above are evaluated to construct the vector $v = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]^T$.
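The recoverable region-level texture features $x_1$-$x_6$ can be sketched as follows; the skewness normalization $m_3 / m_2^2$ follows the reconstruction above and is an assumption, and for brevity the gray level is approximated by the red channel:

```python
def texture_features(region):
    """region: iterable of (r, g, b) tuples from a potential fire region.

    Returns [x1..x6]: mean R, G, B; skewness of the red values
    (normalized here as m3 / m2**2, an assumption); and the variance
    and third central moment, with the red channel standing in for
    the gray level.
    """
    pixels = list(region)
    k = len(pixels)
    x1 = sum(p[0] for p in pixels) / k   # mean red
    x2 = sum(p[1] for p in pixels) / k   # mean green
    x3 = sum(p[2] for p in pixels) / k   # mean blue
    reds = [p[0] for p in pixels]
    m2 = sum((r - x1) ** 2 for r in reds) / k   # variance (2nd central moment)
    m3 = sum((r - x1) ** 3 for r in reds) / k   # 3rd central moment
    x4 = m3 / m2 ** 2 if m2 > 0 else 0.0        # skewness-style feature
    x5, x6 = m2, m3
    return [x1, x2, x3, x4, x5, x6]
```

Summing over the pixel values directly is equivalent to summing over the normalized histogram weighted by $p(r)$, so the histogram itself need not be built explicitly.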
This vector is used to indicate whether a candidate region contains fire by applying a Bayes classifier. Let $g_{FR}(v)$ and $g_{NR}(v)$ be the decision functions for fire and non-fire; the textural model for a potential fire region PFR is defined as

$TextureF(PFR) = \begin{cases} 1 & \text{if } g_{FR}(v) > g_{NR}(v) \\ 0 & \text{otherwise} \end{cases}$

Figure: The number of misclassified pixels in comparison with TextureF

In comparison with the ColorF model and Chen's model, the total numbers of misclassifications of the three methods are shown in the figure.

3.2.3 Flickering analysis

To describe the flickering of fire, this work uses the change in width and height of the fire region to distinguish it from non-fire regions. Three consecutive frames and the sizes of their fire regions are shown in the figure.

Figure: Three consecutive frames and the sizes of their fire regions

Assume PFR is a potential fire region, a and b are the width and height of the rectangle that contains PFR, and r is the ratio between a and b; the flickering of the fire region leads to changes in a, b and r. To estimate the change of a, b and r between two consecutive frames, the changes of these parameters are first computed as

$a_c = \begin{cases} 1 & \text{if } |a_1 - a_2| > a_0 \\ 0 & \text{otherwise} \end{cases}, \quad b_c = \begin{cases} 1 & \text{if } |b_1 - b_2| > b_0 \\ 0 & \text{otherwise} \end{cases}, \quad r_c = \begin{cases} 1 & \text{if } |r_1 - r_2| > r_0 \\ 0 & \text{otherwise} \end{cases}$

where $a_1$, $b_1$ and $r_1$ are computed from the potential fire region in the previous frame; $a_2$, $b_2$ and $r_2$ are computed from the fire region in the current frame; and $a_0$, $b_0$ and $r_0$ are three experimental thresholds. Finally, the flickering of PFR is defined as

$FlickerF(PFR) = \begin{cases} 1 & \text{if } a_c \vee b_c \vee r_c \\ 0 & \text{otherwise} \end{cases}$

3.3 A novel approach to spatial structure extraction

This section presents a novel model for fire-region verification. The spatial structure of a fire region is considered in terms of the ring and top features of the region.

3.3.1 Ring feature of a fire region

Assume $\Lambda$ is the set of pixels in image I that satisfy $\Lambda = \{(x, y) \in I : Fire(x, y) = \text{True}\}$, in which $Fire(x, y)$ is a fire-color model. The fuzzy clustering technique fuzzy C-means (FCM) is used to
cluster $\Lambda$ (in RGB space) into K classes; an example of the clustering is shown in the figure.

Figure: An example of the ring feature of a fire region

Let $\Lambda^{(1)}, \Lambda^{(2)}, \ldots, \Lambda^{(K)}$ be the K classes of $\Lambda$ produced by FCM clustering. Consider a pixel $p_k \in I$ at (x, y); its neighbors in the same column or row, denoted $O_4(p_k)$, are defined as $O_4(p_k) = \{P(x', y') \in I : |x' - x| + |y' - y| = 1\}$, where $P(x', y')$ is the pixel at (x', y') in I. Let $\Lambda^{(0)} = \emptyset$. The set $\Lambda$ has the spatial ring feature if the K-partition $\Lambda^{(1)}, \Lambda^{(2)}, \ldots, \Lambda^{(K)}$ satisfies

$\forall p \in \Lambda^{(i)} \; \exists M \in O_4(p) : M \in \Lambda^{(i-1)} \cup \Lambda^{(i)} \cup \Lambda^{(i+1)}, \quad i = 1, \ldots, K \quad (*)$

so the rule for checking whether $\Lambda$ (or image I) has the ring feature is defined as

$r(\Lambda) = \begin{cases} 1 & \text{if expression } (*) \text{ holds} \\ 0 & \text{otherwise} \end{cases}$

3.3.2 Top feature of a fire region

Intuitively, the hot air of a fire is less dense than its surroundings and moves upward, so the top of the flame tends to form in the middle. To capture this characteristic, the top structure of a fire region is described as follows. First, find two points $A(x, y)$ and $B(z, y)$ in $\Lambda$ that satisfy $|x - z| = \max\{|i - k| : M(i, j) \in \Lambda, N(k, j) \in \Lambda\}$. With $A(x, y)$ and $B(z, y)$ chosen as above, the part of $\Lambda$ lying above AB, denoted $\Lambda^{AB}$, is determined as $\Lambda^{AB} = \{P(i, j) \in \Lambda : j \ge y\}$. Second, choose a point $C(a, b)$ such that $b \ge y'$ for every $M(x', y') \in \Lambda^{AB}$. Then the two following parameters can be computed:

$\tau_1 = |\{P \in \triangle ABC, P \notin \Lambda^{AB}\}| \,/\, |\Lambda^{AB}|, \qquad \tau_2 = |\{P \in \Lambda^{AB}, P \notin \triangle ABC\}| \,/\, |\Lambda^{AB}|.$

Figure: An example of the top feature of a fire region

The rule for checking whether $\Lambda$ (or image I) has the top feature is defined as

$t(\Lambda) = \begin{cases} 1 & \text{if } \tau_1 \le \theta_{01} \text{ and } \tau_2 \le \theta_{02} \\ 0 & \text{otherwise} \end{cases}$

in which $\theta_{01}$ and $\theta_{02}$ are chosen by experiment. If triangle ABC contains most of the pixels in the upper part of the fire region, the values of $\tau_1$ and $\tau_2$ are clearly not large. The figure illustrates the structure of the flame with the triangle ABC described above.
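A sketch of the top-feature test of Section 3.3.2 on an integer pixel grid; `t1_max` and `t2_max` stand in for the experimental thresholds $\theta_{01}$ and $\theta_{02}$:

```python
def _side(p, a, b):
    """Signed-area test: which side of segment ab the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_triangle(p, a, b, c):
    """True if p lies inside or on triangle abc (either orientation)."""
    d1, d2, d3 = _side(p, a, b), _side(p, b, c), _side(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def has_top_feature(region, a, b, c, t1_max=0.2, t2_max=0.2):
    """region: set of (x, y) pixels of Lambda^AB; a, b are the extremal
    points and c the apex.  Computes tau_1 and tau_2 against illustrative
    thresholds."""
    xs = (a[0], b[0], c[0])
    ys = (a[1], b[1], c[1])
    # Rasterize the triangle over its bounding box.
    tri = {(x, y)
           for x in range(min(xs), max(xs) + 1)
           for y in range(min(ys), max(ys) + 1)
           if in_triangle((x, y), a, b, c)}
    n = len(region)
    tau1 = len(tri - region) / n   # triangle pixels not in the region
    tau2 = len(region - tri) / n   # region pixels outside the triangle
    return tau1 <= t1_max and tau2 <= t2_max
```

A region that exactly fills the triangle gives $\tau_1 = \tau_2 = 0$ and passes; a region disjoint from the triangle fails both bounds.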
3.3.3 Experiments

In these experiments, 563 images are used. They are divided into three categories: A) images containing a single fire region at the early stage of a fire (157 images); B) images containing complex scenes where the fire has broken out (185 images); and C) images containing no fire but some fire-like objects (221 images). For each group, 20 images are selected randomly, and the results show that the separation between groups A and B is quite clear; the values of $\tau_1$ and $\tau_2$ in groups A and B are almost always small. The evaluation of the ring feature on the tested images is shown in Table 2.

Table 2. The results of the ring-feature test on three image groups
Image group | Number of images | Ring detected (number) | Ring detected (%)
Group A - images with one fire region | 157 | 155 | 99
Group B - images with some fire regions | 185 | 12 |
Group C - images without fire region | 221 | 203 | 92

3.4 Summary

The visual features of fire regions play an important role in vision-based fire detection. In this study, five visual features of fire regions are examined in detail; four new models of pixel or fire-region segmentation are developed, comprising a model of fire-color pixels ([9]), a model of temporal change detection ([1], [2]), a model of textural analysis and a model of flickering verification ([1], [2], [4]); and a novel model of the spatial structure of fire regions is proposed ([7]).

CHAPTER 4. EARLY FIRE DETECTION BASED ON COMPUTER VISION

This chapter presents three models of vision-based fire detection: early fire detection in the general use-case, early fire detection in weak-light environments, and early fire detection in the general use-case using SVM.

4.1 Early fire detection in the normal use-case

4.1.1 General use-case

This section presents an approach to the problem of early vision-based fire detection for use under general conditions, with the following assumptions: the camera is static; the burning material is common, such as paper, wood, etc.; and the fire is not too far from the camera.

4.1.2 The EVFD algorithm

The model is a combination of temporal analysis using a correlation coefficient, color analysis based on the RGB color
space, and flickering analysis, as shown in the figure.

Figure: The scheme of the EVFD algorithm (input frames -> temporal change detection using CMCC -> color detection using ColorF -> flicker analysis using FlickerF -> output: fire alarm)

The details of the EVFD algorithm: two consecutive frames, the previous frame I and the current frame J, of size m×n, are the inputs. The output is a Boolean variable A: TRUE indicates that J contains fire, FALSE otherwise.

The algorithm EVFD
Input: previous frame I, current frame J, integers d, h.
Output: Boolean variable A (TRUE - fire, FALSE - non-fire).
1. Declare and initialize variables: int a, b; A = FALSE; a = m/h; b = n/d.
2. Compute the change map CM using the CMCC model:
   a. For k = 1..h and q = 1..d, calculate the correlation coefficient between each a×b region of I and the corresponding region of J, and assign it to CH(k, q):

$CH(k, q) = \dfrac{\sum_{i=1}^{a} \sum_{j=1}^{b} I([k-1]a + i, [q-1]b + j) \cdot J([k-1]a + i, [q-1]b + j)}{\sqrt{\sum_{i=1}^{a} \sum_{j=1}^{b} I([k-1]a + i, [q-1]b + j)^2 \cdot \sum_{i=1}^{a} \sum_{j=1}^{b} J([k-1]a + i, [q-1]b + j)^2}}$

   b. Establish the change map CM from the correlation coefficients: for x = 1..m and y = 1..n,

$CM(x, y) = \begin{cases} 1 & \text{if } CH(x \backslash a + 1, \, y \backslash b + 1) < T \\ 0 & \text{otherwise} \end{cases}$

   where $\backslash$ denotes integer division.
3. Establish the potential fire region PFR from the change map CM using the color model ColorF(x, y):
   a. Detect the potential fire mask PFM: for x = 1..m and y = 1..n,

$PFM(x, y) = \begin{cases} 1 & \text{if } CM(x, y) = 1 \text{ and } ColorF(x, y) = 1 \\ 0 & \text{otherwise} \end{cases}$

   b. Establish the potential fire region PFR = {(x, y) : PFM(x, y) = 1}.
4. Verify the flickering property of PFR using FlickerF(PFR): if PFR is empty, go to step 6; otherwise compute FF = FlickerF(PFR).
5. If FF = 1 then A = TRUE.
6. Return A.

4.1.3 Experiments

To evaluate this proposal, the EVFD algorithm, videos consisting of indoor and outdoor scenes are used; the video resolution is 320×240.
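The block correlation in step 2 of EVFD can be sketched as follows; the change threshold `t` and the comparison direction (low correlation means change) are assumptions, not the thesis's tuned settings:

```python
def block_correlation(block_i, block_j):
    """Normalized cross-correlation (cosine form) of two equally sized
    2-D intensity blocks, as in the CH(k, q) formula."""
    num = si = sj = 0.0
    for row_i, row_j in zip(block_i, block_j):
        for u, v in zip(row_i, row_j):
            num += u * v
            si += u * u
            sj += v * v
    if si == 0.0 or sj == 0.0:
        return 0.0
    return num / (si * sj) ** 0.5

def block_changed(block_i, block_j, t=0.95):
    """One entry of the change map CM: flag the block as changed when
    the correlation drops below the threshold (an assumed rule)."""
    return block_correlation(block_i, block_j) < t
```

Identical blocks give a correlation of exactly 1.0, so static background regions are never flagged as changed.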
For comparison, the model by T. H. Chen et al. (denoted Chen) and the model by O. Gunay et al. (denoted Gunay) are implemented (color detection and motion detection only). The results on the tested videos are shown in Tables 4 and 5, and the evaluation of time performance and the total number of falsely detected frames in Table 6.

Table 4. The first frame detected as having fire, in comparison with EVFD
Video | First frame having fire | Chen | Gunay | EVFD
Indoor with fire | 2 | | |
Indoor with fire | 3 | | |
Indoor with fire | 104 | 2 | 105 |
Indoor no fire | 12 | | |
Outdoor with fire | 2 | | |
Outdoor with fire | | | |
Outdoor with fire | 2 | | |
Outdoor no fire | 2 | | |

Table 5. The number of frames detected as having fire, in comparison with EVFD
Video | Frames having fire | Chen | Gunay | EVFD
Indoor with fire | 150 | 149 | 142 | 150
Indoor with fire | 150 | 142 | 101 | 149
Indoor with fire | 23 | 88 | 149 |
Indoor no fire | 98 | 0 | |
Outdoor with fire | 150 | 149 | 74 | 150
Outdoor with fire | 150 | 149 | 100 | 150
Outdoor with fire | 27 | 89 | 137 |
Outdoor no fire | 150 | 149 | 49 |

Table 6. The evaluation of time and total number of falsely detected frames
Method | Time performance per frame (ms) | Total falsely detected frames
Chen | 23.4 | 357
Gunay | 39.3 | 487
EVFD | 20.0 | 273

4.2 Early fire detection in weak-light environments

In this section, the problem of vision-based fire detection in a weak-light environment (WLE) is considered. Under this condition, the flame is small and brighter than the background; the fire region has a high contrast with its surroundings and exhibits a structure of nested rings of colors.

4.2.1 The weak-light environment

Assume p(r) is the normalized gray-level histogram of image I, $p(r) = n_r / n$, where $r \in [0, \ldots, L-1]$, L is the number of gray levels in the image, $n_r$ is the total number of pixels with gray level r, and n is the total number of pixels in the image.
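The normalized gray-level histogram $p(r) = n_r / n$ of Section 4.2.1 can be computed as below; since the abstract breaks off before stating the weak-light criterion, the dark-mass test here is only a plausible assumption, with illustrative `dark_cutoff` and `min_dark_mass` values:

```python
def normalized_histogram(gray, levels=256):
    """p[r] = n_r / n for a flat sequence of gray-level values."""
    values = list(gray)
    n = len(values)
    p = [0.0] * levels
    for v in values:
        p[v] += 1.0 / n
    return p

def looks_weak_light(gray, dark_cutoff=64, min_dark_mass=0.8):
    """Assumed criterion: most of the histogram mass lies in dark levels."""
    p = normalized_histogram(gray)
    return sum(p[:dark_cutoff]) >= min_dark_mass
```

In a weak-light scene, a small bright flame contributes only a thin tail of high gray levels, so the bulk of the histogram mass stays below the dark cutoff.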

