MINISTRY OF EDUCATION AND TRAINING          MINISTRY OF DEFENSE
ACADEMY OF MILITARY SCIENCE AND TECHNOLOGY

Nguyen Van Hung

RESEARCH ON IMAGE PROCESSING METHODS OF DETECTING AND TRACKING SEVERAL MILITARY TARGETS TO EMPLOY FOR CONTROLLING AUTONOMOUS WEAPON SYSTEMS

Major: Mechanical Engineering
Code: 62 46 01 10

SUMMARY OF MATHEMATICS DOCTORAL THESIS

Ha Noi - 2017

The work was completed at: ACADEMY OF MILITARY SCIENCE AND TECHNOLOGY, MINISTRY OF DEFENSE

PhD supervisors:
  Assoc. Prof., Dr. Xuat Van Nguyen
  Dr. Thanh Chi Nguyen

Reviewer 1: Assoc. Prof., Dr. of Sc. Linh Tran Hoai
Reviewer 2: Assoc. Prof., Dr. Ha Le Thanh
Reviewer 3: Assoc. Prof., Dr. Dung Pham Trung

The dissertation will be defended before the Academy-level thesis evaluation board at the Academy of Military Science and Technology - Ministry of Defense, at ..... day ..... month ..... 2017.

The dissertation can be found at the libraries:
- Library of the Academy of Military Science and Technology - Ministry of Defense
- National Library of Vietnam

OPENING

Monitoring systems for automatic target detection are an important part of many high-technology weapon systems. They increase the effectiveness of weapon systems and reduce the operations that are usually performed by hand, especially in harsh environmental conditions. Research and development of monitoring systems for automatic target detection play an important role in upgrading old weapon systems and provide grounds for developing new-generation weapon systems. The dissertation "Research on image processing methods of detecting and tracking several military targets to employ for controlling autonomous weapon systems" is intended to meet these needs of our army.

The main purpose of this dissertation is to develop a system for automatic detection and tracking of military targets (tanks, vehicles, ...) using image processing technology and recognition algorithms to control weapon systems.

The main object of this research is the monitoring systems of different weapon systems whose targets are ground military targets such as tanks and other motor vehicles.
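The detect-then-track structure of such a monitoring system can be sketched as follows. This is a hypothetical skeleton, not the thesis implementation: `detect`, `track`, and the toy bounding boxes are placeholders standing in for the methods developed in the later chapters.

```python
from typing import Iterable, Optional, Tuple

# Hypothetical skeleton of a detect-then-track loop: detection runs on
# full frames until a target is found, after which the (cheaper) tracker
# takes over; if the tracker loses the target, detection would resume.
BBox = Tuple[int, int, int, int]  # (x, y, width, height)

def run(frames: Iterable, detect, track) -> list:
    positions = []
    target: Optional[BBox] = None
    for frame in frames:
        if target is None:
            target = detect(frame)         # full-frame detection
        else:
            target = track(frame, target)  # local search near last position
        positions.append(target)
    return positions

# Toy run: "detection" finds the target in frame 2, tracking then holds it.
frames = range(5)
detect = lambda f: (10, 10, 4, 4) if f >= 2 else None
track = lambda f, prev: prev
result = run(frames, detect, track)
```

The design point the skeleton illustrates is the hand-over: once detection reports a target, the per-frame cost drops because tracking only searches near the previous position.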
The research area of this dissertation focuses on resolving the following questions:
- What are the targets of particular types of weapons? What special characteristics do they have in comparison with other background objects?
- How to automatically detect and recognize targets in images under different image-collection conditions?
- How to correctly track the detected target in real time?

Scientific significance of the dissertation:
- Proposes a solution for developing a monitoring system for automatic target detection, using image processing technology and intelligent recognition algorithms.
- Proposes a new automatic military target detection method using video image sequences.
- Proposes a new automatic military target tracking method using video image sequences.

Practical significance of the dissertation:
- This dissertation serves as an important theoretical basis for developing military target monitoring and tracking systems, in order to improve or upgrade old-generation weapon systems and to develop new high-technology weapon systems.
- The study in this dissertation also offers a solution for replacing the monitoring components in high-technology weapon systems.

Content of the dissertation: opening, 04 chapters, conclusion and recommendation, published works, references.

CHAPTER 1. OVERVIEW OF TARGET DETECTION AND TRACKING FROM CONSECUTIVE FRAMES

This chapter presents an overview of target detection and tracking by video image sequences, and approaches for solving the problem of target detection and tracking by video sequences.

1.1 Target detection from consecutive frames
1.1.1 Target detection problem

The input of this problem is the images collected from cameras, and its output is the target regions within the input image frames. In an automatic target detection and tracking system, target detection is the first problem that needs to be solved; it is considered the first step in the process of target tracking.

Phase 1: Determination of mathematical models for target representation (template images -> mathematical model for target representation by image features).
Phase 2: Identification of the target in the input images (input images -> image features -> specify the target).

Figure 1.1: Diagram of the target detection procedure

This diagram is based on the published research on target detection methods. To specify the target in the input images, each of these methods has two phases, as shown in Figure 1.1:

Phase 1: Determination of mathematical models for target representation. This phase is performed on the template images to build the mathematical model for target representation by image features.

Phase 2: Identification of the target in the input images. This phase detects the target image regions in the input images using the mathematical model for target representation identified in Phase 1.

The following sections present in detail the types of image features and the target detection methods with different mathematical models.

1.1.2 Image features

This section presents the types of image features commonly used to represent objects in automatic target detection. There are three main types of features:

1.1.2.1 Color features

Color features are among the important features characterizing the surface of the target. The color feature of a pixel P is a vector f = (f1, f2, ..., fn), where fi is the value of color component i at position P in a certain color space or in several different color spaces. For an image region R, the color feature commonly used to represent R in target detection is the color histogram.

1.1.2.2 Textural features

Textural features indicate the relationship of a group of neighboring pixels (a pixel with its neighbors), which reflects the local structure of the object. Textural features commonly used in target detection include: i) gradient features; ii) local binary patterns; iii) Haar-like features; iv) frequency spectrum features.

1.1.2.3 Shape features

Shape is an important feature of a target, widely used in applications for target detection and tracking by input video images.
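Of the three feature types above, the color histogram of Section 1.1.2.1 is the simplest to illustrate. The sketch below assumes an 8-bins-per-channel RGB histogram and a histogram-intersection similarity; both choices are illustrative, not taken from the thesis.

```python
import numpy as np

def color_histogram(region, bins=8):
    """Normalized 3D RGB color histogram of an image region.

    region: array of shape (N, 3) holding the RGB values (0..255) of the
    pixels belonging to the region. Returns a flat vector of length
    bins**3 that sums to 1.
    """
    hist, _ = np.histogramdd(region.astype(float),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / hist.sum()

# Two regions drawn from the same color range give similar histograms,
# compared here with a simple histogram-intersection measure in [0, 1].
rng = np.random.default_rng(0)
a = rng.integers(100, 140, size=(500, 3))  # greenish-gray pixels
b = rng.integers(100, 140, size=(500, 3))
similarity = np.minimum(color_histogram(a), color_histogram(b)).sum()
```

Because the histogram discards pixel positions, it is invariant to the arrangement of the pixels inside the region, which is part of why it is popular for region-level color description.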
Based on how they are computed, shape features are classified into two main categories: i) contour-based shape features, computed from the pixels located on the contour of the object; ii) region-based shape features, computed from the pixels located both on the contour and inside the contour of the target.

1.1.3 Target detection methods

Some authors classify target detection methods based on image features [52], [106], while many other authors classify them based on the mathematical model for target representation [87], [107], [108]. In this thesis, we rely on both the image features and the mathematical model to classify these methods into four categories as follows:

1.1.3.1 Target detection based on image segmentation

The methods of this group identify the target image regions in the input images by using image segmentation techniques. These segmentation techniques exploit information (color and texture) at the pixel level to separate the input images into different image regions containing pixels with similar features.

Remarks: Generally, segmentation-based methods have high accuracy in target detection and a simple process for learning the parameters of the target representation model. However, they have some shortcomings:
- Slow calculation, because the image segmentation process has to consider all the possibilities of each pixel.
- The efficiency of target detection depends greatly on the image segmentation technique.

1.1.3.2 Target detection based on motion

This group solves target detection by searching for the moving image regions [52], [106]. There are two main approaches to determining the moving image regions: 1) the optical-flow-based approach; 2) the background-model-based approach.

Remarks:
- Optical-flow-based algorithms adapt to changes of objects in the background, but have low accuracy when the target appearance is changed by different lighting conditions or by sudden changes of speed and direction of movement.
- Background-model-based algorithms have significantly high computing
speed and efficiency when there are few changes in the background. However, these algorithms have low accuracy in target detection when the background objects change strongly.

1.1.3.3 Target detection based on classifiers

The target detection methods of this group are based on supervised learning classifiers to separate the target image regions from the background pixels in the input images [125], [126], [127], [128], [129], [130]. The supervised learning classifiers widely used in target detection include: i) neural networks; ii) SVM (Support Vector Machines); iii) AdaBoost.

Remarks: The above methods are easy to implement and highly effective in cases where the image features of the target differ strongly from those of the background objects. Their main disadvantages are:
- They require a sufficiently large template data set of targets and background objects for training, which is very difficult to collect.
- They have low accuracy in target detection when the difference between the image features representing the target and those representing the other background objects is small.

1.1.3.4 Target detection based on template matching

In the methods of this group [131], [132], [133], [134], the target identification algorithm has two main steps:
- Step 1: Construct the specification sets for the target or its components, such as the template image features, from the learning data sets.
- Step 2: Scan the input images with a sliding window; the image region in each window is represented by its image features and compared to the standard feature set of the target using similarity measurements. If the value of the measurement is large, that image region is the target; otherwise it is a background object.

Remarks: The above methods are used relatively widely because of their high accuracy. Their effectiveness depends largely on the set of features representing the target. Their biggest disadvantage is slow calculation, especially when the set of features representing the target is large in size and number.

1.2 Target tracking
1.2.1 Target tracking problem

Target tracking
is the problem of identifying the motion trajectory of one or more targets over time by localizing the target in each frame [52]. The main elements of target tracking are:
- Inputs: the image sequences over time; information about the target; information about the background objects.
- Outputs: the position or motion trajectory of the target in the input images.

1.2.2 Target tracking methods

Based on the features for target representation and the models for demonstrating the motion trajectory of the target [52], target tracking methods are classified into three main types as follows:

1.2.2.1 Point-based target tracking

Point-based target tracking methods represent the target in the image as a point (the center point of the target) or a set of points (using features on the target contour). There are many algorithms for point-based target tracking, divided into two categories: deterministic algorithms and statistical algorithms.

Remarks: The advantage of point-based target tracking algorithms is their fast calculation, suitable for applications where the motion speed and trajectory of the target change slowly over time. However, these algorithms have low accuracy when the speed and trajectory of the target change constantly. On the other hand, using the information of only a few pixels to identify the target is sensitive to background noise.

1.2.2.2 Surface feature-based tracking methods

The methods of this class approximate the target image region as a rectangular or elliptical one and use surface features (color and texture features) to represent the target. Most traditional tracking methods use grayscale information to represent the target and cross-correlation matching to identify it. Instead of using only grayscale values, recent target tracking methods use a combination of many different surface features.

Remarks: Surface feature-based tracking methods solve target tracking similarly to motion-based target detection, so they adapt to changes in the speed and direction of
movement of the target. However, the accuracy and calculation speed of these methods depend largely on the selection of the image features for target representation. The accuracy of target tracking is low if only color or grayscale features are used when the lighting conditions in the background change; if the features are too complicated, the computing speed becomes slower.

1.2.2.3 Shape-based tracking methods

These methods can be divided into two main categories:
- The first category uses a shape specification to represent the target as a template, built in the first frame from the detected target, and then applies template matching techniques to track the target in the next frames.
- The second category represents the shift in space of the target contours between consecutive frames in a state-space model.

Remarks: The shape-based tracking methods have high accuracy. However, they also have high complexity and slow computing speed.

1.3 Characteristics of military target detection and tracking

There are some outstanding characteristics of military target detection and tracking compared with civilian target detection and tracking:
- Firstly, the colors of military targets are often similar to those of background objects such as grass and tree regions, making it difficult to separate the military targets from the background objects in the images.
- Secondly, military target detection and tracking are often conducted at distances from hundreds of meters to kilometers; therefore the collected images often contain many background objects with noise.
- Thirdly, a military target detection and tracking system is required to have real-time calculation speed and high accuracy.

The above characteristics are also the requirements for solving the problem of target detection and tracking in this thesis.

1.4 The approach of the thesis
1.4.1 Block diagram of military target detection and tracking

The military target detection and tracking system is designed to consist of three major components, as shown in the following block diagram:

[Video image sequences] -> [Target detection] -> [Target tracking] -> [Target location for each image]

Figure 1.5:
Block diagram of military target detection and tracking

Image acquisition block: this block includes dedicated cameras capable of capturing distant scenes with high image quality.

Target detection block: based on the video image sequences collected from the image acquisition block, this block is responsible for identifying the presence of military targets (people, tanks and military vehicles) in the scene. The output of this block is the input to the initial step of the target tracking block.

Target tracking block: once the output of the target detection block indicates the presence of military targets in the scene, the system starts to track the target in the next video image sequence, and the target detection block stops working.

1.4.2 Orientations for the tasks of the thesis

The image-based target detection and tracking methods proposed in this thesis are therefore required to solve the above-mentioned difficulties. The main tasks of the thesis are defined as:

Task 1: Conduct research on and develop a method to detect military targets effectively, with fast computing speed, from remotely collected video image sequences.

Task 2: From the target image regions identified in the first image sequences, conduct research on and develop a military target tracking method with high computing speed and accuracy in the next video image sequences.

1.4.3 Solving orientations for military target detection and tracking problems

(d) Target mask and target found on the input image
Image 2.3. The result of each step in Algorithm 1

2.2.2.2 Image segmentation algorithm

Using the graph-based segmentation algorithm given in [68], we segment the source images into homogeneous color regions. This algorithm is highly accurate and fast. The result of the segmented regions is shown in Image 2.3c.

2.2.2.3 Image feature extraction

Extraction of color features: the color feature of an image region Sk of S is a vector ck = {rk, gk, bk}, where rk, gk and bk are the average Red, Green and Blue values of all image points in the region Sk. To measure the color similarity of an image region X with the object class O to be found, we use the
function g(X,O) below:

    g(X,O) = (1/|X|) * Σ_{R∈X} pdf(c_R | O)    (2.11)

In formula (2.11), |X| is the number of homogeneous color regions in X, and pdf(c_R|O) is the class-conditional probability density function of the color vector c_R of the target in class O, which was identified using the data of the learning patterns.

Extraction of shape features: the shape features of the target are identified using the shape context proposed in [69]; they are invariant whether or not the object is rotated, moved, distorted or changed in proportion. The shape feature s of a target contains the shape contexts of all the image points located on the outer border of the object. Take a target with K sample points p_1, p_2, p_3, ..., p_K on the boundary. The shape context of point p_i is a histogram of the relative polar coordinates between p_i and the remaining K-1 points on the boundary:

    h_i(k) = #{ q ≠ p_i : (q − p_i) ∈ bin(k) }    (2.13)

The difference between the shape contexts of two points p and q is calculated as:

    C(p,q) = (1/2) * Σ_{i=1}^{M} [h_p(i) − h_q(i)]^2 / [h_p(i) + h_q(i)]    (2.14)

Let T = {T_1, T_2, T_3, ...} be the set of patterns of the target. For each image region X (it may contain more than one homogeneous color region), the shape feature s_X of X is the set of shape contexts of the sample points located on the outer boundary of X. The shape difference between an image region X and a sample pattern T of the target is calculated as:

    D(s_X, T) = (1/|s_X|) * Σ_{p∈s_X} min_{q∈T} C(p,q)    (2.15)

In formula (2.15), |s_X| is the number of sample points in X.

The function for calculating the shape feature similarity of an image region X with object class O is:

    s(X,O) = exp( −δ * min_{T∈T} D(s_X, T) )    (2.16)

In formula (2.16), δ is a proportion parameter, which was defined using the sample pattern data.

Combining the image features: after extracting the color and shape features above, we calculate the function f(X,O) in (2.4) and (2.5), then measure the similarity between the image region X and object class O as:

    f(X,O) = α * g(X,O) + β * s(X,O)    (2.17)

where the parameters α and β are positive weights (α, β > 0) used to measure the
significance of each image feature when measuring the similarity between the image region X and object class O.

2.2.2.4 Algorithm for the optimal image object

As shown above, to find the set Z in formula (2.4) we could use exhaustive search over all the subsets of S. However, this method requires a long calculation time, and its complexity is O(2^|S|), where |S| is the number of components of S. To reduce the calculation time, we suggest the following algorithm for adding or removing optimal regions:

Algorithm 3: Region adding/removing algorithm
Input: set of homogeneous color regions S = {S_1, S_2, ..., S_L}
Output: image region Z
    While Temp = Right do
        If f(Z ∪ S_i, O) > f(Z, O) then Z ← Z ∪ S_i
        Else if f(Z − S_i, O) > f(Z, O) then Z ← Z − S_i
        Else Temp ← Wrong
        End if
    End while
Algorithm complexity: O(2 log L), where L is the number of objects in the set of homogeneous color regions.

In Algorithm 3, at each step of adding or removing a region S_i to/from Z, the interconnection of the sets {Z ∪ S_i} and {Z − S_i} is checked. A set of components is considered interconnected if all its components connect to make one bigger image region.

2.3 Experiments and results
2.3.1 Image data source

To assess the proposed method, we collected 03 video data sets with 03 different military targets: tank, truck and UAZ-469. In particular, the video data for the tank contains 102 files, the video data for the military truck contains 128 files and the video data for the UAZ-469 contains 101 files. Each video file corresponds to one scene, with approximately 3000 frames. For each type of target, we chose 2/3 of the images to build the training data and 1/3 to assess the result. For each data image, we picked the target manually to make the target data file; these ground_truth files are used to assess the object detection methods.

2.3.2 Method used to assess the effectiveness of target detection

To assess the effectiveness of target detection for each image, we compared the region where the target is detected by the algorithm with the region where the target was marked manually in the ground_truth data file. 03 measurements, recall, precision and F-measure, were used to assess the effectiveness of the algorithm.

2.3.3 Results
2.3.3.1 Effectiveness analysis of using ROI extraction

The proposed method is developed from our earlier work in [P7]. The method used in [P7] is the same as the proposed method but without ROI extraction. To assess the effectiveness of using ROI extraction, the method in [P7] was implemented and tested on the 03 data sets. The results are shown in Tables 2.1, 2.2 and 2.3.

Table 2.1. Result of target detection tested on the UAZ-469 data set
No  Method                        Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   Without ROI extraction [P7]   78.2        92.5           84.7           1.25
2   Proposed method               90.3        97.6           93.8           0.38

Table 2.2. Result of target detection tested on the military truck data set
No  Method                        Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   Without ROI extraction [P7]   73.1        75.8           73.9           1.28
2   Proposed method               89.3        96.6           92.8           0.41

Table 2.3. Result of target detection tested on the tank data set
No  Method                        Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   Without ROI extraction [P7]   83.1        90.2           86.5           1.26
2   Proposed method               92.2        95.9           94.0           0.39

2.3.3.2 Comparison with other methods

The proposed method is compared with 04 other typical methods widely used for target detection in video image sequences: 1) the basic background subtraction method (BBS) [101]; 2) target detection based on the Single Gaussian Model (SGM) [102]; 3) target detection based on Multiple Gaussian Models (MGM) [103]; 4) target detection based on the Lehigh Omnidirectional Tracking System (LOTS) [104]. The results of testing on the 03 data sets are shown in Tables 2.4, 2.5 and 2.6.

Table 2.4. Result of target detection tested on the UAZ-469 data set
No  Method            Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   BBS [101]         68.1        75.6           71.7           0.15
2   SGM [102]         78.1        82.8           80.4           0.19
3   MGM [103]         86.6        88.5           87.5           0.27
4   LOTS [104]        88.7        90.8           89.7           0.25
5   Proposed method   90.3        97.6           93.8           0.38

Table 2.5. Result of target detection tested on the military truck data set
No  Method            Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   BBS [101]         58.1        65.6           61.6           0.17
2   SGM [102]         64.1        79.8           71.1           0.20
3   MGM [103]         76.6        74.5           75.5           0.26
4   LOTS [104]        78.7        85.8           82.1           0.28
5   Proposed method   89.3        96.6           92.8           0.41
Table 2.6. Result of target detection tested on the tank data set
No  Method            Recall (%)  Precision (%)  F_measure (%)  Time (s)
1   BBS [101]         61.1        68.6           64.6           0.18
2   SGM [102]         68.1        81.8           74.3           0.21
3   MGM [103]         74.6        79.5           77.0           0.28
4   LOTS [104]        81.7        87.8           84.6           0.26
5   Proposed method   92.2        95.9           94.0           0.39

Conclusion: In Chapter 2 a new method is proposed that is highly effective for military target detection in video frames. The scientific contributions of this method are:
- A method for target detection using ROIs.
- A mathematical model for target detection from homogeneous color regions (superpixels), using a combination of color and shape features.
- An optimal algorithm for target detection from the homogeneous color regions.

CHAPTER 3: TARGET TRACKING BY USING ONLINE LEARNING OF SAMPLE FEATURES

3.1 Introduction

The military target tracking problem can be solved by using the target detection algorithm in Chapter 2. Based on the target position in image frame t-1 and the maximum movement speed of the target, we identify the target area contained in the region W of frame t. Then, by applying the algorithm in Chapter 2, the target is determined from the image regions of similar color in W. However, the calculation speed of this method is not fast enough to locate the position of the targets in all collected frames.

This chapter presents a new target tracking method. The proposed method combines the advantages of the object tracking methods based on the target's shape and the target's surface features. Specifically, the proposed method uses shape features and color features to represent the target. The position of the target in the new image frames is located by pattern matching techniques. The samples representing the image features of the target are updated online from the target tracking results in each frame. Moreover, to increase the calculation speed, this thesis uses the target motion features (e.g., movement speed) and a probabilistic classification model to reduce the search space for the target in the new image frame, instead of searching the whole space.

3.2 Proposed method
The proposed method is described in the following algorithm:

Algorithm 4: Target tracking algorithm based on online learning of sample features
Input: video frame Ft; position Lt-1 of the target region in video frame Ft-1
Output: position Lt of the target in video frame Ft
1. Conduct image preprocessing for Ft.
2. Define the ROI from the target position Lt-1 for image frames Ft-1 and Ft.
3. Calculate or update the class-conditional probability density functions:
   3.1 Calculate or update the function pdf(c|O) based on the pixels belonging to the target in the ROI of frame Ft-1.
   3.2 Calculate or update the function pdf(c|non_obj) based on the pixels not belonging to the target in the ROI of frame Ft-1.
4. Calculate or update the sample image features from the target area of frame Ft-1.
5. Extract the pixels belonging to the target in the ROI of Ft (called POIs).
6. Extract the image features for each POI position.
7. Define the target position Lt by template matching of the image features.
Algorithm complexity: O(n*m*Nobj*Mobj), where n*m is the size of the input frame and Nobj*Mobj is the size of the target image.

3.2.1 Image preprocessing

To minimize the impact of light sources, we use homomorphic filters for the preprocessing step, as shown in Figure 3.2:

f(x,y) -> Log -> DFT -> H(u,v) -> IDFT -> exp -> g(x,y)

Figure 3.2. Homomorphic filter block diagram

3.2.2 Defining POI positions

The process of defining the pixels belonging to the target, the POIs (points of interest), consists of 02 major steps: 1) determine the ROI image region where targets can exist, based on the maximum movement speed; 2) extract the POIs in the ROI by a classification technique.

3.2.2.1 Defining the ROI region

The ROI region is defined as a square or circular image area whose center is the center of the target position in the preceding image frame, frame t-1. In the thesis, the ROI is a square with the distance R from the center to the edge given by (3.3), where Vm is the maximum movement speed of the target and Δt is the time between two consecutive frames. Figure 3.4 shows the ROI defined for frame t, based on the target identified in image frame t-1.

    R = Vm * Δt    (3.3)

(a) Target defined in frame t-1    (b) ROI region (the extracted ROI) defined in frame t

Figure 3.4:
ROI extraction example

3.2.2.2 POI extraction

Defining the POI positions in the ROI region is carried out by a classification technique. pdf(c|O) and pdf(c|non_obj) are the class-conditional probability density functions of the color (in the R, G, B color space) for the two classes: the target and the background. These functions are estimated by the 3D color histogram calculation method from the pixels in the target and the pixels in the background of the ROI in the previous frames t-k, ..., t-1. Initially, these functions are calculated from the ROI using the results of the target detection method in Chapter 2. Then they are updated from the ROI determined by target tracking in the previous frame.

(a) ROI extracted in frame t    (b) Mask of POIs extracted by (3.4)    (c) Mask of POIs after binarization

Figure 3.6. Illustration of the POI extraction result
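The POI extraction step above, estimating pdf(c|O) and pdf(c|non_obj) as 3D color histograms and keeping the ROI pixels that are more likely under the target model, can be sketched as follows. The bin count, the toy colors and the plain likelihood comparison are illustrative assumptions, not the thesis's exact formula (3.4).

```python
import numpy as np

BINS = 8  # 8x8x8 bins over the RGB cube, a coarse 3D color histogram

def hist3d(pixels):
    """Estimate pdf(c|class) as a normalized 3D RGB histogram."""
    h, _ = np.histogramdd(pixels.astype(float), bins=(BINS,) * 3,
                          range=((0, 256),) * 3)
    return h / max(h.sum(), 1)

def bin_index(pixels):
    """Map RGB pixels of shape (N, 3) to their 3D histogram bin indices."""
    return tuple((pixels // (256 // BINS)).astype(int).T)

def extract_poi_mask(roi, pdf_obj, pdf_bg):
    """Mark as POI every ROI pixel whose color is more likely under the
    target model than under the background model."""
    flat = roi.reshape(-1, 3)
    idx = bin_index(flat)
    mask = pdf_obj[idx] > pdf_bg[idx]
    return mask.reshape(roi.shape[:2])

# Toy example: dark "target" pixels vs. light-gray "background" pixels.
rng = np.random.default_rng(1)
target_px = rng.integers(30, 70, size=(400, 3))
background_px = rng.integers(180, 220, size=(400, 3))
pdf_obj, pdf_bg = hist3d(target_px), hist3d(background_px)

roi = np.full((4, 4, 3), 200, dtype=int)  # background-colored ROI...
roi[1:3, 1:3] = 50                        # ...with a target-colored patch
poi_mask = extract_poi_mask(roi, pdf_obj, pdf_bg)
```

In the online setting of Algorithm 4, the two histograms would be re-estimated (or incrementally updated) from the tracking result of each previous frame rather than fixed once, which is what lets the classifier follow gradual appearance changes.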