Characters Extraction for HanNom Stele Images
H.C.M CITY UNIVERSITY OF TECHNOLOGY
FACULTY OF INFORMATION TECHNOLOGY

MASTER'S THESIS

SUPERVISORS: Prof. Dr. MARC BUI, Prof. Dr. CAO HOANG TRU
STUDENT: HO XUAN NGUYEN
CLASS: Master 2005
STUDENT ID: 00705147

H.C.M City, July 2007

ABSTRACT

Text is a very powerful index for content-based image and video indexing, for example inscriptions in stele images and text in video frames. Although many systems and algorithms have been proposed for localizing and extracting text, they are not suitable for our specific problem (text in stele images), because those algorithms are centered on other particular databases. To obtain this information, we therefore propose an automatic system that detects and localizes text and extracts it into characters associated with metadata, with good performance. Our system takes a grayscale image as input, produces HanNom characters as output, and consists of four stages. First, a noise reduction stage based on a morphological operator enhances the input image. Next, a coarse-to-fine text detection and localization stage is applied, using a discrete Wavelet transform (the Haar DWT), median filtering, and thresholding techniques; for the fine detection, a combination of connected component analysis and morphological operators is used. After eliminating non-text components, a density-based region growing algorithm and a splitting line framework collect all single text-like lines. In the next stage, text line verification, we propose a two-step algorithm to remove fake text lines: a thresholding verification step and a neural network-based verification step. Finally, a character segmentation stage applies a genetic algorithm to find the best non-linear segmentation path when touching or kerned characters remain after projection profile-based segmentation.

ACKNOWLEDGEMENTS

Most of all I would like to express my greatest appreciation to my supervisor, Prof. Dr. Cao Hoang Tru,
for his supervision and kind help during my study. I would like to thank him for guiding me into this research area and sharing with me many insightful experiences on doing good research, and especially for reading through the thesis and suggesting corrections. I would like to thank Prof. Dr. Duong Nguyen Vu for his support and evaluation, and Prof. Dr. Marc Bui for the useful discussions on a variety of image processing and other topics. I also would like to thank Prof. Dr. Christian Wolf for his invaluable help with many technical problems. I am also grateful to Mr. Tran Giang Son, through whom I first became acquainted with the HanNom character recognition area. Sincere thanks go to all my friends and colleagues for helping me run further experiments on the large databases. Finally, I would like to thank all my family for passing on to me their passion for learning and research: thank you for your encouragement throughout my life, and for your love and unconditional support in all my doings.

TABLE OF CONTENTS

List of Figures
List of Tables
Chapter 1: Problem Definition
  1.1 Motivation
  1.2 Problem Area
  1.3 Objective and Scope
  1.4 Contributions
Chapter 2: Literature Review
  2.1 Image Enhancement
  2.2 Character Areas Detection and Localization
    2.2.1 Region-based Methods
    2.2.2 Texture-based Methods
    2.2.3 Text Extraction in Compressed Domain
    2.2.4 Other Approaches
  2.3 Character Areas Verification
  2.4 Character Extraction
Chapter 3: Related Background Knowledge
  3.1 Discrete Wavelet Transforms
  3.2 Artificial Neural Networks
  3.3 Genetic Algorithms
  3.4 The Basic Image Processing Techniques
    3.4.1 Morphological Operators for Binary Images
    3.4.2 Projection Profile Analysis
Chapter 4: Text Detection and Character Extraction System
  4.1 Image Enhancement Stage
  4.2 Text Detection and Localization Stage
    4.2.1 Coarse Detection
    4.2.2 Fine Detection
  4.3 Text Line Verification Stage
    4.3.1 Text Line Verification by Thresholding Technique
    4.3.2 Text Line Verification by Neural Networks
  4.4 Character Extraction Stage
    4.4.1 Merging Regions Procedure
    4.4.2 Searching a Non-Linear Segmentation Path
Chapter 5: Experiments and Comparisons
  5.1 Text Localization Evaluation
    5.1.1 Evaluation and Comparison on the Stele Image Database
    5.1.2 Evaluation and Comparison on the Video Frame Database
  5.2 Character Extraction Evaluation
    5.2.1 Evaluation and Comparison on the Stele Image Database
    5.2.2 Evaluation and Comparison on the Video Frame Database
  5.3 Full Evaluation of the Proposed System
Chapter 6: Conclusions and Future Works
  6.1 Conclusions
  6.2 Future Works
Chapter 7: References

List of Figures

Fig 1.1 Fundamental concepts in image processing
Fig 1.2 The outline of our program
Fig 1.3 An illustrated example of character segmentation
Fig 2.1 A system of character extraction
Fig 2.2 The approach of Clark and Mirmehdi [13]
Fig 2.3 The basic segmentation in Hong's approach [25]
Fig 2.4 The fine segmentation in Hong's approach [25]
Fig 3.1 An example of a DWT tree
Fig 3.2 The DWT for image decomposition
Fig 3.3 An artificial neural network
Fig 3.4 The genetic algorithm [15]
Fig 3.5 An example of applying the erosion operator [16]
Fig 3.6 An example of applying the dilation operator [16]
Fig 3.7 An example of projection profile analysis [67]
Fig 4.1 The flow chart of our system for character extraction
Fig 4.2 An example of applying our image enhancement
Fig 4.3 The 2-D Haar DWT for an image [37]
Fig 4.4 The wavelet energy image (a combination of the LH, HH and HL subbands)
Fig 4.5 The thresholded image after applying median filtering and thresholding techniques to the wavelet energy image
Fig 4.6 The flow chart for applying CCA and the opening operator to remove non-text regions
Fig 4.7 The result image after applying CCA and the opening morphological operator
Fig 4.8 The density-based region growing algorithm
Fig 4.9 The text regions found by applying the density-based region growing algorithm
Fig 4.10 The proposed splitting line algorithm
Fig 4.11 An illustrated example of the splitting line framework
Fig 4.12 The output image of the text detection and localization stage
Fig 4.13 The output image of the text line verification stage
Fig 4.14 The proposed character extraction algorithm
Fig 4.15 The SPZ of a non-linear segmentation path
Fig 4.16 The crossover operator for the genetic algorithm
Fig 4.17 The mutation operator for the genetic algorithm
Fig 4.18 The extracted characters (final results) of our system
Fig 5.1 The result images of the text localization in different algorithms (sample image #1 in the stele image database)
Fig 5.2 The result images of the text localization in different algorithms (sample image #2 in the stele image database)
Fig 5.3 The result images of the text localization in different algorithms (sample image #1 in the video frame database)
Fig 5.4 The result images of the text localization in different algorithms (sample image #2 in the video frame database)
Fig 5.5 The results of the character segmentation in different algorithms
Fig 5.6 The result images of the proposed system

List of Tables

Table 4.1 Choosing thresholds for the coarse detection
Table 4.2 The thresholds for CCA to remove noise
Table 4.3 The thresholds for the density-based growing algorithm
Table 4.4 The thresholds for the FilterSmallRegion procedure
Table 4.5 The thresholds for the splitting line framework
Table 4.6 The thresholds for the FilterLargeRegion procedure
Table 4.7 Sample text lines and the number of pixels in different kinds of directions
Table 4.8 Sample text lines and the WWGVD values in different kinds of directions
Table 4.9 The thresholds for the fill factor and the non-direction factor of a text line
Table 4.10 The number of patterns and images for training the neural network
Table 5.1 Experimental results of the text localization stage on the stele image database
Table 5.2 Experimental results of the text localization stage on the video frame database
Table 5.3 Experimental results of the character segmentation stage on the stele image database
Table 5.4 Experimental results of the character segmentation stage on the video frame database
Table 5.5 Experimental results of the proposed system on the two databases

Chapter 1: Problem Definition

1.1 Motivation

The Vietnamese people are accustomed to saying "one remembers the source from which one drinks the water" ("Uống nước nhớ nguồn") [72]. One expression of this tradition is the erection of steles inscribed with the names, birth dates and birth places of doctors and other excellent graduates who took part in examinations since 1442. Many steles written in HanNom characters still stand in the premises of Van Mieu and elsewhere, and some of them have eroded seriously over the years. A plan for preserving the information (inscriptions) on these stele images is motivated by two verses of the following popular folk song ("ca dao") [72]:

The stele of stone erodes after a hundred years
The words of people continue to remain in force after a thousand years
(Trăm năm bia đá mòn / Ngàn năm bia miệng trơ trơ)

For this plan to progress, character extraction needs to be applied to all stele images in order to obtain all the characters before character recognition and storage can start. For this reason, we propose our research work here:
"Development of an image segmentation software dedicated to ancient inscriptions of VietNam", that is, to build an image segmentation algorithm for HanNom stele images. This research has an immediate and motivating application in a philology research framework which aims at providing a modern research tool, using up-to-date information and communication technologies, for the historical knowledge of VietNam.

1.2 Problem Area

The problem of extracting character information from visual clues has attracted wide attention for many years. It is closely tied to image processing techniques, and especially to image segmentation. To give an overview of image processing techniques as well as the image segmentation technique, we briefly summarize the fundamental concepts in this area [21].

Fig 1.1 Fundamental concepts in image processing

A knowledge base can be considered as a form that contains information about a problem domain, compacted into an image processing system. This knowledge may be simple or complex. In the simple case, it merely details the regions of an image in which all relevant information is known to be located, so that the search for information is limited and can be performed in a short time. In other cases, however, it becomes quite complicated, for example an image database of high-resolution satellite images of a region, used in change-detection applications [21].

Image acquisition is the first process in an image processing system, as shown in Fig 1.1. Generally, the image acquisition stage involves pre-processing, such as scaling. In some cases, this process simply amounts to being given an image that is already in digital form.

Image enhancement is among the simplest processes in image processing, but it is an appealing and important area. Usually, this stage is able to bring out detail and make the problem clearer.
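The enhancement stage of the proposed system is built on a morphological operator (the conclusions in Chapter 6 name an erosion). As a rough sketch of how such an operator acts on a grayscale image, and not the thesis's exact implementation or parameters, a flat grayscale erosion can be written with NumPy alone:

```python
import numpy as np

def erode(image, size=3):
    """Grayscale erosion with a flat square structuring element:
    each output pixel is the minimum of its size x size neighborhood.
    Borders are handled by replicating the edge pixels."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

# An isolated bright speckle on a dark background disappears, because
# every 3x3 window around it still contains background pixels.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255
print(erode(img).max())  # -> 0: the speckle is suppressed
```

The dual operator, dilation (a windowed maximum), would instead remove small dark speckles; which one is appropriate depends on whether the noise appears brighter or darker than the strokes.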
Sometimes it just highlights some features or properties of an image, such as contrast or brightness, to provide a good input for the next processing stages.

Image segmentation is considered the most important stage, and a difficult task, in image processing. It has been widely used to analyze and examine the content of an image in order to obtain meaningful knowledge. It is also applied in image classification, with the aim of partitioning an image into a set of disjoint regions whose characteristics (intensity, color and so on) are similar. Given its important role, it can be said that "the more accurate the segmentation, the more likely recognition is to succeed".

After the segmentation stage is done, a representation and description stage almost always follows. In this stage, the data of concern are raw pixel data, constituting either the boundary of a region or all the pixels in the region itself. In either case, a data conversion is needed to transfer the original data into a form suitable for computer processing. The choice of data representation is the first decision, and it is based on the following properties [21]:

• Boundary representation is convenient for problems that relate to external shape characteristics, such as corners and inflections.
• For internal properties (texture, skeletal shape), a regional representation is the suitable choice.

Besides this selection, a specific method for data description must also be chosen so that the features of interest are highlighted.

Recognition is the last stage in the image processing area; its aim is to attach a label to an object, such as "character" or "face", which are among the most interesting subjects now.

After this overview of image processing techniques, we focus deeply on the image segmentation technique, where our research work lies. Besides this, to achieve a good result, we also have to examine many involved
techniques, such as image enhancement, which will be described in more detail in the next sections.

In this research, we use the evaluation method presented by Lienhart and Wernicke [41]. First, we construct the ground truth data, created by hand in the form of text bounding boxes (rectangles). Then we implement the evaluations and comparisons for two types of performance: pixel-by-pixel and box-by-box. For each type of performance, the hit, miss and false hit rates are defined differently, as detailed below [41].

For the pixel-by-pixel based method, we calculate these rates from the number of pixels of the ground truth data and of the detected bounding boxes, as in formula (5.1):

$$\mathrm{hitrate}_{pixel\text{-}based} = \frac{100}{\sum_{g \in G} |g|}\,\Bigl|\bigcup_{g \in G}\bigcup_{a \in A} (a \cap g)\Bigr|$$
$$\mathrm{missrate}_{pixel\text{-}based} = 100 - \mathrm{hitrate}_{pixel\text{-}based} \tag{5.1}$$
$$\mathrm{falsehits}_{pixel\text{-}based} = \frac{100}{\sum_{g \in G} |g|}\Bigl(\sum_{a \in A} |a| - \Bigl|\bigcup_{g \in G}\bigcup_{a \in A} (a \cap g)\Bigr|\Bigr)$$

where A = {a1, a2, ..., aN} and G = {g1, g2, ..., gM} are the sets of pixels representing the automatically created text boxes produced by our system and the ground-truth text boxes (created by hand), of size N = |A| and M = |G|, respectively. |a| and |g| denote the number of pixels in each text box, and a ∩ g is the set of pixels shared by a and g.

For the box-by-box based method, the hit, false hit and miss rates relate to the number of detected boxes that match the ground truth. A text bounding box a created by our system is considered to match a ground truth text bounding box g if and only if the two boxes overlap by at least 80%:

$$\mathrm{hitrate}_{box\text{-}based} = \frac{100}{M}\sum_{g \in G}\max_{a \in A}\{\delta(a, g)\}$$
$$\mathrm{missrate}_{box\text{-}based} = 100 - \mathrm{hitrate}_{box\text{-}based} \tag{5.2}$$
$$\mathrm{falsehits}_{box\text{-}based} = \frac{100}{M}\Bigl(N - \sum_{g \in G}\max_{a \in A}\{\delta(a, g)\}\Bigr)$$

where δ(a, g) = 1 if min(|a ∩ g| / |a|, |a ∩ g| / |g|) ≥ 0.8, and 0 otherwise.

As presented, the stele image and video frame databases are used for evaluation and comparison. The
primary stele image database contains 40 large images (2523×3500) with 23,403 HanNom characters; these images are very diverse, with heavy noise and marblings that look like text. The secondary database consists of 417 images (288×352) extracted from video, with 18,046 Chinese characters, and ranges from text that is easy to localize to text with complex backgrounds that is hard to localize.

Besides these, we implemented two algorithms from the referenced papers [37, 62] for comparison. Further, to demonstrate the effectiveness of the proposed text verification stage, we also show the results of the text region verification algorithm after each step (the thresholding and neural network verifications).

5.1.1 Evaluation and Comparison on the Stele Image Database

Table 5.1 Experimental results of the text localization stage on the stele image database

Table 5.1 shows the results of testing on the stele image database. For our algorithm, the hit rate is 83.6% box-based and 88.7% pixel-based. Meanwhile, the two other algorithms (Wu's algorithm [62] and Liang's algorithm [37]) cannot localize the text regions because of the stele marblings: the contrast between the marbling and the background is similar to the contrast between the text characters and the background. In our algorithm, however, the marblings can be eliminated by using CCA and the morphological operator. The experimental results also show that using both verification steps is better than using only one, which substantiates that our feature selection is good and avoids decreasing the generality of the classifier. Some illustrative examples are shown in Fig 5.1 and Fig 5.2 below (mapped back to the original images).

Fig 5.1 The result images of the text localization in different algorithms
(sample image #1 in the stele image database)

Fig 5.2 The result images of the text localization in different algorithms (sample image #2 in the stele image database)

5.1.2 Evaluation and Comparison on the Video Frame Database

Table 5.2 Experimental results of the text localization stage on the video frame database

For the video frame database, which has high quality, the three algorithms give equally good results. However, to obtain text pixels our algorithm uses the density-based region growing algorithm, which is much better than the morphological algorithms used in the two competing algorithms. Moreover, our algorithm applies the splitting line framework and the text line verification stage to separate multi-line text regions into one-line regions and to verify text lines, which gives efficient results, as in Fig 5.3 and Fig 5.4 (mapped back to the original images).

Fig 5.3 The result images of the text localization in different algorithms (sample image #1 in the video frame database)

Fig 5.4 The result images of the text localization in different algorithms (sample image #2 in the video frame database)

5.2 Character Extraction Evaluation

As for the text localization stage, we must define evaluation criteria for the character segmentation stage. Unfortunately, there is as yet no standard evaluation and comparison method for character segmentation [29]. However, some authors use OCR systems to evaluate and compare algorithm performance, based on the characters that the OCR systems can recognize. For our problem, which involves HanNom characters, using OCR systems is impossible because no OCR system for HanNom characters exists yet. Nevertheless, we can evaluate the performance by applying the
box-by-box based method, like the performance evaluation protocol of the text localization stage. Note, however, that the character segmentation stage uses a non-linear path to segment characters, so a text bounding box a created by our system might not be a rectangle. The hit, false hit and miss rates are then defined as in formula (5.2).

In this section, in order to compare our algorithm with competing ones, we also built two other algorithms based on the referenced papers [25, 67]. Our algorithm and these two algorithms are evaluated and compared on a set of real text lines, excluding false alarms to prevent bias. Moreover, these text lines were extracted by the text localization stage based on our proposed algorithm.

5.2.1 Evaluation and Comparison on the Stele Image Database

Table 5.3 Experimental results of the character segmentation stage on the stele image database

For the stele image database, all algorithms work well because the spaces between characters are clearly defined (black background). However, Zhang's algorithm [67] fails in some specific cases involving touching or kerned characters, which this algorithm does not handle (Fig 5.5).

5.2.2 Evaluation and Comparison on the Video Frame Database

Table 5.4 Experimental results of the character segmentation stage on the video frame database

As analyzed above, the images in the video frame database have complex backgrounds. Therefore, Zhang's algorithm [67] encounters trouble (gives undesired results) when segmenting characters (Fig 5.5).

Fig 5.5 The results of the character segmentation in different algorithms

5.3 Full Evaluation of the Proposed System

In the previous sections, we reported separately the experimental results for each stage: text
localization and character segmentation. Finally, we implement an evaluation of our full system (text localization and character segmentation) using the character-based method, like the performance evaluation protocol of the character segmentation stage. We use the rate definitions in formula (5.2) to evaluate the performance of the proposed algorithms. However, for a complete evaluation, all results from the text localization stage (including false alarms) must be included. Table 5.5 and Fig 5.6 present the experimental results of the proposed system on the two databases.

Table 5.5 Experimental results of the proposed system on the two databases

Fig 5.6 The result images of the proposed system

Chapter 6: Conclusions and Future Works

6.1 Conclusions

Text in images and videos is one of the essential pieces of information that can help us understand their content clearly; automatic extraction of text is therefore useful in the content management area. In this research, we have discussed and presented a system for text localization and character extraction that can automatically treat a huge corpus of stele images and video frames. The system is based on a multi-stage design and has been tested, with promising results (Table 5.5).

First, the proposed system improves the quality of an image by using an erosion morphological operator. As with common image enhancements, this step only needs to be applied to low-quality databases; the enhanced images then go through the text localization stage to determine text regions. In addition, our system not only localizes text regions, but also verifies them and forms them line by line, enabling a fine character extraction process.

The text localization stage is proposed as a coarse-to-fine framework. The coarse detection uses the Haar DWT and formula (4.3) to calculate the wavelet energy in order to
localize text. This stage effectively reduces the complexity of the data processing and produces efficient results. Moreover, in this text localization stage, median filtering and thresholding techniques are applied to improve the quality of the images before they are processed by the fine detection. The fine detection is a combination of connected component analysis and morphological operators that eliminates non-text-like components. At the end of the fine detection, we use the density-based region growing algorithm to group text pixels, and then obtain all single text-like lines with the splitting line framework.

Besides these, the verification stage, with two sub-stages (a thresholding technique and a neural network), is used to verify text regions and gives better results. First, in the thresholding verification, we extract three features of a text line (the fill factor, the non-direction factor and the weighted wavelet gradient vector direction), which are compared with specific thresholds to discard fake text lines. Second, the first-order statistics (in the wavelet domain), the text line periodicity, and the contrasts between text and its background are used as the inputs of the neural network to completely eliminate fake text lines.

Finally, in the last stage of the proposed system, the characters are extracted by finding a non-linear segmentation path using the genetic algorithm and projection profile analysis. To improve the accuracy of this stage, some dimensional heuristics about HanNom (or Chinese) characters are also utilized.

6.2 Future Works

We presented the contributions of this research and reported the effective experimental results in the previous sections. However, given the limited research time and our ability, as presented before, the scope of this research focuses only on images of the same size with a vertical writing style of characters, and concerns images having
a frontoparallel view. Therefore, in the future we plan to address these limitations in the directions listed below:

• First, to detect and localize text characters with different font sizes, we can use a system built on a multi-scale scheme [49]; such a system utilizes multi-scale wavelet features to detect text with different font sizes correctly.
• Second, to localize text with arbitrary orientation or a non-frontoparallel view, we can employ the algorithm in [14], in which an orthogonal bounding rectangle is created for each text component and then refined by moving it, changing its size, and changing its orientation.
• Last, to treat languages other than HanNom or Chinese characters, such as English, we can customize our proposed splitting line framework by analyzing and estimating the projection profiles of text regions to handle changes in character width or height (as between 'd' and 'a').

Finally, as the epilogue of this research, we hope all these limitations and future works can be addressed to achieve a good performance in the near future.

Chapter 7: References

[1] S. Antani, "Reliable Extraction of Text From Video", PhD thesis, Pennsylvania State University, August 2001.
[2] D. Ashlock and J. Davidson, "Genetic algorithms for automated texture classification", The International Society for Optical Engineering, 1997.
[3] D. Ashlock and J. Davidson, "Texture synthesis with tandem genetic algorithms using nonparametric partially ordered Markov models", International Society for Optical Engineering, 1995.
[4] D.G. Bailey, "Detecting regular patterns using frequency domain self-filtering", in Proc. Int. Conf. on Image Processing, volume 1, pages 440-443, Washington DC, USA, 1997.
[5] E.O. Brigham, "The Fast Fourier Transform", Prentice-Hall, 1974.
[6] D. Chen, "Text detection and
recognition in images and video sequences", PhD thesis, École polytechnique fédérale de Lausanne (EPFL), no. 2863, 2003. Dir.: Jean-Philippe Thiran.
[7] P. Chen and C.W. Liang, "Automatic Text Extraction Using DWT and Neural Network", (168 Gifeng E. Rd., Wufeng, Taichung County, Taiwan, R.O.C.), 2001.
[8] P.Y. Chen and E.C. Liao, "A new algorithm for Haar discrete Wavelet transform", IEEE International Symposium on Intelligent Signal Processing and Communication Systems, 21, 24: 453-457, 2002.
[9] D. Chen, J. Luettin and K. Shearer, "A Survey of Text Detection and Recognition in Images and Videos", Institut Dalle Molle d'Intelligence Artificielle Perceptive (IDIAP) Research Report, IDIAP-RR 00-38, August 2000.
[10] D. Chen, J. Odobez and H. Bourlard, "Text Segmentation and Recognition in Complex Background Based on Markov Random Field", Proc. of International Conference on Pattern Recognition, 2002, Vol. 4, pp. 227-230.
[11] C.K. Chui, "An introduction to Wavelets", Academic Press, 1992.
[12] B.T. Chun, Y. Bae and T.Y. Kim, "Automatic Text Extraction in Digital Videos using FFT and Neural Network", Proc. of IEEE International Fuzzy Systems Conference, 1999, Vol. 2, pp. 1112-1115.
[13] P. Clark and M. Mirmehdi, "Finding Text Regions Using Localised Measures", Proceedings of the 11th British Machine Vision Conference, pages 675-684, BMVA Press, September 2000.
[14] D. Crandall, "Extraction of unconstrained caption text from general-purpose video", Master's thesis, The Pennsylvania State University, 2001.
[15] T.A. Duong, "Genetic Algorithm", Course Slides, University of Technology, HCM City, VietNam.
[16] N.V. Duong, "Image Processing Techniques", Course Slides, University of Technology, HCM City, VietNam.
[17] J. Duong, M. Côté, H. Emptoz and C.Y. Suen, "Extraction of text areas in printed document images", ACM Symposium on Document Engineering, 2001: 157-165.
[18] T. Gandhi, R. Kasturi and S. Antani, "Application of Planar Motion
Segmentation for Scene Text Extraction", Proc. of International Conference on Pattern Recognition, 2000, Vol. 1, pp. 445-449.
[19] J. Gao and J. Yang, "An Adaptive Algorithm for Text Detection from Natural Scenes", Proceedings of the 2001 IEEE Conference on Computer Vision and Pattern Recognition, December 2001.
[20] J. Gllavata, R. Ewerth and B. Freisleben, "Text detection in images based on unsupervised classification of high-frequency Wavelet coefficients", Proc. International Conference on Pattern Recognition (ICPR'04), pp. 425-428, Aug. 2004.
[21] R.C. Gonzalez and R.E. Woods, "Digital image processing", Addison Wesley World Student Series, 1994.
[22] Y.M.Y. Hasan and L.J. Karam, "Morphological Text Extraction from Images", IEEE Transactions on Image Processing, (11) (2000) 1978-1983.
[23] H. Hase, T. Shinokawa, M. Yoneda, M. Sakai and H. Maruyama, "Character String Extraction by Multi-stage Relaxation", Proc. of ICDAR'97, 1997, pp. 298-302.
[24] H. Hase, T. Shinokawa, M. Yoneda and C.Y. Suen, "Character String Extraction from Color Documents", Pattern Recognition, 34 (7) (2001) 1349-1365.
[25] C. Hong, G. Loudon, Y.M. Wu and R. Zitserman, "Segmentation and Recognition of Continuous Handwriting Chinese Text", No. 2, March 1998, pp. 223-232.
[26] X.S. Hua, X.R. Chen, L. Wenyin and H.J. Zhang, "Automatic Location of Text in Video Frames", Workshop on Multimedia Information Retrieval (MIR 2001), October 5, Ottawa, Canada, 2001.
[27] A.K. Jain, "Statistical pattern recognition: a review", IEEE Transactions on PAMI (2001) 4-37.
[28] K. Jung, "Neural network-based Text Location in Color Images", Pattern Recognition Letters, 22 (14), December 2001, 1503-1515.
[29] K. Jung, K.I. Kim and A.K. Jain, "Text information extraction in images and video: A survey", Pattern Recognition, vol. 37, no. 5, pp. 977-997, 2004.
[30] Kamat, Varsha and Ganesan, "An Efficient Implementation of the Hough Transform for Detecting Vehicle License Plates Using DSPs", Proceedings of Real-Time Technology and Applications, pp. 58-59,
1995 [31] A Kertesz, V Kertesz and T Muller, “An on-line image processing system for registration number identification”, IEEE International Conference on Neural Networks, vol 6, pp 4145-4148, 1994 60 Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen [32] M Khalil and M Bayoumi, “A Dyadic Wavelet Affine Invariant Function for 2-D Shape Recognition”, IEEE Trans Pattern Analysis and Machine Intelligence, vol 23, no 10, pp 1152-1164, Oct 2001 [33] K I Kim, K Jung, S H Park and H J Kim, “Support Vector Machine-based Text Detection in Digital Video”, Pattern Recognition, 34 (2) (2001) 527-529 [34] O Lezoray, A Elmoataz, H Cardot and M Revenu, “A color morphological segmentation”, CGIP, pp 170-175, oct, 2000 [35] H Li, D Doermann and O Kia, “Automatic text detection and tracking in digital video”, IEEE Transactions on Image Processing (2000) 147–156 [36] H Li, O Kia and D Doermann, “Text Enhancement in Digital Video”, Proc of SPIE, Document Recognition IV, 1999, pp - [37] C.W Liang and P Chen, “DWT-Based Text Localization”, International Journal of Applied Science and Engineering, 2004.2, 1:105-116 [38] R Lienhart, “Automatic text recognition for video indexing,” in Proc ACM Multimedia 96, Boston, MA, Nov 1996, pp 11 - 20 [39] R Lienhart and W Effelsberg, “Automatic text segmentation and text recognition for video indexing”, Multimedia Syst., vol 8, pp 69 - 81, Jan, 2000 [40] R Lienhart and F Stuber, “Automatic Text Recognition In Digital Videos”, Proc of SPIE, 1996, pp 180 - 188 [41] R Lienhart and A Wernicke, “Localizing and segmenting text in images and videos”, IEEE Trans Circuits Syst Video Technol., vol.12, no.4, pp.256–268, 2002 [42] F Lisa, J Carrabina, C Perez-Vicente, N Avellana and E Valderrama, “Two-bit weights are enough to solve vehicle license number recognition Problem”, IEEE International Conference on Neural Networks, vol 3, pp.1242-1246, 1993 [43] W Y Liu and D Dori, “A Proposed Scheme for Performance Evaluation of Graphics/Text 
Separation Algorithm”, Graphics Recognition – Algorithms and Systems, K Tombre and A Chhabra (eds.), Lecture Notes in Computer Science, 1998, Vol 1389, pp 359-371 [44] Y Liu, Y Luo, F Liu and Z Qiu, “A novel approach of segmenting touching and kerned characters”, Proc 8th International Conference on Neural Information Processing, vol.3, pp.1603–1606, Oct 2001 [45] Y Liu, Z You, L Cao and X Jiang, “Vehicle detection with projection histogram and type recognition using hybrid neural networks”, Networking, Sensing and Control, 2004 IEEE International Conference on Volume 1, Issue , 21-23 March 2004 Page(s): 393 – 398 Vol.1 [46] Y Lu, B Haist, L Harmon, J Trenkle and R.Vogt, “An accurate and efficient system for segmenting machine-printed text”, U.S.postal service 5th Advanced Technology Conference, Washington D.C., 1992, November, Vol 3: A-93-A-105 61 Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen [47] S.G Mallat, “A Theory for Multiresolution Signal Decomposition: The Wavelet Representation”, IEEE.Transactions on Pattern Analysis and Machine Intelligence, Vol.11, 1989, 674-693 [48] S.G Mallat, “A Wavelet Tour of Signal Processing”, Academic Press, San Diego 1998 [49] W Mao, F Chung, K Lanm and W Siu, “Hybrid Chinese / English Text Detection in Images and Video Frames”, Proc of International Conference on Pattern Recognition, 2002, Vol 3, pp 1015 - 1018 [50] J Ohya, A Shio and S Akamatsu, “Recognizing Characters in Scene Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16 (2) (1994) 214 - 224 [51] R Parisi, R Di Claudio, E.D Lucarelli and G Orlandi, “Car plate recognition by neural networks and image processing”, Proceedings of the 1998 IEEE International Symposium on Circuits and Systems, vol 3, pp.195-198, 1998 [52] S.H Park, K.I Kim, K Jung and H.J Kim, “Locating Car License Plates using Neural Networks”, IEE Electronics Letters, 35 (17) (1999) 1475 - 1477 [53] T Sato, T Kanade, E.K Hughes and M.A Smith, “Video OCR for 
Digital News Archive”, Proc of IEEE Workshop on Content based Access of Image and Video Databases, 1998, pp 52 - 60 [54] J Shapiro, “Embedded image coding using zerotrees of Wavelet coefficients”, IEEE Transactions on Signal Processing, Vol 41, No 12, pp 3445-3462, Dec 1993 [55] B Sin, S Kim and B Cho, “Locating Characters in Scene Images using Frequency Features”, Proc of International Conference on Pattern Recognition, 2002, Vol 3, pp 489 - 492 [56] M.A Smith and T Kanade, “Video Skimming for Quick Browsing Based on Audio and Image Characterization”, Technical Report CMU-CS-95-186, Carnegie Mellon University, July 1995 [57] C Strouthpoulos, N Papamarkos and A.E Atsalakis, “Text Extraction in Complex Color Document”, Pattern Recognition, 35 (8) (2002) 1743 - 1758 [58] Y Tang, L Yang, J Liu and H Ma, “Wavelet Theory and Its Application to Pattern Recognition”, vol 36 of Machine Perception and Artificial Intelligence, eds World Scientific, 2000 [59] S Tsujimoto and H Asada, “Resolving ambiguity in segmenting touching characters”, in The First International Conference on Document Ananlysis and Recognition, 1991, 701 709 [60] J Villasenor, B Belzer and J Liao, “Wavelet Filter Evaluation for Image Compression”, IEEE Transactions on Image Processing, Vol 2, pp 1053-1060, August 1995 62 Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen [61] J Wang and J Jean, “Segmentation of merged characters by neural networks and shortest path”, Pattern Recognition, 1994, 27(5): 649-658 [62] J.C Wu, J.W Hsieh and Y.S Chen, “Morphology-based text line extraction”, Proc of International Computer Symposium, Taipei, Taiwan ROC, (December 4-6, 2006), Accepted, 2006 [63] V Wu, R Manmatha and E.R Riseman, “Finding Text in Images”, Proc of ACM International Conference on Digital Libraries, 1997, pp - 10 [64] V Wu, R Manmatha and E.M Riseman, “TextFinder: An Automatic System to Detect and Recognize Text in Images”, IEEE Transactions on Pattern Analysis and Machine 
Intelligence, 21 (11) (1999) 1224 - 1229 [65] H C Wu, C S Tsai and C H Lai, “A License Plate Recognition System in EGovernment”, International Journal of Information & Security,Vol 15, No 2, 2004, pp 199-210, 2004, SCI [66] Q Ye, Q Huang, W Gao and D Zhao, “Fast and robust text detection in images and video frames”, Image Vis Comput., vol.23, no.6, pp.565–576, 2005 [67] Y Zhang and C Zhang, “A New Algorithm for Character Segmentation of License Plate” (paper presented at the IEEE Intelligent Vehicles Symposium, Beijing, 9-11 June 2003), 106-109 [68] Y Zhong, H Zhang and A.K Jain, “Automatic Caption Localization in Compressed Video”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, (4) (2000) 385 - 392 [69] Y Zhong, K Karu and A.K Jain, “Locating Text In Complex Color Images”, Pattern Recognition, 28 (10) (1995) 1523 - 1535 [70] http://documents.cfar.umd.edu/LAMP, LAMP database [71] http://documents.cfar.umd.edu/resources/database/UWII.html, UWII database [72] http://www.limsi.fr, Van Lang Civilization website 63 ... tree 17 Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen Further, the DWT algorithm for two-dimensional pictures is presented similarity The DWT is performed firstly for all... missing) for this stage as the conclusion 41 Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen Fig 4.13 The output image for the text line verification stage 4.4 Character Extraction. .. 42 III Characters Extraction for HanNom Stele Images Student: Ho Xuan Nguyen List of Tables Table 4.1 Choosing thresholds for the coarse detection 27 Table 4.2 The thresholds for CCA to
