
Improving learning rule for fuzzy associative memory with combination of content and association



DOCUMENT INFORMATION

Structure

  • Improving learning rule for fuzzy associative memory with combination of content and association

    • Introduction

    • Background and motivation

      • Fuzzy associative memory models

      • Motivation

    • Generalized association-content associative memory for grey scale patterns

    • Properties of association-content associative memory

      • Experiments with number dataset

      • Experiments with Corel dataset

    • Conclusion

    • Acknowledgements

    • References

Content

Neurocomputing 149 (2015) 59–64. Journal homepage: www.elsevier.com/locate/neucom

Improving learning rule for fuzzy associative memory with combination of content and association

The Duy Bui*, Thi Hoa Nong, Trung Kien Dang
Human Machine Interaction Laboratory, University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam

Article history: Received July 2013; received in revised form 14 September 2013; accepted January 2014; available online August 2014.

Keywords: Fuzzy associative memory; Noise tolerance; Pattern associations

Abstract. A fuzzy associative memory (FAM) is an associative memory that uses operators of fuzzy logic and mathematical morphology (MM). FAMs possess important advantages, including noise tolerance, unlimited storage, and one-pass convergence. An important property that decides FAM performance is the ability to capture the content of each pattern and the association between patterns. Existing FAMs capture either the content or the association of patterns well, but not both. They are designed to handle either erosive or dilative noise in distorted inputs, but not both. Therefore, they cannot recall distorted input patterns very well when both erosive and dilative noise are present. In this paper, we propose a new FAM called the association-content associative memory (ACAM), which stores both the content and the association of patterns. The weight matrix is formed from the weighted sum of the output pattern and the difference between the input and output patterns. Our ACAM can handle inputs with both erosive and dilative noise better than existing models.

* Corresponding author. E-mail address: duybt@vnu.edu.vn (T.D. Bui).
http://dx.doi.org/10.1016/j.neucom.2014.01.063
0925-2312/© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Associative memories (AMs) store pattern associations and can retrieve the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. They are categorized as auto-associative memories and hetero-associative memories. A memory is said to be auto-associative if the output is the same as the input; it is hetero-associative if the output is different from the input. The Hopfield network [1] is probably the most widely known auto-associative memory at present, with many variations and generalizations.

Among the different kinds of associative memories, fuzzy associative memories (FAMs) belong to the class of fuzzy neural networks, which combine fuzzy concepts and fuzzy inference rules with the architecture and learning of neural networks. Input patterns, output patterns, and/or connection weights of FAMs are fuzzy-valued. The ability to work with uncertain data is the reason why FAMs have been used in many fields such as pattern recognition, control, estimation, inference, and prediction. For example, Sussner and Valle used implicative FAMs for face recognition [2]; Kim et al. predicted the Korea stock price index [3]; Shahir and Chen inspected the quality of soaps on-line [4]; Wang and Zhang detected pedestrian abnormal behaviour [5]; and Sussner and Valle made predictions for the Furnas reservoir from 1991 to 1998 [2].

Kosko's FAM [6] in the early 1990s initiated research on FAMs. For each pair of input X and output Y, Kosko's FAM stores their association as the fuzzy rule "If x is X then y is Y" in a separate weight matrix called a FAM matrix. Thus, Kosko's overall fuzzy system comprises several FAM matrices. Therefore, the disadvantage of Kosko's FAM is very low storage capacity.
In order to overcome this limitation, different improved FAM versions have been developed that store multiple pattern associations in a single FAM matrix [7–10]. In Chung and Lee's model [7], which generalizes Kosko's, FAM matrices are combined with a max-t composition into a single matrix. It is shown that all outputs can be recalled perfectly with the single combined matrix if the input patterns satisfy certain orthogonality conditions. The fuzzy implication operator is used to represent associations by Junbo et al. [9], which improves the learning algorithm of Kosko's max–min FAM model. By adding a threshold at the recall phase, Liu modified Junbo's FAM in order to improve the storage capacity [10]. Recently, Sussner and Valle established implicative fuzzy associative memories (IFAMs) [2] with implicative fuzzy learning. These can be considered a class of associative memories that grew out of morphological associative memories [11], because each node performs a morphological operation. Sussner and Valle's models work quite well in auto-associative mode with perfect input patterns, similar to other improvements of Kosko's model. However, these models suffer greatly in the presence of both erosive and dilative noise.

In binary mode, many associative memory models show their noise tolerance capability on distorted inputs based on their own mathematical characteristics [10,2]. For example, models using the maximum operation when forming the weight matrix are excellent in the presence of erosive noise (1 to 0), while models using the minimum operation are ideal for dilative noise (0 to 1) [11]. On the other hand, models with the maximum operation cannot recover patterns with dilative noise well, and models with the minimum operation cannot recover patterns with erosive noise well. In grey-scale or fuzzy-valued mode, even though existing models can recover the main parts of the output pattern, noisy parts of the input pattern seriously affect the recalled output pattern. Thresholding is probably the most effective mechanism so far to deal with this problem. However, the incorrectly recalled parts in the output are normally fixed with some pre-calculated value based on the training input and output pairs.

Clearly, there are two main ways to increase the noise tolerance capability of associative memory models: recovering from the noise and reducing the effect of the noise. Existing models concentrate on the first way. The work in this paper is motivated by the second way: how to reduce the effect of noisy input patterns on the recalled output patterns. We base our work on the implicative fuzzy associative memories [2], which also belong to the class of morphological associative memories [11]. Instead of using only rules to store the associations of the input and output patterns, we also add a certain part of the output patterns themselves to the weight matrix. Depending on the ratio of association and content of output patterns in the weight matrix, the effect of noise in distorted input patterns on the recalled output patterns can be reduced. Obviously, incorporating the content of the output patterns influences the output selection in the recall phase. However, the advantages from this tradeoff are worth considering. We have conducted experiments in recalling images from the number dataset and the Corel dataset with both erosive and dilative noise to confirm the effectiveness of our model when dealing with noise.
The rest of the paper is organized as follows. Section 2 presents background on fuzzy associative memory models; we also present in this section the motivational analysis for our work. In Section 3, we describe our model in detail. Section 4 presents an analysis of the properties of the proposed model and experiments that illustrate these properties.

2. Background and motivation

2.1. Fuzzy associative memory models

The objective of associative memories is to recall a predefined output pattern given the presentation of a predefined input pattern. Mathematically, an associative memory can be defined as a mapping $G$ such that for a finite number of pairs $\{(A^\xi, B^\xi);\ \xi = 1, \ldots, k\}$:

$G(A^\xi) = B^\xi. \qquad (1)$

The mapping $G$ is considered to have the ability of noise tolerance if $G(A'^\xi)$ is equal to $B^\xi$ for a noisy or incomplete version $A'^\xi$ of $A^\xi$. The memory is called an auto-associative memory if the pattern pairs are of the form $\{(A^\xi, A^\xi);\ \xi = 1, \ldots, k\}$. The memory is hetero-associative if the output $B^\xi$ is different from the input $A^\xi$. The process of determining $G$ is called the learning phase, and the process of recalling $B^\xi$ using $G$ upon presentation of $A^\xi$ is called the recall phase. When $G$ is described by a fuzzy neural network, and the patterns $A^\xi$ and $B^\xi$ are fuzzy sets for every $\xi = 1, \ldots, k$, the memory is called a fuzzy associative memory (FAM).

The earliest FAM models were developed by Kosko in the early 1990s [6]; they are usually referred to as the max–min FAM and the max-product FAM. Both are single-layer feed-forward artificial neural networks. If $W \in [0,1]^{m \times n}$ is the synaptic weight matrix of a max–min FAM and $A \in [0,1]^n$ is the input pattern, then the output pattern $B \in [0,1]^m$ is computed as follows:

$B = W \circ_M A, \qquad (2)$

or

$B_j = \bigvee_{i=1}^{n} (W_{ij} \wedge A_i) \quad (j = 1, \ldots, m). \qquad (3)$

Similarly, the max-product FAM produces the output

$B = W \circ_P A, \qquad (4)$

or

$B_j = \bigvee_{i=1}^{n} (W_{ij} \cdot A_i) \quad (j = 1, \ldots, m). \qquad (5)$

For a set of pattern pairs $\{(A^\xi, B^\xi) : \xi = 1, \ldots, k\}$, the learning rule used to store the pairs in a max–min FAM, which is called correlation-minimum encoding, is given by the following equation:

$W = B \circ_M A^T, \qquad (6)$

or

$W_{ij} = \bigvee_{\xi=1}^{k} (B_j^\xi \wedge A_i^\xi). \qquad (7)$

Similarly, the learning rule for the max-product FAM, called correlation-product encoding, is given by $W = B \circ_P A^T$.

Chung and Lee generalized Kosko's model by substituting the max–min or max-product composition with a more general max-t composition [7]. The resulting model, called the generalized FAM (GFAM), can be described in terms of the following relationship between an input pattern $A$ and the corresponding output pattern $B$:

$B = W \circ_T A \quad \text{where} \quad W = B \circ_T A^T, \qquad (8)$

where the symbol $\circ_T$ denotes the max-T product and $T$ is a t-norm. This learning rule is referred to as correlation-t encoding. For these learning rules to guarantee the perfect recall of all stored patterns, the patterns $A^1, \ldots, A^k$ must constitute a max-t orthonormal set. Fuzzy patterns $A, B \in [0,1]^n$ are said to be max-t orthogonal if and only if $A^T \circ_T B = 0$, i.e., $T(A_j, B_j) = 0$ for all $j = 1, \ldots, n$. Consequently, $A^1, \ldots, A^k$ is a max-t orthonormal set if and only if the patterns $A^\xi$ and $A^\eta$ are max-t orthogonal for every $\xi \neq \eta$ and $A^\xi$ is a normal fuzzy set for every $\xi = 1, \ldots, k$. Some research has focused on the stability of FAMs and on the conditions for perfectly recalling stored patterns [11–14].
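To make the encoding concrete, the following is a minimal NumPy sketch of correlation-minimum encoding (Eq. (7)) and max–min recall (Eq. (3)); the function names, array shapes, and toy patterns are ours, not from the paper, and the weight matrix is stored with the input index first, matching the subscripts in Eqs. (3) and (7).

```python
import numpy as np

def correlation_minimum_encode(A, B):
    """Eq. (7): W_ij = max over patterns xi of min(B_j, A_i).
    A: k x n input patterns, B: k x m output patterns, entries in [0, 1]."""
    # pairwise minimum for every (pattern, i, j), then maximum over patterns
    return np.minimum(A[:, :, None], B[:, None, :]).max(axis=0)  # n x m

def max_min_recall(W, a):
    """Eq. (3): B_j = max_i min(W_ij, a_i) for a single input pattern a."""
    return np.minimum(W, a[:, None]).max(axis=0)

# Usage: a single stored pair is recalled exactly from its own input here.
A = np.array([[0.1, 0.9, 0.4]])
B = np.array([[0.8, 0.3]])
W = correlation_minimum_encode(A, B)
print(max_min_recall(W, A[0]))  # [0.8 0.3]
```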
Based on Kosko's max–min FAM, Junbo et al. [9] introduced a new learning rule for FAM which allows the storage of multiple input pattern pairs. The synaptic weight matrix is computed as follows:

$W = B \circledast_M A^T, \qquad (9)$

where the symbol $\circledast_M$ denotes the min-$I_M$ product.

Liu proposed a model which is known as the max–min FAM with threshold [10]. The recall phase is described by the following equation:

$B = (W \circ_M (A \vee c)) \vee \theta. \qquad (10)$

The weight matrix $W \in [0,1]^{m \times n}$ is given in terms of implicative learning, and the thresholds $\theta \in [0,1]^m$ and $c = [c_1, \ldots, c_n]^T \in [0,1]^n$ are of the following form:

$\theta = \bigwedge_{\xi=1}^{k} B^\xi \qquad (11)$

and

$c_j = \begin{cases} \bigwedge_{i \in D_j} \bigwedge_{\xi \in LE_{ij}} B_i^\xi & \text{if } D_j \neq \emptyset, \\ 1 & \text{if } D_j = \emptyset, \end{cases} \qquad (12)$

where $LE_{ij} = \{\xi : A_j^\xi \le B_i^\xi\}$ and $D_j = \{i : LE_{ij} \neq \emptyset\}$.

With implicative fuzzy learning, Sussner and Valle established implicative fuzzy associative memories (IFAMs) [2]. IFAMs are quite similar to the GFAM model of Chung and Lee in that the model is given by a single-layer feed-forward artificial neural network with max-T neurons, where $T$ is a continuous t-norm. Different from GFAM, the IFAM model includes a bias term $\theta \in [0,1]^m$ and uses the R-implicative fuzzy learning rule. Given an input pattern $A \in [0,1]^n$, the IFAM model produces the following output pattern $B \in [0,1]^m$:

$B = (W \circ_T A) \vee \theta, \qquad (13)$

where

$W = B \circledast_T A^T \qquad (14)$

and

$\theta = \bigwedge_{\xi=1}^{k} B^\xi. \qquad (15)$

The fuzzy implication $I_T$ is determined by the following equation:

$I_T(x, y) = \bigvee \{z \in [0,1] : T(x, z) \le y\}, \quad x, y \in [0,1]. \qquad (16)$

Xiao et al. [15] designed a model that applies the ratio of input to output patterns for the associations:

$W_{ij} = \bigwedge_{\xi=1}^{k} \frac{\min(A_i^\xi, B_j^\xi)}{\max(A_i^\xi, B_j^\xi)}. \qquad (17)$

Wang and Lu [16] proposed a set of FAMs that use a division operator to describe the associations and erosions/dilations to generalize the associations.

2.2. Motivation

Our study is motivated by the question of how to reduce the effect of noise in distorted patterns on the recalled output patterns. We base our work on IFAMs [2], particularly with the Lukasiewicz fuzzy conjunction, disjunction and implication [17]. We now present how to incorporate both association and output content into the weight matrix based on IFAMs. Recall that the Lukasiewicz conjunction is defined as

$C_L(x, y) = 0 \vee (x + y - 1), \qquad (18)$

the Lukasiewicz disjunction is defined as

$D_L(x, y) = 1 \wedge (x + y), \qquad (19)$

and the Lukasiewicz implication is defined as

$I_L(x, y) = 1 \wedge (y - x + 1). \qquad (20)$

If $A \in [0,1]^n$ is the input pattern and $B \in [0,1]^m$ is the output pattern, the learning rule to store the pair in a Lukasiewicz IFAM using the implication is given by the following equation:

$W_{ij} = I_L(A_i, B_j) = 1 \wedge (B_j - A_i + 1) \quad (i = 1, \ldots, n;\ j = 1, \ldots, m). \qquad (21)$

The recall phase using the conjunction is described by the following equation:

$Y_j = \bigvee_{i=1}^{n} C_L(W_{ij}, A_i) = \bigvee_{i=1}^{n} 0 \vee (W_{ij} + A_i - 1) \quad (j = 1, \ldots, m). \qquad (22)$
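The Lukasiewicz IFAM of Eqs. (13)–(15), with the operators just defined, can be sketched in the same style; again the names are ours and this is an illustration under our reading of the equations, not the authors' code.

```python
import numpy as np

def ifam_learn(A, B):
    """Eqs. (14)-(15) with the Lukasiewicz implication (Eq. (21)):
    W_ij = min over patterns of min(1, B_j - A_i + 1); theta_j = min over patterns of B_j."""
    W = np.minimum(1.0, B[:, None, :] - A[:, :, None] + 1.0).min(axis=0)
    theta = B.min(axis=0)  # bias term of Eq. (15)
    return W, theta

def ifam_recall(W, theta, a):
    """Eq. (13) with the Lukasiewicz conjunction (Eq. (22)):
    Y_j = max(theta_j, max_i max(0, W_ij + a_i - 1))."""
    return np.maximum(np.maximum(0.0, W + a[:, None] - 1.0).max(axis=0), theta)
```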
In order to store both association and output content in the weight matrix, we modify the learning rule of the Lukasiewicz IFAM using both the disjunction and the implication as follows:

$W_{ij} = D_L(B_j, I_L(A_i, B_j)) = 1 \wedge \big(B_j + (1 \wedge (B_j - A_i + 1))\big) = 1 \wedge (B_j + 1) \wedge (2B_j - A_i + 1) = 1 \wedge (2B_j - A_i + 1) \quad (i = 1, \ldots, n;\ j = 1, \ldots, m). \qquad (23)$

In the recall phase, the conjunction is used with a multiplication factor of $\frac{1}{2}$:

$Y_j = \frac{1}{2}\bigvee_{i=1}^{n} C_L(W_{ij}, A_i) = \frac{1}{2}\bigvee_{i=1}^{n} 0 \vee (W_{ij} + A_i - 1) = \frac{1}{2}\bigvee_{i=1}^{n} 0 \vee \big((1 \wedge (2B_j - A_i + 1)) + A_i - 1\big) = \frac{1}{2}\bigvee_{i=1}^{n} (A_i \wedge 2B_j) = \frac{1}{2}\Big(\Big(\bigvee_{i=1}^{n} A_i\Big) \wedge 2B_j\Big) \quad (j = 1, \ldots, m). \qquad (24)$

We name our associative memory the association-content associative memory (ACAM). It is easy to see that the condition for $W$ to satisfy the equation $WA = B$ is that

$B_j \le \frac{1}{2}\bigvee_{i=1}^{n} A_i. \qquad (25)$

If we relax the condition $W_{ij} \in [0,1]$, then

$W_{ij} = 2B_j - A_i + 1, \qquad (26)$

and as a result

$Y_j = \frac{1}{2}\bigvee_{i=1}^{n} (W_{ij} + A_i - 1) = \frac{1}{2}\bigvee_{i=1}^{n} (2B_j - A_i + 1 + A_i - 1) = \frac{1}{2}\bigvee_{i=1}^{n} 2B_j = B_j, \qquad (27)$

so the equation $WA = B$ is satisfied naturally. For a set of pattern pairs $\{(A^\xi, B^\xi) : \xi = 1, \ldots, k\}$, the weight matrix is constructed with an erosion operator as follows:

$W_{ij} = \bigwedge_{\xi=1}^{k} (2B_j^\xi - A_i^\xi + 1). \qquad (28)$

3. Generalized association-content associative memory for grey scale patterns

It is clear that we can remove the $+1$ in the formula of $W$ and the $-1$ in the formula of $Y$ without any effect on our model. It can also be seen that our association-content learning rule is similar to that of morphological associative memories, except that there is a multiplication factor of 2 applied to $B_j$. This factor actually represents the portion of output content to be added to the weight matrix besides the association represented by $B_j - A_i$. More generally, the weight matrix can be constructed as follows:

$W_{ij} = \bigwedge_{\xi=1}^{k} \big((1-\eta)B_j^\xi + \eta(B_j^\xi - A_i^\xi)\big) = \bigwedge_{\xi=1}^{k} (B_j^\xi - \eta A_i^\xi), \qquad (29)$

where $\eta$ is a factor that controls the ratio between the content and the association to be stored. With the $\eta$ factor, when the input is noisy, the noise has less effect on the recalled patterns. For an input $X$, the output $Y$ is recalled from $W$ with the equation

$Y_j = \bigvee_{i=1}^{n} (\eta X_i + W_{ij}). \qquad (30)$

In order to maintain the ability to store an unlimited number of patterns in the auto-associative case, we keep $W_{ii}$ the same as in MAMs [11]:

$W_{ij} = \begin{cases} \bigwedge_{\xi=1}^{k} (B_j^\xi - A_i^\xi) & \text{if } i = j, \\ \bigwedge_{\xi=1}^{k} (B_j^\xi - \eta A_i^\xi) & \text{if } i \neq j. \end{cases} \qquad (31)$

The equation for recalling is then modified as follows:

$Y_j = \Big(\bigvee_{i \neq j} (\eta X_i + W_{ij})\Big) \vee (X_j + W_{jj}). \qquad (32)$

Theorem 1. $W$ in Eq. (31) recalls perfectly for all pairs $(A^\xi, B^\xi)$ if and only if, for each $\xi = 1, \ldots, k$, each column of the matrix $W^\xi - W$ contains a zero entry, where $W^\xi$ denotes the weight matrix of Eq. (31) constructed from the single pair $(A^\xi, B^\xi)$.

Proof. $W$ recalls perfectly for all pairs $(A^\xi, B^\xi)$

$\iff \Big(\bigvee_{i \neq j} (\eta A_i^\xi + W_{ij})\Big) \vee (A_j^\xi + W_{jj}) = B_j^\xi, \quad \xi = 1, \ldots, k \text{ and } j = 1, \ldots, m$

$\iff B_j^\xi - \Big(\Big(\bigvee_{i \neq j} (\eta A_i^\xi + W_{ij})\Big) \vee (A_j^\xi + W_{jj})\Big) = 0, \quad \xi = 1, \ldots, k \text{ and } j = 1, \ldots, m$

$\iff \Big(\bigwedge_{i \neq j} (B_j^\xi - \eta A_i^\xi - W_{ij})\Big) \wedge (B_j^\xi - A_j^\xi - W_{jj}) = 0, \quad \xi = 1, \ldots, k \text{ and } j = 1, \ldots, m$

$\iff \bigwedge_{i=1}^{n} (W_{ij}^\xi - W_{ij}) = 0, \quad \xi = 1, \ldots, k \text{ and } j = 1, \ldots, m. \qquad (33)$

This last set of equations is true if and only if, for each $\xi = 1, \ldots, k$ and each integer $j = 1, \ldots, m$, the $j$-th column of $[W^\xi - W]$ contains at least one zero entry. □
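A hedged sketch of the ACAM rules just derived, Eq. (31) for learning and Eq. (32) for recall, written for the auto-associative case where $W$ is square ($n = m$); variable names and the helper structure are ours.

```python
import numpy as np

def acam_learn(A, B, eta):
    """Eq. (31): W_ij = min_xi (B_j - eta * A_i) off the diagonal,
    W_jj = min_xi (B_j - A_j) on the diagonal (kept as in MAMs)."""
    W = (B[:, None, :] - eta * A[:, :, None]).min(axis=0)
    np.fill_diagonal(W, (B - A).min(axis=0))
    return W

def acam_recall(W, x, eta):
    """Eq. (32): Y_j = max( max_{i != j}(eta*x_i + W_ij), x_j + W_jj )."""
    S = eta * x[:, None] + W
    idx = np.arange(W.shape[0])
    S[idx, idx] = x + W[idx, idx]  # the diagonal term enters without eta
    return S.max(axis=0)
```

Setting eta = 1 recovers the pure-association (MAM-like) rule, while a smaller eta stores more of the output content and damps the influence of a noisy input on the recalled pattern.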
4. Properties of association-content associative memory

Similar to MAMs [11] and IFAMs [2], our ACAM converges in one step. Moreover, ACAM has unlimited storage capacity, which is given by the next theorem.

Theorem 2. $W$ in Eq. (31) recalls perfectly for all pairs $(A^\xi, A^\xi)$ ($\xi = 1, \ldots, k$).

Proof. We have $W_{jj}^\xi = A_j^\xi - A_j^\xi = 0$ for each $j = 1, \ldots, m$ and all $\xi = 1, \ldots, k$, and likewise $W_{jj} = 0$. Hence, for each $\xi = 1, \ldots, k$, each column of $[W^\xi - W]$ contains a zero entry. According to Theorem 1, $W$ recalls perfectly for all pairs $(A^\xi, A^\xi)$. □

Similar to MAMs and IFAMs, our ACAM can handle erosive noise effectively thanks to the dilation operation in the recall equation. However, for MAMs and IFAMs, the noise tolerance capability is good only when the number of stored patterns is much smaller than the length of the input vector, which determines the size of the weight matrix $W$. This means that MAMs and IFAMs can correct errors only when a large part of the storage space is wasted, which reduces their practical usability. Our ACAM can compensate for the errors caused by distorted inputs better than MAMs and IFAMs.

To compare the effectiveness of our ACAM in handling noise with other well-known associative memories, we have conducted several experiments. We compare ACAM with the models proposed by Junbo et al. [18], Kosko [6], Xiao et al. [15], Sussner and Valle (IFAMs) [2], and Ritter et al. (MAMs) [11].

4.1. Experiments with number dataset

This dataset consists of five 5 × 5 images of the numbers 0 to 4. Using the standard row-scan method, each pattern image is converted into a vector of size 25. With this dataset, the size of the weight matrix $W$ is 25 × 25, which is used to store patterns of size 25. We perform experiments on distorted input images in both auto-associative and hetero-associative modes. The distorted images contain both erosive and dilative noise (salt and pepper noise). All models are implemented with the dilation operator in the recall function where applicable. The distorted images can be seen in Fig. 1. The criterion used to evaluate the results is the normalized error, which is calculated as follows:

$E(\tilde{B}, B) = \frac{\|\tilde{B} - B\|}{\|B\|}, \qquad (34)$

where $B$ is the expected output pattern, $\tilde{B}$ is the recovered pattern, and $\|\cdot\|$ is the L2 norm of a vector.
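This criterion is a one-line computation; the sketch below is our transcription of Eq. (34).

```python
import numpy as np

def normalized_error(B_recovered, B_expected):
    """Eq. (34): E = ||B~ - B||_2 / ||B||_2."""
    return np.linalg.norm(B_recovered - B_expected) / np.linalg.norm(B_expected)
```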
Table 1 shows the total error of the different models when recalling distorted input images in auto-associative mode.

Table 1
Auto-associative memory experiment results on the number dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

        Junbo   Kosko   Xiao et al.   IFAM    MAM     ACAM
Error   0.434   0.914   0.418         0.434   0.434   0.346

Fig. 1. Auto-associative memory experiments with the number dataset: the first row contains original training images; the second row contains distorted input images; the third, fourth, fifth and sixth rows contain output images from Junbo et al.'s model, Xiao et al.'s model, Sussner and Valle's IFAM, and our ACAM, respectively.

As can be seen from the table, our ACAM produces the least total error, while Junbo et al.'s model, Xiao et al.'s model, Sussner and Valle's IFAM and Ritter et al.'s MAM produce a similar amount of total error. Kosko's model produces the largest total error. This agrees with what we mentioned before: Kosko's model cannot even produce a perfect result for a perfect input in many cases. The reason the other models produce a larger total error than our ACAM is that they cannot work well with both erosive and dilative noise, while our ACAM has a mechanism to reduce the effect of the noise. This can be seen more clearly in Fig. 1.

In hetero-associative mode, the pairs of images to remember are the images of 0 and 1, 1 and 2, etc. Table 2 shows the total error of the different models in this case. From the table we can see that our ACAM also produces the least total error.

Table 2
Hetero-associative memory experiment results on the number dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

        Junbo   Kosko   Xiao et al.   IFAM    MAM     ACAM
Error   0.675   0.893   0.793         0.675   0.675   0.652

It should be noted that when there is no noise or only erosive noise, our model performs slightly worse than IFAMs and MAMs because of the mechanism to reduce the effect of noise. In the presence of only dilative noise, Xiao et al.'s model also performs better than our ACAM. However, this trade-off is worth considering because, in practice, perfect inputs or inputs distorted by erosive noise only are not common.

4.2. Experiments with Corel dataset

This dataset includes images selected from the Corel database (Fig. 2). The test patterns are generated from the input patterns by degrading them with salt and pepper noise, both at 25 percent of the number of pixels. Fig. 3 shows some generated test patterns.

Fig. 2. Some images from the dataset used for the experiments.

Fig. 3. Test patterns generated from input patterns with salt and pepper noise.

In auto-associative mode, 10 images are used. The results in auto-associative mode, presented in Table 3, show our ACAM's effectiveness in handling salt and pepper noise. Fig. 4 shows samples in which our method visually recovers the pattern better than the others.

Table 3
Auto-associative memory experiment results on the Corel dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

        Junbo   Kosko   Xiao et al.   IFAM    MAM     ACAM
Error   0.742   0.867   0.694         0.664   0.664   0.531

Fig. 4. Samples from the Corel dataset for which the proposed model visually recovers the pattern better than the other methods in auto-associative mode. From left to right are patterns with salt and pepper noise recovered by Junbo et al.'s model [18], Kosko's model [6], Xiao et al.'s model [15], Sussner and Valle's IFAM [2], our ACAM model, and the expected result.

Hetero-associative mode is tested with 10 pairs of images, in which the input image pattern is different from the output image pattern. As in the previous test, input patterns are degraded with salt and pepper noise. Table 4 shows how well our ACAM performs compared to the other models in the presence of both erosive and dilative noise. Fig. 5 visually compares the results of our model to the others.

Table 4
Hetero-associative memory experiment results on the Corel dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

        Junbo   Kosko   Xiao et al.   IFAM    MAM     ACAM
Error   0.795   1.018   0.702         0.624   0.624   0.548

Fig. 5. From left to right are patterns from the Corel dataset recovered from salt and pepper noise in hetero-associative mode by Junbo et al.'s model [18], Kosko's model [6], Xiao et al.'s model [15], Sussner and Valle's IFAM [2], our ACAM model, and the expected result.
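For reference, the salt and pepper degradation used in these experiments can be sketched as follows; treating the salt and pepper fractions as independent, non-overlapping sets of pixels is our assumption about the setup.

```python
import numpy as np

def salt_and_pepper(pattern, frac_salt=0.25, frac_pepper=0.25, seed=0):
    """Corrupt a [0, 1]-valued pattern: set frac_salt of the pixels to 1
    (dilative noise) and frac_pepper of the pixels to 0 (erosive noise)."""
    rng = np.random.default_rng(seed)
    noisy = pattern.ravel().copy()
    n_salt = int(frac_salt * noisy.size)
    n_pepper = int(frac_pepper * noisy.size)
    idx = rng.choice(noisy.size, n_salt + n_pepper, replace=False)
    noisy[idx[:n_salt]] = 1.0
    noisy[idx[n_salt:]] = 0.0
    return noisy.reshape(pattern.shape)
```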
5. Conclusion

In this paper, we proposed a new FAM that captures both the content and the association of patterns. While still possessing the vital advantages of existing FAMs, our model has better noise tolerance when both erosive and dilative noise are present. This is achieved at the cost of a small reduction in performance in special cases (no noise, or only erosive or only dilative noise). We have conducted experiments on different datasets to demonstrate the efficiency of the proposed FAM. The obtained results hint that the improvement from capturing both pattern content and associations can be effective.

It is noted that this paper is only a first step toward reducing the effect of noise in FAMs. There are many ways in which the work can be extended. First of all, a mathematical analysis of how the effect of noise is reduced is an interesting problem to solve. Secondly, besides combining the content of the output with the association based on IFAMs and MAMs, applying this approach to other existing FAMs would be worth trying. Finally, it is worth comparing with, and integrating into, other associative memories besides FAMs, such as associative memories based on discrete-time recurrent neural networks [19–21].

Acknowledgements

This work is supported by Nafosted Research Project no. 102.02-2011.13.

References

[1] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79 (1982) 2554–2558.
[2] P. Sussner, M.E. Valle, Implicative fuzzy associative memories, IEEE Trans. Fuzzy Syst. 14 (6) (2006) 793–807.
[3] M.-J. Kim, I. Han, K.C. Lee, Fuzzy associative memory-driven approach to knowledge integration, in: 1999 IEEE International Fuzzy Systems Conference Proceedings, 1999, pp. 298–303.
[4] S. Shahir, X. Chen, Adaptive fuzzy associative memory for on-line quality control, in: Proceedings of the 35th South-Eastern Symposium on System Theory, 2003, pp. 357–361.
[5] Z. Wang, J. Zhang, Detecting pedestrian abnormal behavior based on fuzzy associative memory, in: Fourth International Conference on Natural Computation, 2008, pp. 143–147. http://dx.doi.org/10.1109/ICNC.2008.396
[6] B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice Hall, Englewood Cliffs, NJ, 1992.
[7] F. Chung, T. Lee, On fuzzy associative memories with multiple-rule storage capacity, IEEE Trans. Fuzzy Syst. 4 (3) (1996) 375–384.
[8] A. Blanco, M. Delgado, I. Requena, Identification of fuzzy relational equations by fuzzy neural networks, Fuzzy Sets Syst. 71 (1995) 215–226.
[9] F. Junbo, J. Fan, S. Yan, A learning rule for FAM, in: 1994 IEEE International Conference on Neural Networks, 1994, pp. 4273–4277.
[10] P. Liu, The fuzzy associative memory of max–min fuzzy neural networks with threshold, Fuzzy Sets Syst. 107 (1999) 147–157.
[11] G. Ritter, P. Sussner, J.D. de Leon, Morphological associative memories, IEEE Trans. Neural Netw. 9 (1998) 281–293.
[12] Z. Zhang, W. Zhou, D. Yang, Global exponential stability of fuzzy logical BAM neural networks with Markovian jumping parameters, in: 2011 Seventh International Conference on Natural Computation, 2011, pp. 411–415.
[13] S. Zeng, W. Xu, J. Yang, Research on properties of max-product fuzzy associative memory networks, in: Eighth International Conference on Intelligent Systems Design and Applications, 2008, pp. 438–443.
[14] Q. Cheng, Z.-T. Fan, The stability problem for fuzzy bidirectional associative memories, Fuzzy Sets Syst. 132 (1) (2002) 83–90.
[15] P. Xiao, F. Yang, Y. Yu, Max–min encoding learning algorithm for fuzzy max-multiplication associative memory networks, in: 1997 IEEE International Conference on Systems, Man, and Cybernetics, 1997, pp. 3674–3679.
[16] S.T. Wang, H.J. Lu, On new fuzzy morphological associative memories, IEEE Trans. Fuzzy Syst. 12 (3) (2004) 316–323.
[17] W. Pedrycz, F. Gomide, An Introduction to Fuzzy Sets: Analysis and Design, Complex Adaptive Systems, MIT Press, 1998.
[18] F. Junbo, J. Fan, S. Yan, An encoding rule of FAM, in: Singapore ICCS/ISITA '92, 1992, pp. 1415–1418.
[19] Z. Zeng, J. Wang, Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays, Neural Comput. 19 (8) (2007) 2149–2182.
[20] Z. Zeng, J. Wang, Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 38 (6) (2008) 1525–1536.
[21] Z. Zeng, J. Wang, Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates, Neural Netw. 22 (2009) 651–657.

The Duy Bui received his Bachelor degree in Computer Science from the University of Wollongong, Australia, in 2000 and his Ph.D. in Computer Science from the University of Twente, the Netherlands, in 2004. He is now working at the Human–Machine Interaction Laboratory, University of Engineering and Technology, Vietnam National University, Hanoi. His research includes machine learning, human–computer interaction, computer graphics and image processing.

Thi Hoa Nong received her Master of Science in Information Technology from Thainguyen University in 2006. She is pursuing a Ph.D. degree in Computer Science at the University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam. She is now a lecturer at Thainguyen University, Vietnam. Her current research interests include artificial intelligence and machine learning.

Trung Kien Dang received his M.Sc. in Telematics from the University of Twente, the Netherlands, in 2003 and his Ph.D. in Computer Science from the University of Amsterdam, the Netherlands, in 2013. His research includes machine learning, 3D model reconstruction and video log analysis.
