
Apply incremental learning for daily activity recognition using low level sensor data


VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF ENGINEERING AND TECHNOLOGY

TA VIET CUONG

APPLY INCREMENTAL LEARNING FOR DAILY ACTIVITY RECOGNITION USING LOW LEVEL SENSOR DATA

Major: Computer Science
Code: 60.48.01

MASTER THESIS OF INFORMATION TECHNOLOGY

Supervised by Assoc. Prof. Bui The Duy

Ha Noi, 2012

A thesis submitted in fulfillment of the requirements for the degree of Master of Computer Science.

Originality Statement

"I hereby declare that this submission is my own work and, to the best of my knowledge, it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at the University of Engineering and Technology or any other educational institution, except where due acknowledgement is made in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged."

Signed

Abstract

Daily activity recognition is an important task in many applications, especially in environments such as smart homes. The system needs to be equipped with recognition ability so that it can automatically adapt to the resident's preferences. So far, a great number of learning models have been proposed to classify activities. However, most of them only work well in an off-line manner, when the training data is known in advance. It is known that people's living habits change over time; therefore, a learning technique that can acquire new knowledge as new data arrives is in great demand. In this thesis, we improve an existing incremental learning model to solve this problem. The model, which is traditionally used in unsupervised learning, is extended to classification problems. An incremental learning strategy that evaluates the classification error is applied when deciding whether a new node should be generated. A separated weighted Euclidean distance is used to measure distances, because of the large variance of the information contained in the feature vectors. To avoid constant allocation of new nodes in the overlapping regions, an adaptive insertion constraint is added. Finally, an experiment is carried out to assess the model's performance. The results show that the proposed method is better than the previous one. The proposed method can be integrated into a smart system, which can then pro-actively adapt itself to changes in human daily activity patterns.

Acknowledgements

I would like to express my respect and appreciation to my supervisor, Associate Professor Bui The Duy. He guided me in the approach to the thesis's problem and gave me a lot of support during the progress of my thesis, including how to conduct better experiments and how to write a good thesis. Next, I would like to thank Dr. Vu Thi Hong Nhan for her valuable discussions about the detailed aspects of my thesis, and also for her recommendations on the background of my research. I also want to thank my colleagues and friends in the Human Machine Laboratory for their friendliness and willingness to help me during the time I studied.
Publications

1. Multi-Agent Architecture For Smart Home. Viet Cuong Ta, Thi Hong Nhan Vu, The Duy Bui. The 2012 International Conference on Convergence Technology, January 26-28, Ho Chi Minh, Vietnam.
2. A Breadth-First Search Based Algorithm for Mining Frequent Movements From Spatiotemporal Databases. Thi Hong Nhan Vu, The Duy Bui, Quang Hiep Vu, Viet Cuong Ta. The 2012 International Conference on Convergence Technology, January 26-28, Ho Chi Minh, Vietnam.
3. Online Learning Model for Daily Activity Recognition. Viet Cuong Ta, The Duy Bui, Thi Hong Nhan Vu, Thi Nhat Thanh Nguyen. Proceedings of The Third International Workshop on Empathic Computing (IWEC 2012).

Table of Contents

1 Introduction
  1.1 Overview
  1.2 Our Works
  1.3 Thesis Outline
2 Related Work
  2.1 Daily Activity Recognition in Smart Home Systems
  2.2 Daily Activity Recognition Approaches
  2.3 Incremental Learning Models
3 Framework for Activity Recognition in the Home Environment
  3.1 Data Acquisition
  3.2 Data Annotation
  3.3 Feature Extraction
  3.4 Segmentation
4 Growing Neural Gas Model
  4.1 GNG Structure
    4.1.1 Competitive Hebbian Learning
    4.1.2 Weights Adapting
    4.1.3 New Node Insertion
  4.2 Using the GNG Network for Supervised Learning
    4.2.1 Separated Weighted Euclidean
    4.2.2 Reducing Overlapping Regions by Using a Local Error Threshold
5 Radial Basis Function Network
  5.1 Standard Radial Basis Function
  5.2 Incremental Radial Basis Function
6 Experiment and Results
7 Conclusion
  7.1 Summary
  7.2 Future Works

List of Figures

3.1 Framework for activity recognition
3.2 An example of a Markov model for observing an activity. Each state has a normal distribution over time.
4.1 An example of a Voronoi tessellation of the space. The lines represent the boundaries of the subspaces and the nodes denote the centres of the subspaces. The dotted lines represent the edges.
4.2 An example of updating weights in GNG. The bold lines represent the boundaries of the subspaces, the thin lines denote the connections between the nodes, and the dotted lines represent the directions in which the nodes are moved.
5.1 The structure of a Radial Basis Function network

4.2 Using the GNG Network for Supervised Learning

There are several ways to reduce the error in the overlapping regions. We can restrict the size of the network. However, this approach faces the same difficulty as the SOM model (Kohonen, 1989), because it is hard to determine how many nodes are suitable for a class. Moreover, in incremental learning, we do not know how many samples of each class will be provided. Another approach is to use a different criterion for adding new nodes. A threshold on the local error counter e can be used for inserting new nodes (Hamker, 2001). The threshold is associated with each node. If a node's local error counter e exceeds its threshold, the network is extended in the region of that node. The threshold is updated during the learning process. Hamker also introduces a stopping criterion which uses an estimate of the average local error.

We extend this approach to reduce the overlapping regions between networks. Each node u is associated with an adding threshold $t_u$. At the beginning, $t_u$ is assigned the same value for all nodes. Then, it is updated during the learning process. A high value of $t_u$ means that no more nodes should be inserted into the region of u. The value of $t_u$ increases if the region of u produces a false classification. A false classification is defined as a mapping from an input vector x to a region which does not belong to x's class. We update $t_u$ as follows (a code sketch is given at the end of this section):

1. Initialise $t_u = T$, the starting adding threshold, for all nodes.
2. Let $(x, y)$ be a training sample, where x is the input vector and y is its label.
3. Find the nearest network $G_i$ and the node $s_1$ of $G_i$ with the smallest distance to x:

   $i = \arg\min_{j \in [1..L]} d(x, G_j)$
   $s_1 = \arg\min_{v \in A_i} d(x, v)$

4. If there is a false prediction ($i \neq y$), update the adding criterion: $t_{s_1} = (1 + \beta) \cdot t_{s_1}$.

If an input vector x is mapped to $s_1$, we can rewrite the insertion criterion using the local error $e_{s_1}$ and the error threshold $t_{s_1}$: a new node is inserted into the region around $s_1$ if $e_{s_1} > t_{s_1}$. Instead of finding the node q with the maximum local error, we expand the node $s_1$ in this case.
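As a minimal illustration, the threshold update above can be sketched in Python. The `networks` list, the node attributes `w`, `e` and `t`, and the use of plain Euclidean distance (standing in for the thesis's separated weighted metric) are assumptions of this sketch, not structures defined in the thesis.

```python
import numpy as np

def update_adding_threshold(networks, x, y, beta=0.1):
    """One step of the adaptive insertion-threshold update (Section 4.2.2).

    Assumes each per-class network in `networks` has a non-empty list
    `nodes`, and each node carries a weight vector `w`, a local error
    counter `e`, and an adding threshold `t` initialised to T.
    """
    # d(x, G_j) is taken as the distance from x to G_j's closest node;
    # pick the network G_i and node s1 realising the overall minimum.
    i, s1 = min(
        ((j, min(g.nodes, key=lambda v: np.linalg.norm(x - v.w)))
         for j, g in enumerate(networks)),
        key=lambda pair: np.linalg.norm(x - pair[1].w),
    )
    if i != y:
        # False classification: raise the threshold, making future
        # insertions around s1 harder.
        s1.t *= 1.0 + beta
    # Insertion criterion: expand around s1 only when its accumulated
    # local error exceeds its (possibly raised) threshold.
    return s1.e > s1.t
```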
Chapter 5
Radial Basis Function Network

The Radial Basis Function (RBF) network (Moody & Darken, 1989) is a supervised learning method. In Section 5.1, we present the basic structure of an RBF network. In Section 5.2, we present an incremental learning version of the RBF network based on the GNG network.

5.1 Standard Radial Basis Function

An RBF network typically contains two layers: the hidden layer and the output layer (Figure 5.1). The hidden layer represents a set of local neurons, similar to the cells of a growing neural gas. Each neuron has a weight vector that indicates its position in the input space. A neuron is activated, through a Gaussian activation function, if the input vector is close enough to it. The neurons in the hidden layer are connected to the neurons in the output layer to create a structure similar to a perceptron layer. The main purpose of the perceptron layer is to produce a linear combination of the activations of the neurons in the hidden layer.

Figure 5.1: The structure of a Radial Basis Function network

Each neuron in the hidden layer covers a specific area of the input space. The distance from an input vector to each neuron is calculated using the Euclidean distance. In the learning step, by adjusting the weights of the neurons, the neurons that are activated by a given input can be controlled.

Training an RBF network can combine unsupervised learning with supervised learning. The positions of the neurons in the hidden layer can be found using a clustering algorithm such as k-means. After that, the weights connecting the hidden layer to the output layer can be found by training a perceptron layer with a gradient descent method. The two steps can also be carried out separately.

Let the neurons in the hidden layer have weight vectors $w_u$ in the input space V. The activation produced by an input vector x at neuron u is:

  $g(x, w_u, \theta) = e^{-\|x - w_u\|^2 / 2\theta}$    (5.1)

where $\|x - w_u\|$ is the Euclidean distance in the input space and $\theta$ is a parameter of the Gaussian function. The $\theta$ parameter controls the width of the Gaussian function and is an important factor in the activation of each neuron. Normally, it is fixed for all neurons u in the hidden layer. It is chosen by considering the distances between neurons in the input space and the number of neurons in the hidden layer.

The Gaussian function $g(x, w_u, \theta)$ can be normalised to guarantee that at least one neuron fires:

  $g'(x, w_u, \theta) = \dfrac{e^{-\|x - w_u\|^2 / 2\theta}}{\sum_{v} e^{-\|x - w_v\|^2 / 2\theta}}$    (5.2)

In this case, the neuron nearest to the input vector x always activates, even if the input vector lies outside the width of the neuron's receptive field. The output vector o is then defined as the weighted sum of the activations:

  $o_i = \sum_u a_{iu} \, g(x, w_u, \theta)$    (5.3)

where $a_{iu}$ is the weight connecting neuron i in the output layer to neuron u in the hidden layer. The error between the output vector o and the target vector y is computed as:

  $\Delta = \sum_i (y_i - o_i)^2$    (5.4)

Here, the gradient descent method is used to minimise the error by adjusting the weights $a_{iu}$:

  $a_{iu} = a_{iu} + \eta \, (y_i - o_i) \, g'(x, w_u, \theta)$    (5.5)

The $\eta$ parameter is the learning rate and is fixed during the learning process. Training stops when the error is low enough, when no significant improvement is being made, or based on a separate validation set.
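The following sketch condenses equations (5.1)-(5.5) into a single training step, assuming a fixed set of hidden centres. The function name and array layout are illustrative, and the normalised activations of (5.2) are used throughout for simplicity.

```python
import numpy as np

def rbf_step(centres, A, x, y, theta=4.5, eta=0.15):
    """One RBF forward pass and one gradient step on the output weights.

    centres : (n_hidden, n_dim) array of hidden-neuron positions w_u
    A       : (n_out, n_hidden) output weight matrix a_iu
    x, y    : input vector and one-of-L encoded target
    Returns the output vector o and the updated weight matrix.
    """
    # Eq. (5.1): Gaussian activation of each hidden neuron.
    d2 = np.sum((centres - x) ** 2, axis=1)
    g = np.exp(-d2 / (2.0 * theta))
    # Eq. (5.2): normalise so the nearest neuron always fires.
    g_norm = g / g.sum()
    # Eq. (5.3): linear combination in the output layer.
    o = A @ g_norm
    # Eq. (5.5): one gradient-descent step on the squared error (5.4).
    A = A + eta * np.outer(y - o, g_norm)
    return o, A
```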
5.2 Incremental Radial Basis Function

One of the weaknesses of the traditional RBF network model is the fixed number of neurons in the hidden layer. It is hard to determine how many neurons are appropriate for the input space of a problem, especially in life-long learning tasks. Furthermore, a fixed set of hidden neurons can have difficulty covering the whole space when the dimension of the input space becomes large.

Using a similar approach to the growing neural gas model, an incremental version of the Radial Basis Function network can be trained without knowing the number of neurons in the hidden layer in advance. The main difficulty in the incremental approach is computing the weights of the edges connecting the hidden layer to the output layer. In the traditional approach, a layer of perceptrons is built and the gradient descent method is used to train it until the error converges. Unfortunately, this cannot be applied in the incremental approach, because we do not know which set of inputs will be fed to the network. Besides, when a node is added to the hidden layer, it must be connected to the output layer, which can affect the already trained weights. The criterion for stopping gradient descent training is also not easy to define here, because we have neither an additional validation set nor a way to refer back to the errors of previous input samples. However, Fritzke proposed to use only one step of gradient descent instead of running gradient descent until the network converges.

Let L be the number of classes, G the GNG network of the hidden layer, and O the output layer, which contains L neurons. Each neuron $o_i$ in O is associated with a weight vector $a_i$ connecting the i-th neuron to all nodes in G. During the learning process, the size of $a_i$ increases as new nodes are added to G. The algorithm of the incremental RBF network is as follows (a sketch of the growing output layer is given after the list):

1. Initialise G as empty. Initialise $a_i = \{-1\}$, containing only a bias weight, for every neuron in the output layer O.
2. Let $(x, y)$ be the input vector and its label, using one-of-L encoding.
3. Adapt the representation of the input space in G using x. If a new node is inserted, add a new element with a random value to every $a_i$.
4. Calculate the Gaussian activations of all nodes in G and normalise them:

   $g(x, w_u, \theta) = e^{-\|x - w_u\|^2 / 2\theta}$
   $g'(x, w_u, \theta) = \dfrac{g(x, w_u, \theta)}{\sum_{v \in A} g(x, w_v, \theta)}$

5. Calculate the output activations and the error:

   $o_i = \sum_u a_{iu} \, g'(x, w_u, \theta)$,  $\Delta y_i = y_i - o_i$

6. Adjust the weights of output neuron i by $\Delta a_i$:

   $\Delta a_{iu} = \eta \, (y_i - o_i) \, g'(x, w_u, \theta)$
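To make the growing output layer concrete, here is a small sketch of how the output weights might be stored and extended when the GNG inserts a hidden node. The class name and the exact data layout (a leading constant-1 activation for the bias) are assumptions of this sketch; the bias initialisation and the random weights for new nodes follow the algorithm above.

```python
import numpy as np

class IncrementalRBFOutput:
    """Output layer of the incremental RBF sketch above.

    Weight rows grow whenever the underlying GNG inserts a hidden node.
    """

    def __init__(self, n_classes, eta=0.15):
        self.eta = eta
        # One bias weight (-1) per output neuron, as in step 1.
        self.A = -np.ones((n_classes, 1))

    def on_node_inserted(self):
        # Step 3: a new hidden node gets a random weight to every
        # output neuron, without disturbing the trained weights.
        new_col = np.random.randn(self.A.shape[0], 1)
        self.A = np.hstack([self.A, new_col])

    def train_step(self, g_norm, y):
        # g_norm: normalised activations, with a leading constant 1
        # for the bias, so len(g_norm) == self.A.shape[1].
        o = self.A @ g_norm                           # step 5
        self.A += self.eta * np.outer(y - o, g_norm)  # step 6
        return o
```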
Chapter 6
Experiment and Results

In this experiment, we apply the incremental learning algorithms above to daily activity recognition. The activities are segmented and annotated. We use a dataset from the WSU CASAS smart home project (Cook & Schmitter-Edgecombe, 2009) in our experiment. The dataset consists of sensor data collected over several months from the house of a volunteer adult. The residents in the home were a woman and a dog; the woman's children visited on several occasions. There are 33 sensors, including motion sensors, door sensors and temperature sensors. A motion sensor detects motion around its location and takes the value "ON" or "OFF". A door sensor recognizes door opening and closing activity and takes the value "OPEN" or "CLOSE". A temperature sensor senses a change in temperature beyond a specific threshold and takes a real value. There are roughly 400,000 sensor events. A large part of the dataset is annotated with the beginning and ending of activities: 2,310 activities are annotated, covering 15 different activity types. The most frequently appearing activities in the dataset are given in Table 6.1. The "Master" and "Guest" prefixes distinguish the same activity label between the resident and the visitors.

Table 6.1: Summary of the dataset

  Activity                  Number of samples
  Kitchen Activity          554
  Guest Bathroom            330
  Read                      314
  Master Bathroom           306
  Leave Home                214
  Master Bedroom Activity   117
  Watch TV                  114

We run our experiment with the four models described above: Fuzzy ARTMAP, the incremental RBF (RBF), the traditional growing neural gas model (GNG) and the improved growing neural gas model (new GNG). In RBF and GNG, the weight-adaptation parameters $\epsilon_b$ and $\epsilon_n$ are set to 0.2 and the adding criterion $\lambda$ is set to 10. Neither model is restricted in size. The learning rate $\eta$ of the RBF is set to 0.15 and the width $\theta$ of the Gaussian activation is set to 4.5. In the new GNG, $\epsilon_b$ and $\epsilon_n$ are the same as in the other two models; the default adding threshold T is set to 25.0 and the local threshold adaptation rate $\beta$ is set to 0.1. The two parameters $\lambda$ and T are chosen so that GNG and new GNG maintain a similar number of nodes during learning. The Fuzzy ARTMAP model (Carpenter et al., 1992) is used as a baseline for comparison; its vigilance parameter $\rho$ is set to 0.75.

The dataset is randomly divided into two equal parts: train and test. The first part is used to train the models and the second part is used for evaluation. Each part has around 1,100 samples. To simulate an incremental learning process in a real system, the train part is further divided into three equal parts: train1, train2 and train3. These parts are used to train the models incrementally. More specifically, there are three training phases. In the first phase, each sample in train1 is fed exactly once, in random order, to the model. In the second phase, the samples in train2 are used to continue training the model obtained after the first phase. In the third phase, the samples in train3 are used to continue training the model from the second phase. After each phase, the model is tested on the test data (the protocol is sketched in code below). The results are presented in Table 6.2.

Table 6.2: Accuracy of the four models

  Model          train1    train2    train3
  Fuzzy ARTMAP   68.74%    72.12%    72.03%
  RBF            58.79%    61.30%    57.40%
  GNG            70.56%    71.17%    72.47%
  new GNG        72.55%    74.81%    77.58%
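A minimal sketch of this three-phase protocol, assuming a model object that exposes single-sample `train_one(x, y)` and `predict(x)` methods (hypothetical names):

```python
import random

def evaluate_incremental(model, train, test, n_phases=3, seed=0):
    """Sketch of the three-phase incremental protocol of Chapter 6."""
    rng = random.Random(seed)
    phase_size = len(train) // n_phases
    accuracies = []
    for p in range(n_phases):
        # train1, train2, train3: each sample is fed exactly once,
        # in random order, continuing from the previous phase's model.
        phase = train[p * phase_size:(p + 1) * phase_size]
        rng.shuffle(phase)
        for x, y in phase:
            model.train_one(x, y)
        # After each phase the model is evaluated on the held-out test set.
        correct = sum(model.predict(x) == y for x, y in test)
        accuracies.append(correct / len(test))
    return accuracies
```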
It can be seen clearly from Table 6.2 that Fuzzy ARTMAP, GNG and new GNG improve their accuracy as they are provided with more training data. Of the three, new GNG shows the largest improvement and reaches the best accuracy, 77.58%, in the third training phase. Only the RBF decreases substantially when train3 is provided. In all three training phases, Fuzzy ARTMAP and GNG are approximately equal, while RBF has the lowest figures. This can be explained by the difference between the RBF model and the other ones: the RBF combines a local representation with a perceptron layer, and in incremental learning, training the perceptron layer until it converges is a difficult task. Using one step of gradient descent noticeably decreases the model's accuracy compared to the other three models. The hypercube representation of the Fuzzy ARTMAP model and the sphere representation of the GNG model result in similar accuracy. However, adding more constraints for controlling the overlapping regions in new GNG increases the accuracy fairly clearly (by about 5% with the train3 data).

Table 6.3 provides the F-measure scores of the four models in the third training phase. In the top half of the table, where the classes have a large number of training samples, the three models Fuzzy ARTMAP, GNG and new GNG all perform quite well; their F-measure scores do not differ much on these activity classes. This can be explained by the local representation approach of the three models: a large number of training samples results in a good representation of the corresponding part of the input space, so those activities can be recognized better. Among the three models, new GNG achieves the top score most often. In the bottom half of the table, where there are far fewer training samples, the performance of the three models decreases moderately. In particular, "Chores" and "Dining Activity" have very low F-measure scores. On the remaining activity classes, new GNG's performance is average, with scores between 0.5 and 0.7; GNG and Fuzzy ARTMAP perform lower.

Table 6.3: F-measure scores of the four models in the third training phase. The number in brackets is the number of training samples of each class. The activity labels are arranged in decreasing order of the number of training samples.

  Activity                       Fuzzy ARTMAP   RBF    GNG    new GNG
  Kitchen Activity (304)         0.82           0.78   0.88   0.88
  Guest Bathroom (166)           0.84           0.89   0.92   0.90
  Read (151)                     0.79           0.60   0.79   0.85
  Master Bathroom (136)          0.68           0.29   0.64   0.77
  Leave Home (117)               0.79           0.80   0.82   0.88
  Master Bedroom Activity (63)   0.59           0.37   0.50   0.54
  Watch TV (56)                  0.55           0.46   0.54   0.59
  Sleep (46)                     0.73           0.31   0.69   0.76
  Bed to Toilet (33)             0.50           0.28   0.26   0.47
  Desk Activity (26)             0.50           0.35   0.63   0.47
  Morning Meds (21)              0.38           0.62   0.44   0.71
  Dining Activity (12)           0.00           0.00   0.12   0.11
  Chores (8)                     0.00           0.00   0.00   0.08
  Eve Meds (8)                   0.11           0.00   0.25   0.73
  Meditate (8)                   0.36           0.00   0.57   0.57

To provide more detail about the capability to separate different classes, Tables 6.4, 6.5 and 6.6 compare the confusion matrices of two activity labels for the three models based on the GNG network. The two activities "Leave Home" and "Read" both have a large number of training samples; however, the patterns learned by the GNG-based models for the two activities overlap. The RBF model has 22 wrongly classified samples; 19 samples with the Leave Home label are classified as Read by the GNG model, while the new GNG produces only a few wrongly classified samples in total.

Table 6.4: Confusion matrix of the incremental RBF model between the activities Leave Home and Read

  Label        Leave Home   Read
  Leave Home   87           –
  Read         22           73

Table 6.5: Confusion matrix of the GNG model between the activities Leave Home and Read

  Label        Leave Home   Read
  Leave Home   93           –
  Read         19           116

Table 6.6: Confusion matrix of the new GNG model between the activities Leave Home and Read

  Label        Leave Home   Read
  Leave Home   88           –
  Read         –            140
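For reference, the per-class F-measure reported in Table 6.3 can be computed from a confusion matrix as below. The convention that rows are true labels and columns are predictions is an assumption of this sketch, and zero denominators are mapped to a score of 0.

```python
import numpy as np

def f_measure_per_class(confusion):
    """Per-class F1 score from a square confusion matrix (sketch)."""
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)                 # correctly classified samples
    col_sums = confusion.sum(axis=0)        # predicted counts per class
    row_sums = confusion.sum(axis=1)        # true counts per class
    precision = np.divide(tp, col_sums,
                          out=np.zeros_like(tp), where=col_sums > 0)
    recall = np.divide(tp, row_sums,
                       out=np.zeros_like(tp), where=row_sums > 0)
    denom = precision + recall
    return np.divide(2 * precision * recall, denom,
                     out=np.zeros_like(tp), where=denom > 0)
```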
Chapter 7
Conclusion

7.1 Summary

In this thesis, we presented an incremental approach to daily activity recognition in smart home systems. The main focus of the problem is to classify the data coming from low-level sensors into different types of activities. Although there are certain advantages to using low-level sensory data in activity recognition, its main drawback is the large variance of the activity patterns.

The proposed approach is based on an unsupervised learning method, namely the GNG network. The GNG network has a structure similar to the SOM network, but it is more flexible: new nodes and connections can be added to the GNG network during the learning process. We extended the GNG network to supervised problems in two ways: by using multiple networks and by creating an incremental version of the RBF network. The experiment was carried out on a real-life dataset. While the incremental version of the RBF network suffers from the changing weights, the multiple GNG networks approach performs quite well in comparison to the Fuzzy ARTMAP model. By changing the distance metric and preventing the constant insertion of new nodes in the overlapping regions, the improved version separates the different activity classes well.

7.2 Future Works

In the future, we plan to employ more efficient methods for extracting features from the sequence of sensor events. The method described in this thesis depends largely on the spatial properties of the activity patterns. It does not use the temporal information present in the sequence of sensor events; therefore, it can have difficulties classifying activities that usually happen in the same area. In addition, because the feature vector combines different types of categorical data, finding a good distance metric in the input space is difficult.
Bibliography

Anastasi, G., Conti, M., Falchi, A., Gregori, E., & Passarella, A. (2004). Performance measurements of motes sensor networks. In MSWiM '04: Proceedings of the 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems (pp. 174-181). ACM Press.

Brdiczka, O., Maisonnasse, J., & Reignier, P. (2005). Automatic detection of interaction groups. In Proceedings of the 7th International Conference on Multimodal Interfaces (pp. 32-36). New York, NY, USA: ACM.

Carpenter, G. A., Grossberg, S., Markuzon, N., Reynolds, J. H., & Rosen, D. B. (1992). Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 3, 698-713.

Chen, C., Das, B., & Cook, D. (2010). A data mining framework for activity recognition in smart environments. In Intelligent Environments (IE), 2010 Sixth International Conference on, 80-83.

Cook, D. J., Augusto, J. C., & Jakkula, V. R. (2009). Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing, 5, 277-298.

Cook, D. J., & Schmitter-Edgecombe, M. (2009). Assessing the quality of activities in a smart environment. Methods of Information in Medicine, 48, 480-485.

Cover, T., & Hart, P. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13, 21-27.

Doctor, F., Hagras, H., & Callaghan, V. (2005). A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 35, 55-65.

Freund, Y., & Schapire, R. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. Computational Learning Theory, 904, 23-37.

Fritzke, B. (1993). Growing cell structures - a self-organizing network for unsupervised and supervised learning. Neural Networks, 7, 1441-1460.

Fritzke, B. (1995). A growing neural gas network learns topologies. In Advances in Neural Information Processing Systems (pp. 625-632). MIT Press.

Gottfried, B., Guesgen, H. W., & Hübner, S. (2006). Spatiotemporal reasoning for smart homes. In Designing Smart Homes (pp. 16-34). Berlin, Heidelberg: Springer-Verlag.

Hamker, F. H. (2001). Life-long learning cell structures - continuously learning without catastrophic interference. Neural Networks, 14, 551-573.

Hastie, T., & Tibshirani, R. (1996). Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, 607-616.

Helal, S., Mann, W., El-Zabadani, H., King, J., Kaddoura, Y., & Jansen, E. (2005). The Gator Tech smart house: A programmable pervasive space. Computer, 38, 50-60.

Joshi, P., & Kulkarni, P. (2012). Incremental learning: Areas and methods - a survey. International Journal of Data Mining and Knowledge Management Process.

Kohonen, T. (1989). Self-Organization and Associative Memory (3rd edition). New York, NY, USA: Springer-Verlag.

Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22-44.

Lühr, S., West, G., & Venkatesh, S. (2007). Recognition of emergent human behaviour in a smart home: A data mining approach. Pervasive and Mobile Computing, 3, 95-116.

MacQueen, J. B. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 281-297). University of California Press.

Madden, S., & Franklin, M. J. (2002). Fjording the stream: An architecture for queries over streaming sensor data. In Proceedings of the 18th International Conference on Data Engineering (pp. 555-). Washington, DC, USA: IEEE Computer Society.

Martinetz, T. (1993). Competitive Hebbian learning rule forms perfectly topology preserving maps. In Proc. ICANN'93, Int. Conf. on Artificial Neural Networks (pp. 427-434). London, UK: Springer.

Martinetz, T., & Schulten, K. (1991). A "neural-gas" network learns topologies. Artificial Neural Networks, 1, 397-402.

Moody, J., & Darken, C. J. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1, 281-294.

Mühlenbrock, M., Brdiczka, O., Snowdon, D., & Meunier, J.-L. (2004). Learning to detect user activity and availability from a variety of sensor data. In Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04) (pp. 13-). Washington, DC, USA: IEEE Computer Society.

Polikar, R., Byorick, J., Krause, S., Marino, A., & Moreton, M. (2002). Learn++: A classifier independent incremental learning algorithm for supervised neural networks.

Rashidi, P., & Cook, D. J. (2009). Keeping the resident in the loop: Adapting the smart home to the user.

Rashidi, P., & Cook, D. J. (2010). Mining sensor streams for discovering human activity patterns over time, 431-440.

Rashidi, P., Cook, D. J., Holder, L. B., & Schmitter-Edgecombe, M. (2011). Discovering activities to recognize and track in a smart environment. IEEE Transactions on Knowledge and Data Engineering, 23, 527-539.

Tapia, E. M., Intille, S. S., & Larson, K. (2004). Activity recognition in the home using simple and ubiquitous sensors. Pervasive Computing, 3001, 158-175.

Van Kasteren, T., Noulas, A., Englebienne, G., & Kröse, B. (2008). Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing (UbiComp '08), 344.

Wren, C. R., & Tapia, E. M. (2006). Toward scalable activity recognition for sensor networks. Networks, 3987, 168-185.

Youngblood, G. M., Cook, D. J., & Holder, L. B. (2005). Managing adaptive versatile environments. Pervasive and Mobile Computing, 1, 373-403.