
Learning LBSN data using graph neural network



HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

MASTER THESIS

Learning LBSN data using graph neural network

HOANG THANH DAT
Dat.HT202714M@sis.hust.edu.vn

School of Information and Communication Technology

Supervisor: Assoc Prof Huynh Quyet Thang
Supervisor's signature

School: Information and Communication Technology

May 26, 2022

SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

CONFIRMATION OF MASTER THESIS REVISION

Thesis author: Hoàng Thành Đạt
Thesis title: Learning LBSN data representations using graph neural networks (Học biểu diễn dữ liệu LBSN sử dụng mạng neural đồ thị)
Major: Data Science and Artificial Intelligence
Student ID: 20202714M

The author, the scientific supervisor and the thesis examination committee confirm that the author has corrected and supplemented the thesis according to the minutes of the committee meeting on 28/04/2022, with the following contents:
1) Corrected the abbreviation "Asso Prof" to "Assoc Prof".
2) Added a table of abbreviations before the Introduction.
3) Fixed several editing errors, especially in the description of Hypergraph Convolution (section 2.2.5) concerning the mathematical notation.
4) Added an illustrative example describing node degree and hyperedge degree in section 2.2.5.
5) Removed section 3.1 Introduction, whose content duplicated the preceding part, and replaced it with a short passage connecting the chapters.
6) Described the datasets in section 4.1.1 in more detail; added statistics on the numbers of hypernodes and hyperedges in the graphs and explained the value 168.
7) Added section 4.1.2 Implementation, describing the source code of the proposed method, the baselines, and the configuration of the experimental machines.
8)

May 23, 2022
Supervisor — Thesis author — CHAIRMAN OF THE COMMITTEE
(Form SĐH.QT9.BM11, issued 11/11/2014)

Graduation Thesis Assignment

Name: Hoang Thanh Dat
Phone: +84343407959
Email: Dat.HT202714M@sis.hust.edu.vn; thanhdath97@gmail.com
Class: 20BKHDL-E
Affiliation: Hanoi University of Science and Technology

I, Hoang Thanh Dat, hereby warrant that the work and presentation in this thesis were performed by myself under the supervision of Assoc Prof Huynh Quyet Thang. All the results presented in this thesis are truthful and are not copied from any other works. All references in this thesis, including images, tables, figures, and quotes, are clearly and fully documented in the bibliography. I take full responsibility for any copied content that violates school regulations.

Student
Signature and Name

Acknowledgement

I would like to express my gratitude to my primary supervisor, Assoc Prof Huynh Quyet Thang, who not only guided me throughout this project but also encouraged me during my years at university. I would also like to show my appreciation to Mr Huynh Thanh Trung, who read my numerous revisions and helped make sense of the confusion. I want to extend my special thanks to Dr Nguyen Quoc Viet Hung, who inspired and helped me in my research career. I would like to thank all the lecturers in the School of Information and Communication Technology, who provided valuable knowledge and experience during the Master program. I would also like to thank my friends and family, who supported me and offered deep insight into the study, especially Mr Tong Van Vinh and Mr Pham Minh Tam, who provided the experimental machines and helped expand the analysis for this work.

Abstract

Location-based social networks (LBSNs) such as Facebook and Instagram have emerged recently and attracted millions of people [1], allowing users to share their real-time experiences via checkins. LBSN data has become a primary source for various applications, from studying human mobility to social network analysis [2][3]. In LBSNs there are two essential tasks, friendship prediction and POI recommendation, which have been widely researched. While friendship prediction aims to
suggest the social relationships that will be formed in the future, POI recommendation predicts the location a user will visit at a given time. The two tasks are correlated: using the mobility data can greatly enhance friendship prediction performance [4], and vice versa. Traditional approaches often require expert domain knowledge, designing a set of hand-crafted features from user mobility data (e.g. co-location rates [5][3]) or user friendships (e.g. the Katz index [3][6]) and combining those features for downstream tasks. These approaches require huge human effort and domain expertise, yet lack generalizability to different applications [7]. Recent techniques capture the joint interactions between social relationships and user mobility by applying graph embedding techniques [8][9][10]. Graph embedding techniques embed the nodes into low-dimensional embedding spaces that can be transferred to downstream tasks, but they can only learn from pairwise relationships: they cannot handle the complex characteristics of a checkin and must divide each checkin into classical pairwise edges, which results in a loss of information. We observe that the LBSN graph is heterogeneous and indecomposable, so traditional techniques for classical graphs cannot capture the deep semantics in LBSN data. Recently, Graph Neural Networks (GNNs) have attracted wide attention [11][12] due to their capability of capturing the complex structural context in a graph. Traditional graph neural networks, however, can only learn pairwise relationships. Therefore, hypergraph convolution [13] was proposed to model the hypergraph and learn the n-wise proximity from hyperedges.

In this work, we propose HC-LBSN, a heterogeneous hypergraph convolution for LBSN tasks. Using the LBSN data, our method first constructs the LBSN heterogeneous hypergraph, which contains four types of nodes (user, time stamp, POI and category) and two types of hyperedges (friendship and checkin). We then apply several hypergraph convolution layers to capture the complex structural context. The embeddings of all nodes are learned in a unified vector space; however, we do not directly compare the similarity of nodes in the encoding space for downstream tasks, but stack decoding layers to transform the encoded node vectors into comparable vector spaces. We show that the two essential downstream tasks can be transformed into one problem, scoring hyperedges (see section 3.5 for more details), and therefore apply a common method for both tasks. In particular, for each hyperedge candidate, a hyperedge embedding is generated and passed into an N-tuplewise similarity function in order to measure its existence. Extensive experiments illustrate the improvement of our model over baseline approaches, proving that our method can capture the deep semantics in LBSN data and deal with the heterogeneity and indecomposability of the LBSN hypergraph. The analysis of hyperparameter sensitivity shows that future work should extend the balance parameters so that an appropriate value can be adjusted automatically for various datasets, and should also apply the graph attention mechanism. The results of this work can be applied to analyse and build features for a social network platform, such as suggesting friends and recommending tourist places. The proposed model can also be applied to learning on datasets in other domains due to its generalizability and lack of expert knowledge requirements.

Student
Signature and Name
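To make the construction concrete, the following is a minimal sketch (not the thesis code) of how the LBSN heterogeneous hypergraph described above can be stored as an incidence list; the tuple fields and helper names are hypothetical:

```python
# Minimal sketch of the LBSN heterogeneous hypergraph: four node types
# (user, time stamp, POI, category) and two hyperedge types (friendship,
# checkin). Every checkin becomes one 4-node hyperedge, every friendship
# a 2-node hyperedge. Field and helper names are hypothetical.
def build_lbsn_hypergraph(checkins, friendships):
    """checkins: iterable of (user, time_slot, poi, category) tuples;
    friendships: iterable of (user, user) pairs."""
    node_id = {}  # (node type, raw value) -> global node index

    def nid(node_type, value):
        return node_id.setdefault((node_type, value), len(node_id))

    hyperedges, edge_types = [], []  # incidence lists + hyperedge type tags
    for user, t, poi, cat in checkins:
        hyperedges.append([nid("user", user), nid("time", t),
                           nid("poi", poi), nid("category", cat)])
        edge_types.append("checkin")
    for u, v in friendships:
        hyperedges.append([nid("user", u), nid("user", v)])
        edge_types.append("friendship")
    return node_id, hyperedges, edge_types
```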
TABLE OF CONTENTS

CHAPTER 1. INTRODUCTION
1.1 Location-based social networks (LBSNs)
1.2 Research history
1.3 Research challenges
1.4 Our proposed method
1.5 Contributions and Thesis Outline
1.6 Selected Publications

CHAPTER 2. BACKGROUND
2.1 Learning on LBSNs data
2.2 Graph Embedding Techniques
2.2.1 Overview
2.2.2 Deepwalk
2.2.3 Graph Neural Networks
2.2.4 Heterogeneous graph learning
2.2.5 Hypergraph and Hypergraph convolution

CHAPTER 3. MULTITASK LEARNING FOR LBSNs USING HYPERGRAPH CONVOLUTION
3.1 Framework Overview
3.2 Notations and Definitions
3.3 LBSNs Hypergraph Construction
3.4 Hypergraph convolution
3.5 Loss function
3.6 Hyperedge embedding function
3.7 Optimization

CHAPTER 4. EXPERIMENTS
4.1 Setting
4.1.1 Datasets
4.1.2 Implementation
4.1.3 Downstream tasks and metrics
4.1.4 Baselines
4.1.5 Hyperparameter setting
4.2 End-to-end comparison
4.2.1 Friendship prediction
4.2.2 POI recommendation
4.3 The effectiveness of hyperedge embedding functions
4.4 Hyperparameter sensitivity
4.4.1 Checkin hyperedge weight
4.4.2 The number of hypergraph convolution layers

CHAPTER 5. CONCLUSION

LIST OF FIGURES

1.1 Example of LBSNs
2.1 Illustration of a general GNN inductive framework on a specific node (the red node) [12]
2.2 Example of a heterogeneous graph for a bibliographic network [33]
2.3 Example of a heterogeneous graph where nodes a and b should have similar embeddings; different node colors represent different node types
2.4 The difference between a simple graph (a) and a hypergraph (b) [13]
2.5 Example of node degrees on a simple graph (a) and a hypergraph (b)
2.6 Example of node degrees on a weighted simple graph (a) and a weighted hypergraph (b)
3.1 Illustration of the HC-LBSN framework
4.1 Friendship prediction performance of HC-LBSN (blue line) and other techniques on the four experimental datasets SP, KL, JK and IST
4.2 POI recommendation performance (Hit@3) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK
4.3 POI recommendation performance (Hit@5) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK
4.4 POI recommendation performance (Hit@10) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK
4.5 Friendship prediction performance on the SP, KL, JK and IST datasets with various hyperedge embedding functions
4.6 POI recommendation performance on the SP, KL, JK and IST datasets with various hyperedge embedding functions
4.7 Friendship prediction performance of HC-LBSN when increasing the checkin hyperedge weight on the four experimental datasets
4.8 POI recommendation performance of HC-LBSN when increasing the checkin hyperedge weight on the four experimental datasets
4.9 Friendship prediction performance of HC-LBSN when increasing the number of hypergraph convolution layers on the SP, JK and KL datasets
4.10 POI recommendation performance of HC-LBSN when increasing the number of hypergraph convolution layers on the SP, JK and KL datasets

Figure 4.2: POI recommendation performance (Hit@3) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK

Figure 4.3: POI recommendation performance (Hit@5) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK

... 0.036, while LBSN2Vec achieves only 0.005. On the JK dataset, our method HC-LBSN is almost 7 times better than LBSN2Vec (0.084 vs. 0.012 for HC-LBSN and LBSN2Vec, respectively). The results illustrate that our proposed method is very effective in capturing the deep semantics in the LBSN hypergraph.
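For reference, the Hit@K metric behind these comparisons (figures 4.2–4.4) can be computed as in the following generic sketch; this is not the thesis code and assumes one ground-truth POI per test checkin:

```python
def hit_at_k(ranked_pois, true_poi, k):
    """1 if the ground-truth POI appears among the top-k ranked candidates."""
    return int(true_poi in ranked_pois[:k])

def mean_hit_at_k(predictions, k):
    """predictions: list of (ranked candidate list, ground-truth POI) pairs."""
    hits = [hit_at_k(ranked, truth, k) for ranked, truth in predictions]
    return sum(hits) / len(hits)

# e.g. mean_hit_at_k(preds, 3) yields a Hit@3 score like those reported above.
```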
We analyse that the performance gain comes from the deep structure of the multi-layer hypergraph convolution and the N-tuplewise similarity function used for POI prediction. These two components enable capturing the structure of the original hypergraph without decomposing it into simpler graphs, unlike Deepwalk, Node2Vec and LBSN2Vec, whose methods are inherited from the random-walk-based mechanism.

Figure 4.4: POI recommendation performance (Hit@10) of HC-LBSN compared to other techniques on the experimental datasets IST, SP, KL and JK

4.3 The effectiveness of hyperedge embedding functions

In this section, we evaluate various hyperedge embedding functions g to examine how they influence our downstream tasks. The hyperedge embedding functions include Mean, Sum, FPool and Concatenation, which were introduced in section 3.6. For the FPool method, since the number of nodes in a friendship hyperedge is only 2, the weights are set by default to 0.5 for each node; we learn the assignment matrix for checkin hyperedges only. The other settings for this experiment are the same as in section 4.1.5, with the checkin weight set to 0.001 and the number of layers set to 4; we change only the hyperedge embedding function g. The comparison results for the different hyperedge embedding functions are shown in figures 4.5 and 4.6.
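As a rough sketch of the four functions g (section 3.6 holds the exact definitions; the tensor shapes and the FPool weighting below are assumptions), given the embeddings of the nodes in one hyperedge:

```python
import torch

def mean_embed(node_embs):  # node_embs: (n_nodes, d) tensor
    return node_embs.mean(dim=0)

def sum_embed(node_embs):
    return node_embs.sum(dim=0)

def concat_embed(node_embs):
    # Preserves each node's information, but requires a fixed hyperedge
    # arity (4 nodes for checkins, 2 for friendships), which holds here.
    return node_embs.reshape(-1)  # (n_nodes * d,)

def fpool_embed(node_embs, assign_weights):
    # FPool-style pooling with an assignment over nodes: fixed 0.5/0.5 for
    # 2-node friendship hyperedges, learned for checkin hyperedges.
    return (assign_weights.unsqueeze(-1) * node_embs).sum(dim=0)
```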
Figure 4.5: Friendship prediction performance on the SP, KL, JK and IST datasets with various hyperedge embedding functions

In figure 4.5, the Concatenation method outperforms the other hyperedge embedding functions, including Mean, Sum and FPool. This is because the Mean and Sum functions cause information loss, while the Concatenation method fully preserves the information of the different nodes in a checkin.

Figure 4.6: POI recommendation performance on the SP, KL, JK and IST datasets with various hyperedge embedding functions

In particular, for the Precision@10 value on the SP dataset, the Concatenation method achieves 0.052, while the Mean function produces only 0.045 and the Sum function 0.041, with similar trends for the Recall@10 and F1@10 metrics. For the FPool algorithm in this experiment, as mentioned above, the weight of each user node in friendship hyperedges is set to 0.5, so the performance of FPool is expected to be similar to that of the Mean function. However, the difference in checkin hyperedge embedding affects the unified node embeddings and thus influences the embedding quality for the friendship prediction task. Compared to Mean and Sum, the performance of FPool is comparable: the Precision@10 on the SP dataset is 0.047, higher than Mean with 0.045 and Sum with 0.041, but it is lower than Mean and Sum on the JK dataset. The difference may be due to randomness.

In figure 4.6, the performance gap between Concatenation, FPool, Mean and Sum is very large. Particularly, on the SP dataset, the Concatenation method achieves 0.0828 in the Hit@3 metric, while Mean reaches only 0.0073, Sum 0.0149, and FPool 0.0063. The gain of Concatenation over Mean is about 11.3 times. For the JK dataset, Concatenation produces 0.0839, about 5.8 times higher than the Sum algorithm. The performance of FPool on this task is even lower than Mean and Sum on the SP and JK datasets, but higher on the IST dataset; the reason is instability during training and randomness. Since the performance of Mean, Sum and FPool is very low in all cases, there is no evidence that these methods can capture the information of checkin hyperedges.

This experiment has shown that different hyperedge embedding functions greatly impact the downstream tasks. The Concatenation method is simple but effective for generating hyperedge embeddings and thus outperforms the baseline approaches, namely the common graph pooling techniques Mean, Sum and FPool. The baseline methods cannot capture the hyperedge embedding since they are designed for homogeneous domains where all nodes have the same type. In the end, the experiment illustrates that in order to capture the heterogeneity of a hyperedge, the hyperedge embedding cannot lie in the same embedding space as its node embeddings but must be transformed into a different embedding space.

4.4 Hyperparameter sensitivity

In this section, we evaluate the importance of each component in our model by changing the hyperparameter values. Various hyperparameters influence the embedding quality, such as the checkin hyperedge weight and the number of hypergraph convolution layers. In the next sections, we provide experiments for each important hyperparameter to evaluate its influence on the final embedding quality.

4.4.1 Checkin hyperedge weight

We first evaluate the influence of friendship hyperedges and checkin hyperedges on the embedding quality by changing the checkin hyperedge weight. Since the hyperedge weight balances the importance of social relationships and user mobility in the embedding quality, adjusting the checkin hyperedge weight changes the effect of friendship edges and checkin edges on the final embeddings. In theory, a higher checkin hyperedge weight may lead to better results for the POI recommendation task and, vice versa, a small checkin hyperedge weight may lead to poor results on this task. However, this assumption may not hold, due to the importance of friend relationships, which can help predict future checkins. To analyse our hypothesis, we examine the influence of the checkin hyperedge weight by changing its value in the range from 0.0 to 1.0 while keeping the original hyperparameter setting of section 4.1.5. The results are reported in figures 4.7 and 4.8: figure 4.7 illustrates the influence of the checkin hyperedge weight on the friendship prediction task, and figure 4.8 reflects its importance for the other downstream task.
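In implementation terms, this balance can be expressed as a per-hyperedge weight vector passed to the convolution. Here is a sketch, assuming hyperedges carry type tags as in the construction sketch earlier and using PyTorch Geometric's HypergraphConv (the thesis builds on PyTorch Geometric [51], but its exact layer is not shown in this excerpt):

```python
import torch
from torch_geometric.nn import HypergraphConv

def make_hyperedge_weights(edge_types, checkin_weight=0.001):
    # Friendship hyperedges keep weight 1.0; checkin hyperedges are scaled
    # by the balance hyperparameter studied in this section.
    return torch.tensor([checkin_weight if t == "checkin" else 1.0
                         for t in edge_types])

# Usage sketch: x is a (num_nodes, 64) feature matrix and hyperedge_index
# the (2, nnz) node-to-hyperedge incidence in COO form.
conv = HypergraphConv(in_channels=64, out_channels=64)
# out = conv(x, hyperedge_index,
#            hyperedge_weight=make_hyperedge_weights(edge_types))
```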
In figure 4.7, we evaluate our proposed model HC-LBSN on the four experimental datasets SP, JK, KL and IST with three metrics: Precision@10, Recall@10 and F1@10. As observed, for all metrics the performance tends to decrease as the checkin weight increases from 0.0 to 1.0.

Figure 4.7: Friendship prediction performance of HC-LBSN when increasing the checkin hyperedge weight on the four experimental datasets

Figure 4.8: POI recommendation performance of HC-LBSN when increasing the checkin hyperedge weight on the four experimental datasets

For the SP dataset, the F1@10 increases slightly from 0.050 to 0.054 when the checkin weight rises from 0.0 to 0.0001; the gap is small, but it shows that using the checkin information may lead to better prediction of future friendships. The Precision@10 on the KL dataset decreases very slightly, from 0.062 to 0.061, when the checkin hyperedge weight changes from 0.0001 to 0.01; on the SP dataset it also decreases by a small gap, from 0.054 to 0.051. The same phenomenon occurs for the F1@10 metric, whose values reduce slightly when changing the checkin weight from 0.0001 to 0.01. However, adjusting this hyperparameter from 0.01 to 1.0 significantly influences the embedding quality. Particularly, on the SP dataset, the Precision@10 reduces from 0.051 to 0.032 between checkin weights of 0.01 and 1.0, a performance loss of nearly 40%. On the KL dataset the loss is about 65%, from 0.061 down to 0.021. The Precision@10 values thus reduce dramatically when adjusting the checkin weight in this range. In conclusion, the friendship prediction performance changes drastically as the checkin weight ranges from 0.0 to 1.0.

Another observation concerns the POI recommendation task: when the checkin weight is 0, the Hit@K values on all datasets are low, illustrating that user relationships alone cannot predict future locations. The POI recommendation performance varies across the four experimental datasets: the Hit@3 values on the SP and IST datasets rise when increasing the checkin weight from 0.0001 to 1.0, while on the JK dataset they tend to decrease. Particularly, on the SP dataset, changing the checkin weight from 0.0001 to 0.01 mildly elevates the performance from 0.090 to 0.115, a gain of about 27%. However, on the JK dataset, increasing the checkin weight from 0.0001 to 1.0 degrades the POI prediction performance: while the Hit@3 is 0.095 with a checkin weight of 0.0001, it decreases to 0.054 when the checkin weight is set to 1.0. The loss is huge, more than 43%. This indicates that friend relationships influence the POI recommendation accuracy: using the friendship information can greatly enhance the prediction of a user's future locations. It shows that the two downstream tasks cannot be separated but should be learned simultaneously; the experiment on the JK dataset strengthens this theory.

Through the two experiments on the downstream tasks, we find that our model is more stable with a checkin edge weight in the range from 0.0001 to 0.01. The value 0.001 is thus chosen to balance the importance of social relationships and user mobility, and the other experiments are performed with the checkin hyperedge weight set to 0.001.

4.4.2 The number of hypergraph convolution layers

Another key hyperparameter is the number of hypergraph convolution layers. In a graph neural network, stacking more layers enables capturing deeper and more complex relationships between nodes; however, too many layers cause the vanishing/exploding gradient problem [47]. Choosing an appropriate number of layers is therefore necessary. In traditional graph neural networks for pairwise edges, the number of GNN layers is usually kept small, and we expect the same to hold for hypergraph convolution, since it is also a kind of GNN. In this section, we vary the number of hypergraph convolution layers to analyse the effect of a deeper GNN structure on the final embedding quality for the downstream tasks. Due to the limitations of our physical devices, we only run experiments on the SP, JK and KL datasets, as the memory requirement increases when stacking more convolution layers. We keep the original hyperparameter setting of section 4.1.5 and adjust only the number of layers. The results for the friendship prediction task and POI recommendation are shown in figures 4.9 and 4.10. Here we evaluate Precision@10, Recall@10 and F1@10 only, because higher values of K produce similar trends in this experiment.
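A sketch of the model shape behind this depth experiment, again with PyTorch Geometric's HypergraphConv standing in for the thesis implementation (the LeakyReLU nonlinearity [45] between layers is an assumption, as the exact activations are not shown in this excerpt):

```python
import torch.nn as nn
from torch_geometric.nn import HypergraphConv

class StackedHypergraphConv(nn.Module):
    """L hypergraph convolution layers: each layer widens the receptive
    field by one hop, so depth L covers the ego network of radius L."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            HypergraphConv(dim, dim) for _ in range(num_layers))
        self.act = nn.LeakyReLU()

    def forward(self, x, hyperedge_index, hyperedge_weight=None):
        for layer in self.layers:
            x = self.act(layer(x, hyperedge_index,
                               hyperedge_weight=hyperedge_weight))
        return x
```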
Figure 4.9: Friendship prediction performance of HC-LBSN when increasing the number of hypergraph convolution layers on the SP, JK and KL datasets

Figure 4.10: POI recommendation performance of HC-LBSN when increasing the number of hypergraph convolution layers on the SP, JK and KL datasets

Figure 4.9 reflects the sensitivity to the number of hypergraph convolution layers: increasing the number of layers raises the friendship prediction performance. For instance, on the SP dataset the Precision@10 rises drastically from 0.033 to 0.056 as layers are added, a performance gain of nearly 70%; on the JK dataset this gain is more than two times, from 0.033 to 0.069. Similar trends are found for the Recall@10 and F1@10 metrics. This is because stacking more hypergraph convolution layers allows HC-LBSN to capture higher-order proximity: at each aggregation step, the vector representation of a central node is updated using information from the ego network of radius L, where L is the number of layers. More hypergraph convolution layers thus allow capturing information from a greater distance. However, too many layers may not lead to better embedding quality and can also cause the vanishing or exploding gradient problem [47]. This is observed at the largest depths: on the SP dataset, the gain is small beyond a few layers, with the Precision@10 moving from 0.052 to 0.056 at 5 layers and then decreasing to 0.053 at the deepest setting; on the JK dataset, the F1@10 score remains at 0.101 up to 5 layers and then drastically decreases to 0.088 at the deepest setting. The experiment illustrates that the number of layers significantly influences the embedding quality for the friendship prediction task.

For the POI recommendation task, the performance rises over the first few layers and later decreases. Particularly, on the SP dataset, the F1@10 score rises from 0.153 to 0.173 as layers are added and then drops dramatically to 0.103 at the deepest setting. On the JK dataset, the F1@10 score also rises first, from 0.115 to 0.167 with up to 4 layers, and then falls quickly to 0.0476 at the largest L. The performance loss at the deepest setting on the JK dataset is huge, nearly 250%.

Through the experiments on hyperparameter sensitivity, we see that using information from social relationships can greatly enhance the embedding quality for predicting human mobility; in fact, people often visit places their friends have been to, and the experiment on the checkin hyperedge weight strengthens this hypothesis. The two hyperparameters, the checkin hyperedge weight and the number of layers, strongly influence the embedding quality for both downstream tasks, friendship prediction and POI recommendation. These hyperparameters should therefore be chosen carefully when learning on a different dataset, and a comprehensive experiment is necessary in order to configure a good model for the required downstream tasks.

The experiments on hyperparameter sensitivity also open a research direction for the future: automatically adjusting the balance parameters during the training process. Another challenging task is to apply graph attention to LBSN data. In theory, the attention mechanism cannot be directly implemented on the LBSN hypergraph: hypergraph attention is only feasible when the vertex set and the hyperedge set are from the same homogeneous domain, since only in that case are the similarities between vertices and hyperedges directly comparable [13]. The node embeddings must therefore be transformed into another homogeneous domain in order to apply graph attention. Such a task is challenging because the significant number of pairwise relationships requires a large latent space for the attention mechanism.
The attention mechanism also has to deal with the indecomposability of hyperedges.

CHAPTER 5. CONCLUSION

In this work, we address representation learning on location-based social networks (LBSNs), a challenging task with various applications in many domains, such as studying human mobility and social network analysis. To overcome the challenges of LBSN data, we develop a method that supports multitask learning and deals with the indecomposability and heterogeneity of the LBSN hypergraph, thereby enhancing the embedding quality for the two essential downstream tasks, friendship prediction and POI recommendation. To this end, we develop a novel approach that applies several hypergraph convolution layers, capturing the deep semantic structures in the LBSN hypergraph. We also implement N-tuplewise similarity functions that deal with the indecomposability and measure the existence of a hyperedge, so a common method is applied to evaluate both tasks, friendship prediction and POI recommendation. The proposed method can be trained in a multi-task, end-to-end fashion and can thus boost the performance of various downstream tasks at the same time. In order to balance the influence of social relationships and user mobility, we normalize the messages from the neighborhood in the message-passing mechanism, adjusting the checkin hyperedge weight to control the influence of the different edge types.

We provide extensive experiments on the two downstream tasks over four real-world datasets, comparing our technique with the baseline approaches. End-to-end comparison experiments show that our proposed method outperforms the other techniques on both tasks, especially POI recommendation. We also examine the effectiveness of different hyperedge embedding functions, showing that the simple concatenation method is the most effective function for generating hyperedge embeddings without causing loss of information. Finally, we show the impact of various hyperparameters, such as the checkin hyperedge weight and the number of layers, on the performance of the downstream tasks.

The result of this work is meaningful for analysing social networks. It can also be used to build features for an LBSN platform, such as suggesting friends and recommending tourist places. The proposed model is not only applicable to LBSNs but can also be exploited in other domains due to its generalizability and lack of expert domain requirements. For example, in text summarization [35], a hypergraph can represent different relationships between words, sentences and documents, such as word-to-sentence, word-to-document and sentence-to-document relationships; HC-LBSN could then be applied to capture the deep semantic structures in such a word graph and enhance the embedding quality.

In future work, we want to extend our framework to better handle the mutual influence between social relationships and human mobility. In particular, implementing a hypergraph attention mechanism on LBSNs seems promising due to the capability of graph attention in many downstream tasks [22]. However, the attention method has to deal with the heterogeneity of LBSN data, with its various kinds of nodes and edge types. Since it can only be applied to homogeneous domains, a suggested solution is to transform the node embeddings into different embedding spaces, but a future method will also have to overcome the indecomposability of hyperedges. Moreover, we expect future work to also aim at automatically adjusting the balance
parameters (e.g. the checkin hyperedge weight) during the training process.

REFERENCES

[1] P. Kefalas, P. Symeonidis, and Y. Manolopoulos, "A graph-based taxonomy of recommendation algorithms and systems in LBSNs," IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 3, pp. 604–622, 2015.
[2] E. Cho, S. A. Myers, and J. Leskovec, "Friendship and mobility: User movement in location-based social networks," in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011, pp. 1082–1090.
[3] S. Scellato, A. Noulas, and C. Mascolo, "Exploiting place features in link prediction on location-based social networks," in KDD, 2011, pp. 1046–1054.
[4] D. Yang, B. Qu, J. Yang, and P. Cudré-Mauroux, "LBSN2Vec++: Heterogeneous hypergraph embedding for location-based social networks," IEEE Transactions on Knowledge and Data Engineering, 2020.
[5] D. Wang, D. Pedreschi, C. Song, F. Giannotti, and A.-L. Barabási, "Human mobility, social ties, and link prediction," in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011, pp. 1100–1108.
[6] L. Katz, "A new status index derived from sociometric analysis," Psychometrika, vol. 18, no. 1, pp. 39–43, 1953.
[7] D. Yang, B. Qu, J. Yang, and P. Cudré-Mauroux, "Revisiting user mobility and social relationships in LBSNs: A hypergraph embedding approach," in The World Wide Web Conference, 2019, pp. 2147–2157.
[8] M. Ou, P. Cui, J. Pei, Z. Zhang, and W. Zhu, "Asymmetric transitivity preserving graph embedding," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1105–1114.
[9] J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang, "Network embedding as matrix factorization: Unifying DeepWalk, LINE, PTE, and node2vec," in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 2018, pp. 459–467.
[10] B. Perozzi, R. Al-Rfou, and S. Skiena, "DeepWalk: Online learning of social representations," in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014, pp. 701–710.
[11] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[12] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," arXiv preprint arXiv:1706.02216, 2017.
[13] S. Bai, F. Zhang, and P. H. Torr, "Hypergraph convolution and hypergraph attention," Pattern Recognition, vol. 110, p. 107637, 2021.
[14] Y. Zheng, "Location-based social networks: Users," in Computing with Spatial Trajectories, Springer, 2011, pp. 243–276.
[15] Q. V. H. Nguyen, K. Zheng, M. Weidlich, et al., "What-if analysis with conflicting goals: Recommending data ranges for exploration," in ICDE, 2018, pp. 89–100.
[16] F. K. Glückstad, "Terminological ontology and cognitive processes in translation," in Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, 2010, pp. 629–636.
[17] A. Sadilek, H. Kautz, and J. P. Bigham, "Finding your friends and following them to where you are," in Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, 2012, pp. 723–732.
[18] C. Song, Z. Qu, N. Blumm, and A.-L. Barabási, "Limits of predictability in human mobility," Science, vol. 327, no. 5968, pp. 1018–1021, 2010.
[19] L. Backstrom and J. Kleinberg, "Romantic partnerships and the dispersion of social ties: A network analysis of relationship status on Facebook," in CSCW, 2014, pp. 831–841.
[20] A. Grover and J. Leskovec, "node2vec: Scalable feature learning for networks," in KDD, 2016, pp. 855–864.
[21] X. Wang, D. Bo, C. Shi, S. Fan, Y. Ye, and P. S. Yu, "A survey on heterogeneous graph embedding: Methods, techniques, applications and sources," arXiv preprint arXiv:2011.14867, 2020.
[22] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, "Graph attention networks," arXiv preprint arXiv:1710.10903, 2017.
[23] C. Lan, Y. Yang, X. Li, B. Luo, and J. Huan, "Learning social circles in ego networks based on multi-view network structure," TKDE, vol. 29, no. 8, pp. 1681–1694, 2017.
[24] D. Zhang, J. Yin, X. Zhu, and C. Zhang, "Network representation learning: A survey," IEEE Transactions on Big Data, vol. 6, no. 1, pp. 3–28, 2018.
[25] S. Cao, W. Lu, and Q. Xu, "GraRep: Learning graph representations with global structural information," in Proceedings of the 24th ACM International Conference on Information and Knowledge Management, 2015, pp. 891–900.
[26] C. Yang, Z. Liu, D. Zhao, M. Sun, and E. Chang, "Network representation learning with rich text information," in Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
[27] D. Wang, P. Cui, and W. Zhu, "Structural deep network embedding," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1225–1234.
[28] T. N. Kipf and M. Welling, "Variational graph auto-encoders," arXiv preprint arXiv:1611.07308, 2016.
[29] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[30] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," Advances in Neural Information Processing Systems, vol. 26, 2013.
[31] J. Lee, I. Lee, and J. Kang, "Self-attention graph pooling," in International Conference on Machine Learning, PMLR, 2019, pp. 3734–3743.
[32] X. Wang, H. Ji, C. Shi, et al., "Heterogeneous graph attention network," in The World Wide Web Conference, 2019, pp. 2022–2032.
[33] X. Kong and S. Y. Philip, Graph Classification in Heterogeneous Networks, 2018.
[34] L. Yao, C. Mao, and Y. Luo, "Graph convolutional networks for text classification," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 7370–7377.
[35] D. Wang, P. Liu, Y. Zheng, X. Qiu, and X. Huang, "Heterogeneous graph neural networks for extractive document summarization," arXiv preprint arXiv:2004.12393, 2020.
[36] S. Hou, Y. Ye, Y. Song, and M. Abdulhayoglu, "HinDroid: An intelligent Android malware detection system based on structured heterogeneous information network," in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1507–1515.
[37] H. Chen, H. Yin, W. Wang, H. Wang, Q. V. H. Nguyen, and X. Li, "PME: Projected metric embedding on heterogeneous networks for link prediction," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 1177–1186.
[38] B. Hu, Y. Fang, and C. Shi, "Adversarial learning on heterogeneous information networks," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 120–129.
[39] Y. Dong, N. V. Chawla, and A. Swami, "metapath2vec: Scalable representation learning for heterogeneous networks," in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 135–144.
[40] Y. He, Y. Song, J. Li, C. Ji, J. Peng, and H. Peng, "HeteSpaceyWalk: A heterogeneous spacey random walk for heterogeneous information network embedding," in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 639–648.
[41] D. Zhang, J. Yin, X. Zhu, and C. Zhang, "MetaGraph2Vec: Complex semantic path augmented heterogeneous network embedding," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, 2018, pp. 196–208.
[42] K. Tu, P. Cui, X. Wang, F. Wang, and W. Zhu, "Structural deep embedding for hyper-networks," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[43] W. Zhang, Y. Fang, Z. Liu, M. Wu, and X. Zhang, "mg2vec: Learning relationship-preserving heterogeneous graph representations via metagraph embedding," IEEE Transactions on Knowledge and Data Engineering, 2020.
[44] S. Agarwal, K. Branson, and S. Belongie, "Higher order learning with graphs," in Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 17–24.
[45] A. L. Maas, A. Y. Hannun, A. Y. Ng, et al., "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, Citeseer, vol. 30, 2013.
[46] J. Jo, J. Baek, S. Lee, D. Kim, M. Kang, and S. J. Hwang, "Edge representation learning with hypergraphs," Advances in Neural Information Processing Systems, vol. 34, 2021.
[47] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?" arXiv preprint arXiv:1810.00826, 2018.
[48] H. V. Pham, D. H. Thanh, and P. Moore, "Hierarchical pooling in graph neural networks to enhance classification performance in large datasets," Sensors, vol. 21, no. 18, p. 6070, 2021.
[49] Z. Ying, J. You, C. Morris, X. Ren, W. Hamilton, and J. Leskovec, "Hierarchical graph representation learning with differentiable pooling," Advances in Neural Information Processing Systems, vol. 31, 2018.
[50] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[51] M. Fey and J. E. Lenssen, "Fast graph representation learning with PyTorch Geometric," arXiv preprint arXiv:1903.02428, 2019.
[52] D. Liben-Nowell and J. Kleinberg, "The link-prediction problem for social networks," JASIST, vol. 58, no. 7, pp. 1019–1031, 2007.
