1. Introduction
Recommendation systems suggest items to users within a system. Items can include products on an online shopping platform, movies in a theater, books in a library, or tourist attractions. Recommendation systems must look for similarities both between users and among items to achieve good results. Based on these similarities, the system selects a set of items with high scores and recommends them to the intended users. With today's explosive growth of information, both the number of users and the number of items increase rapidly. Classical recommendation models based on matrix factorization (MF) [1] and singular value decomposition (SVD) [2] are quickly overloaded because the user-item interaction matrix grows rapidly and its values are constantly updated over time. Furthermore, the system must compute quickly enough to make timely recommendations. These factors make it difficult for information mining models to process data from a variety of sources quickly.
Graph neural network-based recommendation models compute feature embeddings for both users and items, as well as the level of interaction between a specific user and a candidate item. The collaborative filtering mechanism is applied during the propagation process, which has the outstanding advantage of recognizing similarities between users directly, when they share many interacted items, and indirectly, when iterative propagation creates a chain of interactions that extends from user to item to user. The size of the embeddings is chosen so that information loss after learning stays within an acceptable threshold. Not only is the interaction between users and items investigated, but contextual information also influences the overall recommendation results. The first significant source of contextual information is the user's real-life relationships, or friendship information obtained from online social networking platforms. According to [3, 4, 5], advice from friends carries more weight than other sources of information, so friendship data should be incorporated into the overall data retrieval process. Items are correlated if they belong to the same catalogue or a closely related product group. When items are geographical locations within a city or area, their physical distance reflects the correlation observed during a visit by a specific user: when a user is staying at a specific location, nearby locations are obviously more likely to be visited than distant ones.
In this article, we propose a model that transforms the input data into information matrices describing user-item interactions, correlations between users, and similarities between items. These matrices interact during propagation to capture the characteristics of each user and item and to refine the model-based recommendation system. In the following sections, we also run experiments on popular datasets, compare the results across several measures, and discuss the findings.
2. Related work
In this section, we present the important foundations of the proposed model, including publications on collaborative filtering (CF), graph convolution networks (GCN), social recommendation models, location-based applications, and related data processing techniques.
2.1 Social-aware recommendation system
Social recommendation systems were developed when online social networks first appeared and were integrated into e-commerce platforms. These systems provide users with personalized recommendations based on their actions, such as clicks, shares, and comments. Additional data from social networks is also analyzed to determine the influence of the users' friends [3]. The vast amount of information on social networks influences users, causing them to share the same concerns as their friends [6, 7].
ContextMF is an MF-based algorithm that uses contextual information in the recommendation process; it combines social and rating data using probabilistic matrix factorization [8]. TrustSVD [4], built on SVD++ [9], includes the impact of friends in social relationships as additional implicit feedback for the observed users. GraphRec is a graph neural network-based algorithm developed for social recommendation tasks; it employs a novel graph neural network framework to capture interactions and opinions in the user-item graph simultaneously, and it coherently models two graphs with different strengths [10]. SocialLGN is a graph neural network-based algorithm designed specifically for social recommendation; it learns user and item embeddings by propagating information over the user's social network [11] and has been shown to outperform other cutting-edge social recommendation methods on several benchmark datasets. The relationship between social-relation data and a user's influence on shopping decisions is difficult to model because it changes over time and depends on the type of item. The Socially-aware Self-supervised Tri-training (SEPT) model [12] uses three graph encoders to derive supervisory signals from user social relationships, user correlations on similar items, and information sharing to improve recommendation predictions; its self-supervised learning process increases the effectiveness of the recommendations.
2.2 Graph convolution network
A graph neural network (GNN) is a type of neural network that operates on data structured as graphs [13]. Graphs are data structures consisting of nodes and edges; a node can be either a user or an item, and edges define the relationships between nodes. Inheriting the advantages of GNNs, GCNs further explore the correlation between nodes over multiple hops by repeating the CF process with propagation on the graph. GCNs apply filters to the graph, inspecting nodes and edges that can be used to classify other nodes in the data. GraphSAGE is a graph convolutional network that learns node embeddings by gathering data from a node's immediate neighborhood [14]. It selects a fixed number of neighbors for each node and aggregates their features to compute the node embedding, and it can be applied to a variety of tasks, including node classification, link prediction, and graph classification. NGCF is a graph convolutional network designed for CF tasks [15]. NGCF employs a neural network to learn node embeddings by combining data from a node's local neighborhood and the global graph structure; it computes the node embedding as a weighted sum of the embeddings of its neighbors. LightGCN is a simplified version of NGCF designed to be more efficient and scalable [16]. This model learns node embeddings through a simple graph convolution operation that propagates information from a node's neighbors and computes the node embedding in a similar fashion to NGCF. LightGCN has demonstrated state-of-the-art performance on several benchmark datasets while being significantly faster than other methods. In a recent publication [17], we used a GCN model to combine user friendship signals and interaction data; in that model, we calculated users' influence weights and used them as attention signals, accelerating convergence during the collaborative filtering process.
2.3 Location-based recommendation system
As shown in Fig. 1, a location-based recommendation system examines the user's check-in history and recommends other suitable locations. Famous or popular places are referred to as points of interest (POIs). Traditional POI recommendation methods treat POIs as items and apply techniques such as CF and MF [18] to the interaction matrix between users and POIs. However, a POI is more than just an item; it is also associated with geographical information and the time of the user's appearance at that location, so models should consider including these pieces of information in the signal collection process. The PACE model [19] is a semi-supervised deep learning model based on CF that mitigates the effects of data sparsity by predicting users' preferences and contexts. Furthermore, since trust between users influences recommendation results, the model in [20] incorporated a trust impact factor among users. MF was also used in Rank-GeoFM [21], which combined time and location data into user check-in sequences and calculated location scores. Geo-Teaser [22] predicts locations by combining a temporal POI embedding model with a geographical preference ranking. GeoIE [23] modeled POI-specific geographical influence using the inner product of geo-influence and geo-susceptibility vectors. When a user visits multiple POIs over time, the visited POIs tend to form spatial clusters, consistent with Tobler's first law of geography [24]. These clusters also exhibit multi-center characteristics, indicating separate groups of visited locations. A two-dimensional kernel density estimation (KDE) was used to learn this spatial clustering phenomenon [25, 26] and estimate the number of clusters. GeoSAN [27] proposed a self-attention-based POI encoder integrated into a self-attention network to survey the user's travel history.
Fig. 1. An overview of a location-based recommendation system that aggregates signals from collaborative filtering of common locations and geographical distance using the Haversine equation.
3. Proposed model
In this section, we propose a POI recommendation model that includes data preprocessing operations, input matrix construction, GCN propagation processes, loss functions, and prediction methods for recommendations. Table 1 defines the notation used in this publication.
Table 1. Notation in equations and model.
3.1 Data preprocessing
We propose Algorithm 1 for removing users with fewer than k check-ins from the dataset. Datasets extracted from social networking platforms contain information about users' interactions with items as well as the characteristics of the items in the system. In location-based recommendation systems, each location is described by a longitude and latitude value. The user-item interaction data is sparse because each user interacts with only a few items. Therefore, only users with check-ins at k or more locations are retained, ensuring that outliers do not skew the recommendation results.
Algorithm 1. Remove outlier items in the original dataset
This method, known as k-core filtering, is commonly used in publications [15, 16]. In our experiments we use a 10-core setting, as is common in similar work. In addition, using the location data we can calculate the correlation between each pair of locations based on their geographical distance, since geographic distance is one of the most important factors influencing users' decisions to move. The constant selected_ratio in Algorithm 1 can be used to adjust the ratio of users to locations in the dataset after outliers are removed; we can also select a ratio equal to that of the original dataset. A minimal sketch of the filtering step is given below.
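The following is a minimal sketch of the k-core filtering described above, assuming the check-in records are loaded into a pandas DataFrame with user_id and location_id columns; the column names and the pandas representation are our assumptions, and the selected_ratio adjustment of Algorithm 1 is omitted.

```python
import pandas as pd

def k_core_filter(checkins: pd.DataFrame, k: int = 10) -> pd.DataFrame:
    """Iteratively drop users and locations with fewer than k check-ins.

    `checkins` is assumed to have `user_id` and `location_id` columns.
    The loop repeats because removing sparse users can push locations
    below the threshold, and vice versa.
    """
    while True:
        user_counts = checkins.groupby("user_id")["location_id"].size()
        loc_counts = checkins.groupby("location_id")["user_id"].size()
        keep_users = user_counts[user_counts >= k].index
        keep_locs = loc_counts[loc_counts >= k].index
        filtered = checkins[
            checkins["user_id"].isin(keep_users)
            & checkins["location_id"].isin(keep_locs)
        ]
        if len(filtered) == len(checkins):  # fixed point reached: nothing more to drop
            return filtered
        checkins = filtered
```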
3.1.1 Location correlation by distance
A geographic location is a position on Earth defined by two coordinates: longitude and latitude. POIs are well-known public geographic locations, such as attractions, airports, bus stations, museums, and restaurants. When a user visits a POI, he or she can check in using mobile apps and online social networking platforms; the geographic coordinates of the POI are recorded by the mobile device's built-in GPS. The geographical distance between two POIs influences users' decisions about which POIs to visit next. The Haversine formula calculates the geographical distance between two locations given their longitude and latitude, as represented in (1).
\(\begin{align}d=2 r \sin ^{-1}\left(\sqrt{\sin ^{2}\left(\frac{\varphi_{2}-\varphi_{1}}{2}\right)+\cos \varphi_{1} \cdot \cos \varphi_{2} \cdot \sin ^{2}\left(\frac{\lambda_{2}-\lambda_{1}}{2}\right)}\right)\end{align}\) (1)
where:
• d is the distance between the two locations,
• r is the radius of the earth, 6371 km,
• φ1, φ2 are the latitudes of locations 1 and 2, respectively,
• λ1, λ2 are the longitudes of locations 1 and 2, respectively.
With a dataset of m locations, we create a matrix D of dimensions m×m in which each element Di,j is the distance d between POI i and POI j computed with (1). This matrix is used to calculate the geographical distance embedding \(e_d\) in our proposed GCN model.
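A minimal NumPy sketch of building the distance matrix D with the Haversine formula (1); the vectorized form below assumes the POI coordinates are given as two arrays of latitudes and longitudes in degrees (the array names are ours).

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_matrix(lat: np.ndarray, lon: np.ndarray) -> np.ndarray:
    """Pairwise great-circle distances (km) between m POIs, as in Eq. (1)."""
    phi = np.radians(lat)[:, None]      # latitudes in radians, shape (m, 1)
    lam = np.radians(lon)[:, None]      # longitudes in radians, shape (m, 1)
    dphi = phi - phi.T                  # pairwise latitude differences
    dlam = lam - lam.T                  # pairwise longitude differences
    a = np.sin(dphi / 2.0) ** 2 + np.cos(phi) * np.cos(phi.T) * np.sin(dlam / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# D = haversine_matrix(poi_lat, poi_lon)   # m x m geographical distance matrix
```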
3.1.2 Location correlation by collaborative filtering
In addition to their geographic locations, POIs have other features that entice users to visit them. The number of users who have checked in at both locations can be used to determine their similarity. According to [28], a weight matrix representing similarity can be calculated as \(W = R R^{\top}\), where R is the user-item interaction matrix. The value of each element \(W_{i,j}\) is the number of users who checked in at both locations i and j. However, the frequency with which a user checks in must also be considered, as the more a user interacts with the system, the more reliable their contribution becomes; this information is not captured by W.
To take advantage of both the number of users who have checked in at the same locations and how frequently those users interact with the system, we propose using the Jaccard index to determine the logical correlation between locations. If many users visit both locations, we expect a high degree of similarity between them. As a result, the Jaccard index, illustrated in Fig. 2, is appropriate for measuring this similarity and can be calculated as (2).
\(\begin{align}\operatorname{Jaccard}\left(POI_{i}, POI_{j}\right)=\frac{\left|U_{i} \cap U_{j}\right|}{\left|U_{i} \cup U_{j}\right|}\end{align}\) (2)
where Ui and Uj represent the sets of users who have interacted with POIi and POIj, respectively.
Fig. 2. Calculating the Jaccard index from the two sets of locations checked in by user A and user B.
However, it is worth noting that propagation in the GCN model captures collaboration signals from the high-order connectivity graph; that is, the indirect influence of a user on an item is also considered, for example user - item - user - item, and so on [3, 15, 16, 17, 28]. In this process, weighting matrices were added to speed up signal collection. This technique has the side effect of overlearning, in which the system focuses on a small number of users or items and reduces the chances of low-interaction users or items being recommended. To address this issue, we propose two approaches: normalizing the matrix values or clustering them into different weighted categories. In this publication, we cluster the correlations into quartiles, as shown in Table 2. The matrix of converted values is then used to compute the Jaccard-index embedding in our proposed GCN model. A sketch of the Jaccard computation and quartile clustering is given after Table 2.
Table 2. Clustering the correlation between locations by Jaccard index.
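The following sketch shows one way to compute the Jaccard correlation between locations from the binary check-in matrix R and to cluster the values into quartiles; since Table 2's exact weight assignments are not reproduced here, the bucket weights below are placeholders.

```python
import numpy as np

def jaccard_matrix(R: np.ndarray) -> np.ndarray:
    """Pairwise Jaccard index between POIs from a binary user-item matrix R (n x m)."""
    R = (R > 0).astype(np.float64)
    intersection = R.T @ R                       # |U_i ∩ U_j|: users who visited both POIs
    counts = R.sum(axis=0)                       # |U_i| for each POI
    union = counts[:, None] + counts[None, :] - intersection
    with np.errstate(divide="ignore", invalid="ignore"):
        J = np.where(union > 0, intersection / union, 0.0)
    return J

def quartile_weights(J: np.ndarray) -> np.ndarray:
    """Replace raw Jaccard values by quartile-based weights (placeholder scheme)."""
    nonzero = J[J > 0]
    q1, q2, q3 = np.quantile(nonzero, [0.25, 0.5, 0.75])
    W = np.zeros_like(J)
    W[(J > 0) & (J <= q1)] = 0.25
    W[(J > q1) & (J <= q2)] = 0.5
    W[(J > q2) & (J <= q3)] = 0.75
    W[J > q3] = 1.0
    return W
```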
3.1.3 Social relation
As social networking platforms have evolved, users not only post or check in to places but also interact with one another and can become friends on the same or other platforms. Once users become friends, they can receive reviews, advice, and interactions from others in their friend group; these interactions are consistently more influential than those with strangers [3, 4, 5, 29]. Furthermore, social networking platforms tend to expand a user's network of friends in order to attract users and increase the time they spend on the platform.
In our proposed model, the friendship relations are extracted from the social networking platform, represented as an undirected graph, and stored as a matrix S defined by (3). This matrix is used to compute the embedding of online social friendship relationships, \(e_s\), in our proposed GCN model.
\(\begin{align}S_{i, j}=\left\{\begin{array}{ll}1, & \text{if } user_i \text{ and } user_j \text{ are friends} \\ 0, & \text{otherwise}\end{array}\right.\end{align}\) (3)
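A small sketch of building the symmetric matrix S of (3) from a list of friendship pairs; in practice a sparse representation would be preferred for large user sets, and the input format here is our assumption.

```python
import numpy as np

def build_social_matrix(friend_pairs, n_users: int) -> np.ndarray:
    """Symmetric friendship matrix S as in Eq. (3).

    `friend_pairs` is an iterable of (user_i, user_j) index pairs.
    """
    S = np.zeros((n_users, n_users), dtype=np.float32)
    for i, j in friend_pairs:
        S[i, j] = 1.0
        S[j, i] = 1.0          # store the relation as an undirected edge
    return S
```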
3.2 Our proposed GCN model
In this section, we propose a model for deconstructing and analyzing information from the inputs, resulting in a prediction matrix as the output. An overview of our proposed model is shown in Fig. 3.
Fig. 3. Overview of our proposed GCN model.
Important components of the model include:
• Matrix R for recording users checking in to locations: In recommendation systems, the interaction between users and items can be represented in two ways: implicitly and explicitly. Implicit datasets store the interaction in binary form, whereas explicit datasets store the user's rating of the item during the interaction. E-commerce systems frequently allow users to rate or provide feedback on items they have purchased or viewed.
• The information enrichment matrices D, J, and S introduced in the previous section: matrix D contains the geographic distances between locations, matrix J contains the correlation between locations measured by the Jaccard index, and matrix S contains the friendship data extracted from online social networking platforms. Note that the friendship matrix is optional, as not all e-commerce platforms or recommendation systems collect this information.
• Embeddings in the GCN layers: embeddings reduce the dimensionality of the input information matrices while minimizing information loss. The size of the embedding is passed to the model as a parameter, and the embeddings are initialized with the Xavier method [30]; a minimal initialization sketch follows this list. We discuss the embeddings in detail in the following sections.
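As a minimal illustration of this initialization step, the sketch below creates one trainable embedding table per node type and applies Xavier uniform initialization; the embedding size of 64 is only an example value.

```python
import torch.nn as nn

EMBED_DIM = 64          # embedding size, passed to the model as a parameter

def init_embeddings(n_users: int, n_items: int, dim: int = EMBED_DIM):
    """Create user and item embedding tables with Xavier (Glorot) uniform init."""
    user_emb = nn.Embedding(n_users, dim)
    item_emb = nn.Embedding(n_items, dim)
    nn.init.xavier_uniform_(user_emb.weight)
    nn.init.xavier_uniform_(item_emb.weight)
    return user_emb, item_emb
```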
Algorithm 2 describes in detail the calculation steps of the GCN model with K embedding layers. To capture the signals, the user and item embeddings are split at each iteration; they are then combined and stacked to form the final embedding, which is passed to the next iteration as the ego embedding.
Algorithm 2. Propagation by Graph Convolution Network
3.2.1 The check-in matrix A
To realize the propagation of high-order connectivity, the check-in matrix R must be arranged in Laplacian form [15]. A is then a square matrix of size (n+m)×(n+m), where n and m are the numbers of users and locations in the dataset, respectively, and 0 denotes the all-zero square matrix. The structure of matrix A is illustrated in Fig. 4.
Fig. 4. The check-in matrix layout enables concurrent propagation updates for embeddings of users and items.
The Laplacian form of R can be calculated as (4).
\(\begin{align}\begin{array}{l}A=\left[\begin{array}{cc}0 & R \\ R^{T} & 0\end{array}\right] \\ \tilde{A}=D^{-\frac{1}{2}} A D^{-\frac{1}{2}}\end{array}\end{align}\) (4)
where D is the diagonal degree matrix (not to be confused with the distance matrix of Section 3.1.1) and 0 is the all-zero matrix. A sparse-matrix sketch of this construction follows.
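A minimal SciPy sketch of the construction in (4), assuming the check-in matrix R is available as a sparse matrix; the degree matrix is built from the row sums of A.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(R: sp.csr_matrix) -> sp.csr_matrix:
    """Build A = [[0, R], [R^T, 0]] and its symmetric normalization, Eq. (4)."""
    A = sp.bmat([[None, R], [R.T, None]], format="csr")   # (n+m) x (n+m) block matrix
    deg = np.asarray(A.sum(axis=1)).flatten()             # node degrees
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0                 # isolated nodes get weight 0
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return (D_inv_sqrt @ A @ D_inv_sqrt).tocsr()           # A tilde
```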
3.2.2 Propagation process for user embedding
With the feature signals captured from the input matrices as in (5), our model propagates the user embedding \(e_u^{(k)}\) and item embedding \(e_i^{(k)}\) through the light graph convolution model [16] at every k-th layer. After computing each layer, we obtain the embeddings \(e_{ua}^{(k+1)}\) and \(e_s^{(k+1)}\), which contain the signals of high-order propagation. The embedding \(e_{ua}^{(k+1)}\) keeps the user-item interaction signals in matrix A, while \(e_s^{(k+1)}\) keeps the social signals between users in matrix S. The matrix S is optional because some e-commerce platforms do not store user friendship information.
\(\begin{align}\begin{aligned} e_{u a}^{(k+1)} & =\sum_{i \in \mathrm{N}_{u}^{A}} \frac{1}{\sqrt{\left|\mathrm{N}_{u}^{A}\right|\left|\mathrm{N}_{i}^{A}\right|}} e_{i}^{(k)} \\ e_{s}^{(k+1)} & =\sum_{s \in \mathrm{N}_{u}^{S}} \frac{1}{\sqrt{\left|\mathrm{N}_{u}^{S}\right|\left|\mathrm{N}_{s}^{S}\right|}} e_{u}^{(k)}\end{aligned}\end{align}\) (5)
where \(|N_q^X|\) denotes the number of neighbors of node q in matrix X, with X ∈ {A, S}.
The foundation of (5) is the symmetric normalization term used in most GCN models [15, 16, 17, 29]. The (k+1)-th user embedding is then obtained by combining the two components, as in (6).
\(e_u^{(k+1)} = \text{COMBINATION}_K\left(e_{ua}^{(k+1)}, e_s^{(k+1)}\right)\) (6)
After K iterations of the propagation process, we obtain K user embeddings. The final user embedding is calculated as in (7).
\(\begin{align}e_{u}=\frac{1}{K} \sum_{k=1}^{K} e_{u}^{(k)}\end{align}\) (7)
The resulting embedding is the mean of the embeddings across all layers, as in (7); this pooling function can be replaced by others such as sum, maximum, or median.
3.2.3 Propagation process for item embedding
The item embedding is calculated by (8) in parallel with the user embedding propagation.
\(\begin{align}\begin{array}{l}e_{l o c}^{(k+1)}=\sum_{l o c \in \mathrm{N}_{i}^{C}} \frac{1}{\sqrt{\left|\mathrm{N}_{i}^{C}\right|\left|\mathrm{N}_{c}^{C}\right|}} e_{i}^{(k)} \\ e_{d}^{(k+1)}=\sum_{d \in \mathrm{N}_{i}^{D}} \frac{1}{\sqrt{\left|\mathrm{N}_{i}^{D}\right|\left|\mathrm{N}_{d}^{D}\right|}} e_{i}^{(k)} \\ e_{i u}^{(k+1)}=\sum_{u \in \mathrm{N}_{i}^{A}} \frac{1}{\sqrt{\left|\mathrm{N}_{i}^{A}\right|\left|\mathrm{N}_{u}^{A}\right|}} e_{u}^{(k)}\end{array}\end{align}\) (8)
At the end of each iteration, the item embedding is combined by (9), where the combination can be a sum or a weighted sum of the component embeddings.
\(e_i^{(k+1)} = \text{COMBINATION}_K\left(e_{loc}^{(k+1)}, e_d^{(k+1)}, e_{iu}^{(k+1)}\right)\) (9)
The resulting item embedding is obtained by (10).
\(\begin{align}e_{i}=\frac{1}{K} \sum_{k=1}^{K} e_{i}^{(k)}\end{align}\) (10)
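To illustrate Algorithm 2 and the propagation in (5)-(10), the sketch below runs K layers over the normalized check-in graph and the auxiliary graphs. The COMBINATION operator is taken to be a plain sum, the auxiliary matrices are assumed to be symmetrically normalized like à in (4), and neighbor aggregation is written as sparse matrix products; these are our simplifications, not the exact formulation of the paper.

```python
import torch

def propagate(e_user, e_item, A_hat, S_hat, C_hat, D_hat, K=3):
    """Simplified K-layer propagation sketch (Algorithm 2, Eqs. (5)-(10)).

    e_user: (n, d) and e_item: (m, d) initial (ego) embeddings.
    A_hat:  (n+m, n+m) normalized check-in graph from Eq. (4).
    S_hat:  (n, n) normalized social graph; C_hat, D_hat: (m, m) normalized
            Jaccard-correlation and geographical-distance graphs.
    COMBINATION is a plain sum; final embeddings are layer means, Eqs. (7) and (10).
    """
    n = e_user.shape[0]
    ego_u, ego_i = e_user, e_item
    user_layers, item_layers = [], []
    for _ in range(K):
        side = torch.sparse.mm(A_hat, torch.cat([ego_u, ego_i], dim=0))
        e_ua, e_iu = side[:n], side[n:]          # user-item signals (Eqs. 5 and 8)
        e_s = torch.sparse.mm(S_hat, ego_u)      # social signal (Eq. 5)
        e_loc = torch.sparse.mm(C_hat, ego_i)    # CF-based item correlation (Eq. 8)
        e_d = torch.sparse.mm(D_hat, ego_i)      # distance-based correlation (Eq. 8)
        ego_u = e_ua + e_s                       # Eq. (6): COMBINATION as a sum
        ego_i = e_loc + e_d + e_iu               # Eq. (9)
        user_layers.append(ego_u)
        item_layers.append(ego_i)
    e_u = torch.stack(user_layers).mean(dim=0)   # Eq. (7)
    e_i = torch.stack(item_layers).mean(dim=0)   # Eq. (10)
    return e_u, e_i
```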
3.2.4 Convolution on GNN
GNN models implement the message-passing technique by extracting signals from the inputs and aggregating them into output embeddings [31, 32]. With a GCN, the output embedding continues to propagate through the graph's structure, resulting in collaborative filtering over both users and items.
• Extract signals: signals are propagated from users to users by (11) and from items to users by (12).
\(m_{u \leftarrow u} = e_u + e_s\) (11)
\(\begin{align}m_{u \leftarrow i}=\frac{1}{\sqrt{\left|N_{u}\right|\left|N_{i}\right|}} *\left(e_{i}+e_{l o c}+e_{d}\right)\end{align}\) (12)
• Combine signals: all messages received from the neighbors of user u are combined to refine the user embedding, as in (13); a sketch of this step follows the list. The LeakyReLU activation function passes messages encoding positive and slightly negative signals [33]. The correlation of users is thus obtained from both collaborative filtering and the additional information matrices J, D, and S.
\(e_u = \text{LeakyReLU}\left(m_{u \leftarrow u} + \sum_{i \in N_u} m_{u \leftarrow i}\right)\) (13)
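As a sketch of the extract-and-combine step above, the function below refines one user's embedding from the self message of (11), the item messages of (12), and the LeakyReLU aggregation of (13); the degree values passed in are assumed to be precomputed from the check-in matrix.

```python
import torch
import torch.nn.functional as F

def refine_user_embedding(e_u, e_s, item_ids, e_i, e_loc, e_d, user_deg, item_deg):
    """One message-passing refinement for a single user (Eqs. (11)-(13)).

    item_ids: indices of the locations in N_u (checked in by the user).
    user_deg: scalar |N_u|; item_deg: tensor of |N_i| for each item in item_ids.
    """
    m_self = e_u + e_s                                     # Eq. (11): user-to-user message
    coeff = 1.0 / torch.sqrt(user_deg * item_deg)          # symmetric normalization term
    msgs = coeff.unsqueeze(1) * (e_i[item_ids] + e_loc[item_ids] + e_d[item_ids])  # Eq. (12)
    return F.leaky_relu(m_self + msgs.sum(dim=0))          # Eq. (13): combine and activate
```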
3.2.5 Optimization on the prediction score
After several propagation iterations, the output embedding matrix E∗ converges, and the prediction score between user u and location i can be calculated by (14).
\(\begin{align}\widehat{y}_{ui}=e_{u}^{\top} e_{i}\end{align}\) (14)
We propose a sampling function and prediction method to evaluate the precision and recall of our model. The sampling process includes both a positive and a negative sampler: the positive sampler selects locations the user has previously interacted with, whereas the negative sampler selects locations the user has never interacted with. In the evaluation phase, the locations chosen by the positive sampler are used to evaluate precision, while the locations chosen by the negative sampler affect the recall.
Bayesian personalized ranking (BPR) is a suitable choice for the loss function, as it is one of the most effective ranking methods for datasets with implicit feedback [34]. We use two sets of samples: \(\Omega_{ui}^{+}\) for observed check-ins and \(\Omega_{uj}^{-}\) for unobserved check-ins. The loss function is implemented as (15).
\(\begin{align}\operatorname{Loss}_{BPR}=\sum_{\Omega_{ui}^{+}} \sum_{\Omega_{uj}^{-}}-\ln \sigma\left(\widehat{y}_{ui}-\widehat{y}_{uj}\right)+\lambda\|\Phi\|_{2}^{2}\end{align}\) (15)
where Φ denotes the embedding parameters E∗, σ(·) is the sigmoid function, and λ controls the regularization strength.
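A minimal sketch of the BPR loss in (15) over a batch of (user, observed location, unobserved location) triples; the regularization is applied to the embedding parameters Φ, and the λ value below is only an example.

```python
import torch
import torch.nn.functional as F

def bpr_loss(e_u, e_i_pos, e_i_neg, params, reg_lambda=1e-4):
    """BPR loss of Eq. (15) for a batch of sampled triples.

    e_u, e_i_pos, e_i_neg: (batch, d) embeddings of users, observed check-ins,
    and sampled unobserved check-ins; `params` are the embeddings regularized by λ.
    """
    pos_scores = (e_u * e_i_pos).sum(dim=1)    # y_hat_ui from Eq. (14)
    neg_scores = (e_u * e_i_neg).sum(dim=1)    # y_hat_uj
    loss = -F.logsigmoid(pos_scores - neg_scores).mean()
    reg = reg_lambda * sum(p.norm(2).pow(2) for p in params)
    return loss + reg
```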
4. Experiments and results
To validate our analysis and the proposed model, we conducted experiments on widely used location-based datasets and compared the empirical results with baseline models: BPR-MF, NGCF, LightGCN, and WiGCN.
4.1 Datasets description
We conduct experiments on BrightKite, NYC, and Gowalla. Each dataset consists of a check-in record file containing user_id, location_id, and check-in_time, and a friendship relation file in which each record consists of two user_ids who are friends on a social platform. Some additional files provide more details about the users or locations. The statistics of all datasets are given in Table 3.
Table 3. Statistics of the datasets.
• BrightKite: a location-based social networking service that allowed users to check in and share their locations, as well as post ratings and comments. The friendship network has over 50 thousand nodes and over 214 thousand edges and was collected via the public API. The network was originally collected with directed relations, but we rebuilt it with undirected edges where friendship existed in both directions. This dataset is available in the SNAP project [35].
• NYC: The NYC Open Data is a collection of data sets that provide information about different aspects of New York City, such as transportation, education, health, the environment, and culture [36]. New York City agencies and other partners publish this data, which is freely available to the public. The data can be used to analyze and understand the city's operations, as well as to create new applications and services for the public.
• Gowalla: one of several web-based applications that incorporate location-based social networking [35]. Users can check in and share public locations with their friends. Gowalla's friendship network is represented as an undirected graph.
4.2 Baseline models
We use the following state-of-the-art models as baselines to compare with our proposed model.
• BPR-MF [34]: A personalized ranking model based on a generic optimization criterion, the maximum posterior estimator, derived from a Bayesian problem analysis. The model is equipped with a generic optimization learning algorithm that was created using stochastic gradient descent and bootstrap sampling.
• NGCF [15]: This is one of the most effective GCN models. It propagates embeddings with multiple iterations to capture high-order connectivity in the interaction graph before stacking them on the output. The latent vectors contain the collaborative signal, which leads to higher precision.
• LightGCN [16]: This model simplifies NGCF by keeping only neighborhood aggregation for collaborative filtering; it removes the feature transformation weight matrices and nonlinear activation functions. The user and item embeddings on the interaction graph are learned with linear propagation, and the resulting embedding is a weighted sum of the embeddings learned at all layers.
• WiGCN [28]: The model uses a weighted matrix in the propagation process to strengthen the signals. This matrix represents users' influence on other users by calculating the common items between them. This results in more data collection propagation and improved recommendation performance.
4.3 Evaluation criteria
It is critical to select the appropriate evaluation measure for each algorithm [37]. A common evaluation method is to divide a dataset into a training set (typically 80 percent of the data) and a test set (the remaining 20 percent). The algorithm is then trained on the training set to make predictions, which are evaluated on the test set. The difference between the actual data value and the predicted result indicates the accuracy of the experimental algorithm; this error can be represented by the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). In addition to MAE and RMSE, precision, recall, scalability, learning time, memory consumption, and interpretability are important to consider when evaluating a recommender system.
In implicit datasets, user check-ins at locations are recorded in binary format. Algorithms therefore use classification accuracy measures, with precision and recall being the most used metrics [38, 39]. Precision is the proportion of predicted check-ins that are correct on the test set, whereas recall is the algorithm's sensitivity, i.e., the proportion of check-ins in the test set that are successfully retrieved. Precision and recall are calculated as in (16).
\(\begin{align}Precision=\frac{True \; positive}{True \; positive + False \; positive}\end{align}\) (16)
\(\begin{align}Recall = \frac{True \; positive}{True \; positive + False \; negative}\end{align}\)
where:
• True positives are predicted check-ins of users at locations that exist in the test set,
• False positives are predicted check-ins that do not exist in the test set,
• False negatives are check-ins in the test set that were not predicted.
Furthermore, the discounted cumulative gain (DCG) is a measure that assigns a label to each result [40]. It accumulates a gain function G applied to the label of each result in the result vector, scaled by a discount function D based on the result's rank. The DCG is then normalized by dividing it by the DCG of an ideal result vector I, yielding the normalized discounted cumulative gain (NDCG), as in (17).
\(\begin{align}NDCG@K = \frac{DCG@K}{IDCG@K}\end{align}\) (17)
In our experiments, we use the precision and recall measures in (16) and the NDCG score in (17), with cut-offs of 5 and 10 items (Top-5 and Top-10). A per-user sketch of these metrics is given below.
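The sketch computes Precision@K and Recall@K from (16) and NDCG@K from (17) for a single user with a binary gain; averaging over users and the exact gain and discount choices follow common practice rather than the paper's exact implementation.

```python
import numpy as np

def topk_metrics(scores: np.ndarray, ground_truth: set, k: int = 10):
    """Precision@K, Recall@K, and NDCG@K for one user, per Eqs. (16)-(17).

    `scores` holds predicted scores for every candidate location;
    `ground_truth` is the set of location indices held out in the test set.
    """
    top_k = np.argsort(-scores)[:k]                          # highest-scored K locations
    hits = np.array([1.0 if loc in ground_truth else 0.0 for loc in top_k])
    precision = hits.sum() / k
    recall = hits.sum() / max(len(ground_truth), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()        # binary gain, log2 discount
    ideal_hits = min(len(ground_truth), k)
    idcg = (1.0 / np.log2(np.arange(2, ideal_hits + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg
```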
4.4 Overall result comparison
We summarize the experimental results in Table 4. Because the NYC dataset lacks friendship data, POI-3 returns no results for it. We divide the table into two parts, Top-5 and Top-10, with 5 and 10 selected locations for testing, respectively. As more information matrices are added, precision and recall improve.
Table 4. Overall performance comparisons of LBRS.
4.5 Ablation studies
We repeat the experiments with different parameters to determine the impact of each component on system results. This also helps us explain the machine learning model more clearly.
4.5.1 The influence of each matrix
To assess the impact of these factors on the model, we created three variations, which are detailed below and summarized in Table 5.
Table 5. Performance comparisons of the proposed model variants.
• POI-1: Starting from the collaborative filtering model using GCN, we add one of the location correlation matrices: either the correlation computed from the geographical distance of each pair of locations with the Haversine equation (1), or the correlation of locations based on CF with the Jaccard index (2).
• POI-2: We incorporate into the proposed model a combination of the location correlation based on CF (the number of users who checked in at both locations of each pair) and the location correlation by geographical distance.
• POI-3: We incorporate the social link matrix derived from online social networking platforms into our POI-2 model.
4.5.2 Computational speed
On e-commerce platforms, the computation speed of the recommendation model is critical. A model must not only achieve good precision and recall but also converge within a limited number of epochs. In previous GCN models, precision and recall increase only gradually because the iterations take time to propagate signals through the Laplacian-form matrix A. In our proposed model, the additional information matrices help the embeddings receive signals and rapidly reach states specific to users and locations. We show the increase in precision and recall over the number of epochs for three models, LightGCN, POI-1, and POI-3 (or POI-2 when social links are not available), on the NYC dataset in Fig. 5 and the BrightKite dataset in Fig. 6.
Fig. 5. Recall and precision versus the number of epochs on the NYC dataset.
Fig. 6. Recall and precision versus the number of epochs on the BrightKite dataset.
4.5.3 The size of embedding
We tested our POI-2 model on the NYC dataset with embedding sizes of 32, 64, 96, 128, and 256. The trade-off for accuracy is the amount of memory used to store the matrices during computation. We conclude that 64 is an appropriate size for the datasets in this publication. Because the original matrices are sparse with implicit binary data, the embedding size has little effect on experiments with the other baseline models. Fig. 7 shows the difference in recall and precision for the different embedding sizes.
Fig. 7. Recall and precision for several embedding sizes in the POI-3 model on the BrightKite dataset.
4.5.4 Number of layers in GCN
The advantage of GCN models over plain GNNs is that they repeat the signal propagation process multiple times; we refer to the number of repetitions as the number of layers in the proposed model. The weights and embeddings in the model are reused at each iteration, increasing accuracy. However, if repetition occurs too often, overfitting can occur and the model's accuracy decreases. We ran tests on the NYC and BrightKite datasets with 1, 2, 3, and 4 layers. Fig. 8 shows the precision for each number of layers on the NYC dataset using three models: NGCF, LightGCN, and POI-2.
Fig. 8. Comparison of the precision measurement after each propagation iteration on the NYC dataset.
With three layers, the model achieves the highest precision; with four layers, accuracy tends to decrease. This conclusion is consistent with the discussion of LightGCN in [16], and the majority of published GCN models use three layers [15, 17, 29].
5. Conclusion
Experimental results show that correlations from both the user and item sides contribute to the feature embeddings and interact with one another during the propagation process. Furthermore, the proposed model's input information blocks are designed as modules, making the model easily customizable and explainable. Mining geographical distance is becoming popular in recommendation systems as users move around with smartphones equipped with GPS positioning and continuous tracking. However, location recommendation requires further investigation because, unlike traditional recommendation, the location a user wishes to visit is closely related to the chain of recently visited locations. Instead of treating all locations equally, a sequential recommendation system would evaluate the model based on the time order in which a user visits each location. The group recommendation problem is also challenging, as it must provide recommendations to many users when they visit as a group of friends.
References
- R. Mehta and K. Rana, "A review on matrix factorization techniques in recommender systems," in Proc. of 2017 2nd International Conference on Communication Systems, Computing and IT Applications (CSCITA), pp.269-274, 2017.
- Xun Zhou, Jing He, Guangyan Huang, Yanchun Zhang, "SVD-based incremental approaches for recommender systems," Journal of Computer and System Sciences, vol.81, no.4, pp.717-733, 2015. https://doi.org/10.1016/j.jcss.2014.11.016
- Ma, H., Zhou, D., Liu, C., Lyu, M. R. & King, I., "Recommender systems with social regularization," in Proc. of WSDM '11: Proceedings of the fourth ACM international conference on Web search and data mining, pp.287-296, 2011.
- Guo, G., Zhang, J. & Yorke-Smith, N., "A Novel Recommendation Model Regularized with User Trust and Item Ratings," IEEE Transactions on Knowledge And Data Engineering, vol.28, no.7, pp.1607-1620, 2016. https://doi.org/10.1109/TKDE.2016.2528249
- Jiang, M., Cui, P., Wang, F., Zhu, W. & Yang, S., "Scalable Recommendation with Social Contextual Information," IEEE Transactions on Knowledge and Data Engineering, vol.26, no.11, pp.2789-2802, 2014. https://doi.org/10.1109/TKDE.2014.2300487
- Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E. & Fowler, J. H., "A 61-million-person experiment in social influence and political mobilization," Nature., vol.489, pp.295-298, 2012. https://doi.org/10.1038/nature11421
- Qiu, J., Tang, J., Ma, H., Dong, Y., Wang, K. & Tang, J., "DeepInf: Modeling Influence Locality in Large Social Networks," 2018.
- Wang, J., Bagul, D. & Srihari, S., "ContextMF : A Fast and Context-aware Embedding Learning Method for Recommendation Systems," 2018.
- Koren, Y., "Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model," in Proc. of KDD '08: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp.426-434, 2008.
- Fan, W., Ma, Y., Li, Q., He, Y., Zhao, E., Tang, J., & Yin, D., "Graph Neural Networks for Social Recommendation," in Proc. of WWW '19: The World Wide Web Conference, pp.417-426, 2019.
- Liao, J., Zhou, W., Luo, F., Wen, J., Gao, M., Li, X. & Zeng, J., "SocialLGN: Light graph convolution network for social recommendation," Information Sciences, vol.589, pp.595-607, 2022. https://doi.org/10.1016/j.ins.2022.01.001
- Yu, J., Yin, H., Gao, M., Xia, X., Zhang, X. & Hung, N. Q. V., "Socially-Aware Self-Supervised Tri-Training for Recommendation," in Proc. of KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp.2084-2092, 2021.
- Ying, R., He, R., Chen, K., Eksombatchai, P., Hamilton, W. L. & Leskovec, J., "Graph Convolutional Neural Networks for Web-Scale Recommender Systems," in Proc. of KDD '18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.974-983, 2018.
- Hamilton, W. L., Ying, R. & Leskovec, J., "Inductive Representation Learning on Large Graphs," in Proc. of NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp.1025-1035, 2017.
- Wang, X., He, X., Wang, M., Feng, F. & Chua, T., "Neural Graph Collaborative Filtering," in Proc. of SIGIR'19: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.165-174, 2019.
- He, X., Deng, K., Wang, X., Li, Y., Zhang, Y. & Wang, M., "LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation," in Proc. of SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.639-648, 2020.
- T. T. Tran, V. Snasel and L. T. Nguyen, "Combining Social Relations and Interaction Data in Recommender System With Graph Convolution Collaborative Filtering," IEEE Access, vol.11, pp.139759-139770, 2023. https://doi.org/10.1109/ACCESS.2023.3340209
- Su, X. & Khoshgoftaar, T. M., "A Survey of Collaborative Filtering Techniques," Advances in Artificial Intelligence, Oct. 2009.
- Yang, C., Bai, L., Zhang, C., Yuan, Q. & Han, J., "Bridging Collaborative Filtering and Semi-Supervised Learning: A Neural Approach for POI Recommendation," in Proc. of KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.1245-1254, 2017.
- Wang, W., Chen, J., Wang, J., Chen, J., Liu, J. & Gong, Z., "Trust-Enhanced Collaborative Filtering for Personalized Point of Interests Recommendation," IEEE Transactions on Industrial Informatics, vol.16, no.9, pp.6124-6132, 2020. https://doi.org/10.1109/TII.2019.2958696
- Chamberlain, B. P., Hardwick, S. R., Wardrope, D. R., Dzogang, F., Daolio, F. & Vargas, S., "Scalable Hyperbolic Recommender Systems," arXiv:1902.08648, 2019.
- Zhao, S., Zhao, T., King, I. & Lyu, M. R., "Geo-Teaser: Geo-Temporal Sequential Embedding Rank for Point-of-Interest Recommendation," in Proc. of WWW '17 Companion: Proceedings of the 26th International Conference on World Wide Web Companion, pp.153-162, 2017.
- Wang, H., Shen, H., Ouyang, W. & Cheng, X., "Exploiting POI-Specific Geographical Influence for Point-of-Interest Recommendation," in Proc. of IJCAI'18: Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp.3877-3883, Jul. 2018.
- Waters, N., Tobler's First Law of Geography. Dec. 2017.
- Lian, D., Ge, Y., Zhang, F., Yuan, N. J., Xie, X., Zhou, T. & Rui, Y., "Scalable Content-Aware Collaborative Filtering for Location Recommendation," IEEE Transactions on Knowledge and Data Engineering, vol.30, no.6, pp.1122-1135, 2018. https://doi.org/10.1109/TKDE.2018.2789445
- Lian, D., Zheng, K., Ge, Y., Cao, L., Chen, E. & Xie, X., "GeoMF++: Scalable Location Recommendation via Joint Geographical Modeling and Matrix Factorization," ACM Transactions on Information Systems (TOIS), vol.36, no.3, Mar. 2018.
- Lian, D., Wu, Y., Ge, Y., Xie, X. & Chen, E., "Geography-Aware Sequential Location Recommendation," in Proc. of KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.2009-2019, Aug. 2020.
- Tran, T. T. & Snasel, V., "Improvement Graph Convolution Collaborative Filtering with Weighted Addition Input," in Proc. of 14th Asian Conference, Intelligent Information and Database Systems, Springer, pp.635-647, 2022.
- Loc Tan Nguyen, Tin T. Tran, "CombiGCN: An effective GCN model for Recommender System," in Proc. of 12th International Conference on Computational Data and Social Networks, pp.111-119, 2023.
- Li, J., Song, Y., Song, X., & Wipf, D., "On the Initialization of Graph Neural Networks," in Proc. of the 40th International Conference on Machine Learning, vol.202, pp.19911-19931, 2023.
- Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K. & Jegelka, S., "Representation Learning on Graphs with Jumping Knowledge Networks," in Proc. of the 35th International Conference On Machine Learning, vol.80, pp.5453-5462, 2018.
- Hamilton, W. L., Ying, R. & Leskovec, J., "Inductive Representation Learning on Large Graphs," in Proc. of NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp.1025-1035, 2017.
- Maas, A. L., Hannun, A. Y., Ng, A. Y., "Rectifier Nonlinearities Improve Neural Network Acoustic Models," in Proc. of 30th International Conference on Machine Learning (ICML), vol.28, 2013.
- Rendle, S., Freudenthaler, C., Gantner, Z. & Schmidt-Thieme, L., "BPR: Bayesian Personalized Ranking from Implicit Feedback," in Proc. of UAI '09: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp.452-461, 2009.
- Cho, E., Myers, S. A. & Leskovec, J., "Friendship and Mobility: User Movement in Location-Based Social Networks," in Proc. of KDD '11: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pp.1082-1090, 2011.
- NYC Open Data. [Dataset]. New York City, 2023. https://opendata.cityofnewyork.us/
- Herlocker, J. L., Konstan, J. A., Terveen, L. G. & Riedl, J. T., "Evaluating Collaborative Filtering Recommender Systems," ACM Transactions on Information Systems (TOIS), vol.22, no.1, pp.5-53, 2004. https://doi.org/10.1145/963770.963772
- Sarwar, B., Karypis, G., Konstan, J. & Riedl, J., "Application of Dimensionality Reduction in Recommender System -- A Case Study," in Proc. of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2000.
- Sarwar, B., Karypis, G., Konstan, J. & Riedl, J., "Analysis of Recommendation Algorithms for ECommerce," in Proc. of EC '00: Proceedings of the 2nd ACM conference on Electronic commerce, pp.158-167, 2000.
- Najork, M. & McSherry, F, "Computing Information Retrieval Performance Measures Efficiently in the Presence of Tied Scores," in Proc. of 30th European Conference on IR Research (ECIR), Apr. 2008.