1. Introduction
Online tourism destination searching can be a daunting task on account of the immense amount of online information. Therefore, a recommender system can help users contend with information overload and provide personalized recommendations, content, and services to them [1]. The Internet has surpassed person-to-person communication as the primary medium for choosing destinations. Research by the Travel Industry Association of America determined that approximately two-thirds (64%) of online travelers use search engines for travel planning [2]. In China, a survey by iResearch shows that more than 16.9% of mobile users employ online tourism services. Destination recommendations for travel are inherently difficult to obtain because users rarely rate more than a small number of destinations; consequently, comprehensive profiles cannot be built [3]. Moreover, like other recommender systems, tourism destination recommendations are affected by the cold start problem.
To solve the cold start problem, many researchers have made useful contributions based on collaborative filtering (CF) recommendations. Similarity is used as a weight to predict the ratings of target items for target users; the similarity measure is therefore crucial for recommendation accuracy. Several similarity measures are commonly used in CF, including the cosine (COS), Pearson's correlation (COR), and Spearman's rank correlation (SRC) measures. However, these measures are of limited use in cold-start situations, in which only a few ratings are available for similarity calculations [4]. Hyeong-Joon and Kwang-Seok [5] proposed a personalized program recommender for smart TVs using memory-based CF with a similarity method known as raw moment-based similarity, which improves recommendation performance for smart TVs. Ahn [6] presented a heuristic similarity measure named PIP, which focuses on improving recommendation performance under cold-start conditions. Lim [7] presented a real-time recommender system that employs rich contextual information, such as shopping time, location, and purchase patterns, to provide an optimal shopping route and improve recommendation accuracy. Another idea is to take advantage of the trust relationships between users instead of user rating similarity. Chen et al. [8] proposed a cold start recommendation method for new users that integrates a user model with trust and distrust networks to identify trustworthy users. Yang et al. [9] proposed the concept of an inferred trust circle based on social networks to recommend users' favorite items. Jamali and Ester [10] incorporated the mechanism of trust propagation into the recommendation model. Ma et al. [11] employed the opinions of trusted friends in the social trust network to make recommendations for users. This approach helps alleviate the data sparsity problem and potentially increases prediction accuracy.
However, in user-based CF with sparse data, recommendation accuracy is low regardless of whether similarity measures or trust measures are used. Therefore, another type of CF method, item-based CF, has been proposed; it employs similarities among items instead of among users. Linden et al. [12] introduced the application of item-based CF at Amazon.com. This method requires only subsecond processing time to generate online recommendations for all users regardless of the number of ratings. Pereira et al. [13] introduced an architecture for item-to-item recommendations that employs a commercial cloud computing platform to provide scalability, reduce costs, and, most importantly, reduce response times. Barragáns-Martínez et al. [14] integrated content-based and item-based CF techniques and provided an efficient method to address the sparsity and scalability problems. Deshpande and Karypis [15] proposed a measure of similarity among items and used it to identify the set of items to be recommended. Although these item-based CF methods have the advantage of processing large-scale datasets, they cannot efficiently adapt to dynamic changes in user preferences.
Recent researchers have begun to employ opinion mining techniques, because text reviews contain a large amount of additional information that cannot be obtained from ratings alone [3,16-20]. Priyanka et al. [16] proposed a recommender system that combines classification and opinion mining techniques to help users purchase books. Armentano et al. [17] built a classification model that determines the sentiment expressed in short texts published on Twitter and employed the extracted sentiment to help users decide whether to see a movie. Tewari et al. [18] presented an e-learning recommender system that analyzes learners' opinions about subject content and recommends that teachers revise the portions of a topic that learners find difficult. These studies all make use of opinion mining to generate recommendations. However, they only analyzed the polarity (positive, neutral, or negative) of review text and did not quantify the sentiment. Furthermore, they did not integrate reviewers' ratings with text reviews to improve recommendation accuracy. As more ordinary users become adept with the Web, more of them are writing text reviews for tourism destinations; as a result, the number of text reviews that a destination receives has grown rapidly. We can therefore obtain user preferences, identify features of tourism destinations by opinion mining, and quantify the text reviews. Moreover, we can use the text reviews to alleviate the effects of data sparsity. In this paper, we therefore propose a tourism destination recommender system that addresses the cold start problem and performs rating prediction. The method embeds an opinion mining module; by mining the text reviews, user preferences and tourism destination ratings are obtained and fused into a user- and item-based CF recommender system. The proposed system not only supplements the data through opinion mining, but can also handle large-scale datasets in off-line processing.
The remainder of the paper is organized as follows. We describe related work and several typical recommendation models in Section 2. In Section 3, we give a detailed introduction of the full framework of our recommender system, the opinion mining module, and the recommendation algorithm. In Section 4, we present a performance comparison of our recommender system with several well-known recommendation models and provide the evaluation results. The paper concludes with a summary and a discussion of future research in Section 5.
2. Related Work
In this section, we review the main techniques used in recommender systems. We then consider several recently proposed recommendation methods that aim to address the cold start problem.
2.1 User-based Collaborative Filtering Recommendations
Recommender systems suggest items of interest to users based on their explicit and implicit preferences, the preferences of other users, and user and item attributes [14]. Two widely used recommendation approaches are content-based systems and CF systems. Content-based recommender systems mainly select items that are highly similar to the user's preferences [21,22]. However, CF technology is a more efficient approach in recommender systems [12,15,23-26]. In this study, we focus on CF techniques, which are classified into user-based and item-based CF approaches. Given the historical item ratings of a user, user-based CF predicts the rating of an unrated item $i_n$ from user $u_m$ as follows:

$$\hat{r}_{mn} = \bar{r}_m + \frac{\sum_{u_j \in N_m} Sim(u_m, u_j)\,(r_{jn} - \bar{r}_j)}{\sum_{u_j \in N_m} \left| Sim(u_m, u_j) \right|} \qquad (1)$$

where $\hat{r}_{mn}$ and $r_{mn}$ denote the predicted rating and real rating for item $i_n$ from user $u_m$, $\bar{r}_m$ denotes user $u_m$'s average rating, and $N_m$ is the set of reference users whose preferences are similar to those of user $u_m$. $Sim(u_m, u_j)$ indicates the similarity between the preferences of users $u_m$ and $u_j$.
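For concreteness, the following minimal Python sketch shows how a prediction of the form of Eq. (1) can be computed; the function name, the dictionary-based rating structure, and the similarity callback are illustrative assumptions, not part of the original system.

```python
import numpy as np

def predict_user_based(target_user, target_item, ratings, sim, neighbors):
    """Mean-centered user-based CF prediction in the spirit of Eq. (1).

    ratings   : dict {user: {item: rating}} of observed ratings
    sim       : callable sim(u, v) returning a similarity weight
    neighbors : reference user set N_m (users similar to target_user)
    """
    r_bar_m = np.mean(list(ratings[target_user].values()))
    num, den = 0.0, 0.0
    for v in neighbors:
        if target_item not in ratings[v]:
            continue  # neighbor has not rated the target item
        r_bar_v = np.mean(list(ratings[v].values()))
        w = sim(target_user, v)
        num += w * (ratings[v][target_item] - r_bar_v)
        den += abs(w)
    # Fall back to the user's average rating when no neighbor rated the item.
    return r_bar_m + num / den if den > 0 else r_bar_m
```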
Owing to the prevalence of social computing, many online social networks allow users to add a trusted-user list. Accordingly, social trust, which indicates trust relationships among users, can be derived by aggregating the trust lists, and the similarity weight in Eq. (1) can thus be replaced by the trust weight. The recommendations can be improved as shown in the following formulation:

$$\hat{r}_{mn} = \bar{r}_m + \frac{\sum_{u_j \in T_m} T_{mj}\,(r_{jn} - \bar{r}_j)}{\sum_{u_j \in T_m} T_{mj}} \qquad (2)$$

where $T_m$ denotes the set of users trusted by user $u_m$, and $T_{mj}$ represents the trust degree of $u_m$ for $u_j$.
Despite the above advances, a major problem with user-based CF approaches is lack of scalability [27]. To identify similar users or trusted users as reference users, user-based CF must record the item ratings of all users and scan the database. Owing to the high costs incurred by database scan computations, the time performances of user-based CF approaches are limited.
2.2 Item-based Collaborative Filtering Recommendations
Unlike user-based CF, item-based CF calculates the similarity between two items using their co-ratings. Item-based CF is founded on the idea of identifying, for any given item, the set of items most similar to it. Similarity is measured using a combination of input data, which are generally structured in a two-dimensional matrix in which the first dimension represents users and the second dimension represents items [28]. The ultimate goals of an item-based CF recommender system are to predict how users will rate a currently unrated item and to recommend items from the collection. Given the historical item ratings, item-based CF predicts the rating of an unrated item $I_n$ from user $u_m$ as follows:

$$\hat{r}_{mn} = \frac{\sum_{I_j \in M_n} Sim(I_n, I_j)\, r_{mj}}{\sum_{I_j \in M_n} \left| Sim(I_n, I_j) \right|} \qquad (3)$$

where $\hat{r}_{mn}$ denotes the predicted rating, and $M_n$ is the set of reference items whose co-ratings are similar to those of item $I_n$. $Sim(I_n, I_j)$ represents the similarity between items $I_n$ and $I_j$.
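Analogously, a small sketch of an item-based prediction of the form of Eq. (3) is shown below; again, all names and data structures are assumptions for illustration.

```python
def predict_item_based(target_user, target_item, ratings, item_sim, ref_items):
    """Item-based CF prediction in the spirit of Eq. (3): a weighted average of
    the target user's ratings on the reference items M_n most similar to the
    target item.

    ratings   : dict {user: {item: rating}}
    item_sim  : callable item_sim(i, j) returning a co-rating similarity
    ref_items : reference item set M_n
    """
    num, den = 0.0, 0.0
    for j in ref_items:
        if j not in ratings[target_user]:
            continue  # the target user has not rated this reference item
        w = item_sim(target_item, j)
        num += w * ratings[target_user][j]
        den += abs(w)
    return num / den if den > 0 else None  # None: no co-rated items available
```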
2.3 Tourism Recommender Systems
Information about tourism destinations and their associated resources, such as accommodations, restaurants, traffic conditions, and events, is commonly searched for by tourists in order to plan a trip [29]. In the tourism field, travel recommender systems aim to match the characteristics of tourism and leisure resources or attractions with user needs [30]. In many cases, recommender systems not only take into account the preferences of the tourist but also analyze the dynamic context of the trip, which can include aspects such as the tourist's location, scenery quality, or cost of living [31]. Over the last decade, many researchers have studied tourism recommender systems, including tourist attraction recommendation, tourism path planning, food recommendation, hotel recommendation, and so on [32]. In this paper, we focus on tourism destination recommendation that employs opinion mining techniques and performs rating prediction under conditions of data sparsity.
2.4 Cold Start Problems
Providing effective cold-start recommendations is of fundamental importance to a recommender system [1]. The cold-start problem occurs when it is not possible to make reliable recommendations owing to an initial lack of ratings, and it can significantly degrade CF performance [8]. In this paper, we distinguish two kinds of cold-start problems: no ratings and few ratings. The first kind includes the new item and new user problems. The new item problem arises because new items are added to the recommender system without initial ratings and text reviews and are therefore unlikely to be recommended; the new user problem is analogous. The second kind refers to the difficulty of obtaining a sufficient number of user ratings to support reliable recommendations. Compared with datasets for other types of items, tourism destination datasets do not contain enough ratings, so effective prediction of ratings from a small number of examples is important; this is the problem addressed in this paper.
3. Design of Proposed Recommendation Model
3.1 Recommendation Frameworks
Social media was initiated as a space where anyone with an account can interact with any other user, share content, express personal views, etc. [33]. As a type of social media, online tourism review sites provide increasingly important sources of information in tourism destination choices [34]. Users have been increasingly sharing experiences with other users through person-to-person electronic communication. In tourism, third-party review sites, such as TripAdvisor (www.tripadvisor.com), Venere (www.venere.com), and Tongcheng (www.ly.com), enable travelers to comment on tourism destinations they have experienced. These user-generated online reviews now routinely inform and influence individual travel purchase decisions [35].
In this section, we introduce the framework of our recommender system, which is illustrated in Fig. 1. As shown in the figure, the entire recommendation process can be divided into four steps. Firstly, we extract from a text review database six tourism destination factors that are most important to users. These six main factors are scenery quality, cost of living, infrastructure, traffic conditions, accommodations, and travel sentiments. These are consistent with the factors proposed by Seddighi and Theocharous [36]. Secondly, we use opinion mining to build the profiles of user preferences and item opinion reputations. In the third step, our recommender system sets up a user interaction module. In this step, users may input or select their travel motivations, such as intent, travel time, preferences, and so on. Thus, we can filter a considerable amount of unrelated tourism destination information based on the input data. In the final step, the recommendation algorithm employs the user profile, item profile, user interaction module, and historical user ratings to predict the ratings from a user to a tourism destination. In short, our recommendation algorithm framework provides versatility because it can adapt to many kinds of item recommendations, such as hotels, dining, commodities, and so on.
Fig. 1. Recommendation framework
3.2 Opinion Mining
The main idea of our algorithm for opinion mining is to calculate the reputations of items and to extract user preferences. At the end of the process, each item and each user are assigned a score from the text review dataset. In this section, we study the problem of generating feature-based summaries of user reviews of tourism destinations. Given a set of user text reviews, the task includes three subtasks: (1) identifying features of the tourism destinations on which users have expressed their opinions; (2) identifying review sentences that give positive or negative opinions for each feature; (3) calculating the reputations of items and extracting the user preferences.
3.2.1 Factor Profiling
We crawled 985,683 reviews from 312,896 users for 5,722 tourist destinations from January 2012 to December 2014. The crawled data structure is shown in Table 1. Owing to the limited content of the reviews, we selected six main factors that influence tourism destination choices. These factors were identified using the Chinese keyword extraction method proposed by Xu [37]. We use the notation $f_k$ (k = 1,2,⋯,6) to represent the factors. In Table 2, we outline the factors, feature words, and opinion words. The second column of Table 2 lists the feature words that appear more frequently in text written about the given factor but are not common for other factors. Opinion words are those that people use to express positive or negative opinions; specific opinion words are attached to the corresponding feature words. This approach helps infer the opinion polarity. We mathematically represent the item opinion reputations and user preferences in the following section.
Table 1. Data structure of reviews
Table 2. Results of the keyword extraction algorithm for Chinese text
3.2.2 Item Opinion Reputations and User Preferences
Item reputations and user preferences are a key basis on which users choose items. In this section, we use a beta probability density function to represent item opinion reputations as probability distributions of binary events. The item opinion reputations and user preferences can be used as separate modules, which can be individually added to the recommender system.
3.2.2.1 Item Opinion Reputations
Many researchers have realized that review texts can improve predicted accuracies [33,38,39]. Bernabé-Moreno et al. [33] built a model that measures the impact of social media interactions. Bao et al. [38] proposed a matrix factorization model called TopicMF, which simultaneously considers the ratings and corresponding review texts. However, they do not make a specific mathematical description of review text extraction, nor do they mention how to measure the extraction results. We noted above that each tourism destination opinion is simultaneously classified into six factors, regardless of whether it is positive or negative. Each user rating of a tourism destination is deemed an independent event, and each user opinion is regarded as an independent binary event. Therefore, we can use beta probability distribution to represent the event [34].
The main reasons for using the beta probability distribution are as follows. Firstly, through opinion mining, each review text is decomposed into many observations with value 1 (positive) or -1 (negative) on the six factors; such repeated binary observations conform naturally to the beta probability distribution. We can thus build an opinion reputation module using the beta probability distribution, which is firmly grounded in statistical theory. Secondly, the opinion reputation module is used here in a centralized approach but can also be applied in a distributed setting. Finally, we can scale the item reputation rating to the range [-1,+1] instead of [0,1], where the middle value 0 represents a neutral rating; this matches most users' customary forms of expression. This leads to the following definition of the reputation function based on the beta distribution, which is defined as:

$$f(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, p^{\alpha - 1} (1 - p)^{\beta - 1} \qquad (4)$$
where $0 \le p \le 1$, $\alpha > 0$, and $\beta > 0$. The probability expectation value of the beta distribution is given by:

$$E(p) = \frac{\alpha}{\alpha + \beta} \qquad (5)$$
We replace the parameters $(\alpha, \beta)$ in Eq. (4) with $(a, b)$, letting $a$ be the observed number of positive opinions and $b$ the observed number of negative opinions, where $\alpha = a + 1$ and $\beta = b + 1$. The prior probability density function before any observation is then obtained by setting $a = 0$ and $b = 0$. This leads to the following definition of the opinion reputation:
Definition 1 (Opinion Reputation Function on Factors): Let $p_n^k$ and $q_n^k$ respectively represent the numbers of positive and negative opinions about tourism destination $d_n$ expressed by the reviewers on factor $f_k$. Then, the opinion reputation function on the factors is defined as:

$$Rep(p \mid p_n^k, q_n^k) = \frac{\Gamma(p_n^k + q_n^k + 2)}{\Gamma(p_n^k + 1)\,\Gamma(q_n^k + 1)}\, p^{\,p_n^k} (1 - p)^{\,q_n^k}$$

where $0 \le p \le 1$. For simplicity and conciseness of notation, we write $Rep_n^k(p)$ instead of $Rep(p \mid p_n^k, q_n^k)$. Then, according to Eq. (5), the probability expectation value of the opinion reputation function can be expressed as:

$$E(Rep_n^k) = \frac{p_n^k + 1}{p_n^k + q_n^k + 2}$$
Definition 2 (Opinion Reputation of Factors): Let $p_n^k$ and $q_n^k$ respectively represent the numbers of positive and negative opinions about tourism destination $d_n$ expressed by the reviewers on factor $f_k$, as in Definition 1. Then, according to Eq. (5), and scaling the expectation to the range [-1,+1] as described above, the opinion reputation of the factors is defined as:

$$ORep_n^k = 2\,E(Rep_n^k) - 1 = \frac{p_n^k - q_n^k}{p_n^k + q_n^k + 2}$$

For simplicity and conciseness of notation, we write $ORep_n^k$ for the opinion reputation of destination $d_n$ on factor $f_k$.
Using the above lexicon, we assign an opinion reputation rating to each tourism destination with the given six factors. Thus, we build an opinion reputation rating matrix to represent trust or distrust of the tourism destination by all reviewing users.
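The reputation computation described above is small enough to sketch directly. The code below assumes the per-factor positive and negative opinion counts have already been produced by the opinion mining step, and the [-1,+1] rescaling follows the remark made earlier in this section; the function name and example counts are illustrative.

```python
def opinion_reputation(pos, neg):
    """Opinion reputation of one destination on one factor.

    pos, neg : observed numbers of positive / negative opinions, so the
               posterior is Beta(pos + 1, neg + 1) with expectation
               (pos + 1) / (pos + neg + 2).  The expectation is rescaled from
               [0, 1] to [-1, +1], where 0 represents a neutral reputation.
    """
    expectation = (pos + 1) / (pos + neg + 2)
    return 2 * expectation - 1

# Example: 8 positive and 2 negative opinions on "scenery quality" give
# (9/12) * 2 - 1 = 0.5; repeating this over the six factors fills one row of
# the opinion reputation rating matrix.
print(opinion_reputation(8, 2))  # 0.5
```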
3.2.2.2 User Preferences
Through statistics of the text reviews, we can additionally determine which factors users predominantly focus on for a tourism destination. For each user, we can count the number of feature words for each factor; the frequency of a feature word can, to some extent, reflect the user's preference. In addition, the importance of a feature word in the tourism destination reviews is a crucial measure of the user's preference. In this paper, we use the classical term frequency - inverse document frequency (TF-IDF) formula [40] to define the factor weighting. TF-IDF evolved from inverse document frequency (IDF), which was proposed by Jones [41] with the heuristic intuition that a query term occurring in many documents is not a good discriminator and should be given less weight than one occurring in few documents. For example, if a user has commented on a tourism destination using feature words that appear in few destination reviews, we can conclude that the user is more concerned about the factor to which these feature words belong. Therefore, we use the TF-IDF method to measure the weight of feature words in tourism destination reviews. The weight of a feature word is defined as follows.
Definition 3 (Feature Word Frequency): Assume that there is a feature word $w$ in the text reviews $Op_n$ of tourism destination $d_n$, and let $c(w, Op_n)$ represent the number of times feature word $w$ appears in $Op_n$. The feature word frequency can then be represented by the following equation:

$$tf_{w,n} = \frac{c(w, Op_n)}{\max_{w' \in W_k \setminus \{w\}} c(w', Op_n)}$$

where $W_k \setminus \{w\}$ is the set of other feature words on factor $f_k$ in the tourism destination reviews $Op_n$, so the denominator is the maximum number of times any feature word on factor $f_k$ other than $w$ occurs in $Op_n$.
Definition 4 (Inverse Feature Word Frequency): Not all feature words in the opinion lexicon are equally important. The inverse feature word frequency scales down the weight of a feature word that occurs in many tourism destination reviews. Let $N$ be the number of recommended tourism destinations, and let $N_w$ be the number of tourism destinations whose reviews contain feature word $w$. Then, the inverse feature word frequency is defined as:

$$idf_w = \log \frac{N}{N_w}$$
The idea of the inverse feature word frequency is that a feature word appearing in many tourism destination reviews cannot effectively distinguish a user's preference; more weight should therefore be given to feature words that appear in only a few tourism destination reviews.
Definition 5 (User Preference): In this paper, the user preference represents the degree of user interest in a given factor. Let $N_k$ be the total number of feature words for factor $f_k$ in the opinion lexicon, and count the occurrences of these feature words in the tourism destination reviews $Op_{mn}$ written by user $u_m$. We assume that the appearance of each feature word in $Op_{mn}$ is an independent probability event, so the occurrence probability of each feature word can be estimated from its relative frequency in $Op_{mn}$. The user preference of user $u_m$ on factor $f_k$, denoted $Pre_m^k$, is then defined by Eq. (11), which combines, for each feature word of factor $f_k$, the word's occurrence probability with its TF-IDF weight.
Eq. (11) indicates that the user preference comprises two parts. The first part is the probability of occurrence of the feature word in the text reviews written by the user. The second part is the importance of the feature word in the text reviews of the tourism destination. In the following section, we introduce the recommendation algorithm that uses the item opinion reputations and the user preferences.
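Since the exact form of Eq. (11) is not reproduced above, the following sketch illustrates one plausible reading of Definitions 3-5: for each factor, the preference score combines each feature word's occurrence probability in the user's reviews with a TF-IDF style importance weight. The function, its arguments, and the simplified TF term are assumptions for illustration only.

```python
import math
from collections import Counter

def user_preference(review_tokens, factor_words, num_destinations, doc_freq):
    """Hypothetical per-factor user preference score (cf. Definitions 3-5).

    review_tokens    : tokenized text reviews written by one user
    factor_words     : dict {factor: set of feature words of that factor}
    num_destinations : N, the number of recommended tourism destinations
    doc_freq         : dict {word: number of destinations whose reviews contain it}
    """
    counts = Counter(review_tokens)
    preferences = {}
    for factor, words in factor_words.items():
        total = sum(counts[w] for w in words)  # occurrences of this factor's words
        score = 0.0
        for w in words:
            if total == 0 or counts[w] == 0:
                continue
            prob = counts[w] / total                              # part 1: occurrence probability
            idf = math.log(num_destinations / max(doc_freq.get(w, 1), 1))  # part 2: importance
            score += prob * idf
        preferences[factor] = score
    return preferences
```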
3.3 Proposed Recommendation Algorithm
The most critical component of a CF recommender system is effectively finding similarities among users [6]. Compared with user similarity, item similarity is more stable. Moreover, item similarity processing does not affect prediction-time efficiency because it is performed off-line [13]. In this section, we present two types of similarity: user preference similarity and item opinion reputation similarity.
3.3.1 User Preference Similarity and Item Opinion Reputation Similarity
According to Eq. (11), the user preference similarity between users $u_i$ and $u_j$ is defined using the cosine measure as follows:

$$SimU(u_i, u_j) = \frac{\sum_{k=1}^{6} Pre_i^k \cdot Pre_j^k}{\sqrt{\sum_{k=1}^{6} (Pre_i^k)^2}\,\sqrt{\sum_{k=1}^{6} (Pre_j^k)^2}} \qquad (12)$$
Similar to Eq. (12), the opinion reputation similarity between tourism destinations $d_n$ and $d_l$ can also be defined as:

$$SimD(d_n, d_l) = \frac{\sum_{k=1}^{6} ORep_n^k \cdot ORep_l^k}{\sqrt{\sum_{k=1}^{6} (ORep_n^k)^2}\,\sqrt{\sum_{k=1}^{6} (ORep_l^k)^2}} \qquad (13)$$
In fact, through opinion mining, we can obtain both the user preference similarity and opinion reputation similarity as long as the users or the destinations have reviews. Therefore, in the case of rare ratings, we can extract the similarities of user preferences and destination opinion reputations for recommendations.
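Both Eq. (12) and Eq. (13) are ordinary cosine measures over six-dimensional factor vectors, so a single helper suffices; the sketch below assumes the preference and reputation vectors have already been computed.

```python
import numpy as np

def factor_cosine(vec_a, vec_b):
    """Cosine similarity over six-factor vectors, as in Eqs. (12) and (13).

    vec_a, vec_b : length-6 sequences of user preferences (Eq. (12)) or of
                   destination opinion reputations (Eq. (13)).
    """
    a = np.asarray(vec_a, dtype=float)
    b = np.asarray(vec_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```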
3.3.2 Rating Prediction and Recommendation
Once the user preference similarity is prepared, the rating of tourism destination $d_n$ by user $u_i$ can be predicted as follows for user-based CF:

$$\hat{r}^{U}_{in} = \frac{\sum_{j=1}^{m'} SimU(u_i, u_j)\, r_{jn}}{\sum_{j=1}^{m'} SimU(u_i, u_j)} \qquad (14)$$

where $m'$ is the number of other users who have also rated destination $d_n$.
Compared with the user preference similarity, the opinion reputation similarity for destinations is more stable. The rating of tourism destination $d_n$ by user $u_i$ can be predicted as follows for item-based CF:

$$\hat{r}^{I}_{in} = \frac{\sum_{l=1}^{n'} SimD(d_n, d_l)\, r_{il}}{\sum_{l=1}^{n'} SimD(d_n, d_l)} \qquad (15)$$

where $n'$ is the number of destinations that have been rated by user $u_i$.
Integrating the user-based and item-based CF methods of Eqs. (14) and (15), the hybrid CF method is formulated as follows:

$$\hat{r}_{in} = \frac{m'}{m' + n'}\,\hat{r}^{U}_{in} + \frac{n'}{m' + n'}\,\hat{r}^{I}_{in} \qquad (16)$$
The predicted value of the hybrid CF method is composed of Eqs. (14) and (15), and the proportions of the two components are determined by $m'$ and $n'$. That is, according to the number of tourism destinations rated by the target user and the number of users who have rated the predicted destination, the hybrid prediction compensates for the limitations of insufficient numbers of ratings, so recommendation accuracy can be improved under cold start conditions. When $n'$ is equal to 0, Eq. (16) degenerates into user-based CF, as in Eq. (14); likewise, when $m'$ is equal to 0, the method degenerates into item-based CF, as in Eq. (15).
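A minimal sketch of one combination consistent with this description is given below; the m'/(m'+n') weighting is an assumption that merely reproduces the two degenerate cases described above, and the exact weighting used in Eq. (16) may differ.

```python
def hybrid_prediction(user_pred, item_pred, m_prime, n_prime):
    """Hybrid rating prediction combining Eqs. (14) and (15).

    m_prime : number of other users who have rated the target destination
    n_prime : number of destinations rated by the target user
    """
    if m_prime == 0 and n_prime == 0:
        return None              # cold user and cold item: no prediction possible
    if n_prime == 0:
        return user_pred         # degenerates to user-based CF, Eq. (14)
    if m_prime == 0:
        return item_pred         # degenerates to item-based CF, Eq. (15)
    w = m_prime / (m_prime + n_prime)
    return w * user_pred + (1.0 - w) * item_pred
```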
3.3.3 Algorithm Analysis
Our hybrid CF algorithm is presented in Algorithm 1, which predicts the rating from user $u_i$ to tourism destination $d_j$. Because the main computation of Algorithm 1 is to calculate the user preference and item opinion reputation similarities between users and items, the computational complexity of this algorithm is $O(K(m' + n'))$. The values of $m'$ and $n'$ must be obtained by scanning the rating matrix; therefore, the computational complexity can be expressed as $O(K(M + N)\omega)$, where $\omega$ is the sparsity degree of the user rating matrix $R$. Additionally, the storage cost of Algorithm 1 is $O(K(M + N) + MN)$, and each recommendation requires only one communication request to obtain the user preference and item opinion reputation similarities from the recommender server.
Additionally, the ratings and text reviews for tourism destinations from users are processed off-line. The user preference similarity and opinion reputation similarity are both calculated off-line by the recommender server. If the user needs recommendation services, the recommender system calls the off-line similarity matrix and carries out the real-time recommendation. This process is shown in Fig. 2.
Fig. 2. Recommendation processing
4. Evaluation
In this section, we describe the experiments that we conducted to evaluate the performance of our proposed recommender system. We describe our experimental setup, including information about the datasets, performance measurements, recommendation algorithms for a comparative study, and parameter settings. We then present the experimental results.
4.1 Experiment Setup
4.1.1 Evaluation Dataset
The dataset used in our study was extracted from Tongcheng, a well-known Chinese tourism social network. For each tourism destination, the data contain a general overview of the destination (e.g., name, address, price, average rating, scenic type, scenic ranking, etc.) and a list of reviews written by visitors, each including a rating (1 to 5 stars) and a text review. The dataset includes 985,683 reviews by 312,896 visitors to 5,722 destinations, written between January 2012 and December 2014. To ensure experimental fairness, we use leave-one-out cross-validation for the items and assign 20% of the users as validation users and the remaining 80% as training users, because most of the comparative methods perform 20-fold cross-validation.
4.1.2 Performance Metrics
Tourism data are sparser than data in many other domains, and the cold start problem is correspondingly more serious. Because the number of recommended items in this paper is small, rank accuracy metrics are overly sensitive for our recommendation model and may cause instability and deviation in the performance measurements. Therefore, we focus only on predictive accuracy and classification accuracy. The performance measures used in our experiments are the mean absolute error (MAE), root mean square error (RMSE), coverage, and F1. These are the most popular measures for cold start evaluation and are defined below.
(1) MAE and RMSE: MAE and RMSE are predictive accuracy metrics that measure how close the recommender system's predicted ratings are to the true user ratings; the lower the values of MAE and RMSE, the better the performance of the proposed approach:

$$MAE = \frac{1}{|T_{test}|} \sum_{(i,j) \in T_{test}} \left| r_{ij} - \hat{r}_{ij} \right|, \qquad RMSE = \sqrt{\frac{1}{|T_{test}|} \sum_{(i,j) \in T_{test}} \left( r_{ij} - \hat{r}_{ij} \right)^2}$$

where $T_{test}$ is the test dataset, $r_{ij}$ is the real rating of tourism destination $d_j$ from user $u_i$, and $\hat{r}_{ij}$ is the corresponding predicted rating.
(2) Coverage: The coverage rate is very important because MAE alone cannot reflect the real utility of a recommendation method; MAE is only meaningful in conjunction with coverage. The coverage rate is defined as:

$$coverage = \frac{\sum_{u_i \in U_{test}} \sum_{d_j \in D} \delta(\hat{r}_{ij})}{|U_{test}| \times |D|}$$

where $U_{test}$ is the user set in the test dataset, $D$ is the destination set, and the function $\delta(\hat{r}_{ij})$ returns 1 if $\hat{r}_{ij}$ can be derived from the reference users; otherwise, it returns 0.
(3) F1: F1 is commonly used to evaluate the overall effectiveness of a recommender system; it is the harmonic mean of precision and recall. To compute F1, we transform the item ratings into a binary scale (i.e., relevant and non-relevant) by converting each rating of 4 or 5 to "relevant" and each rating of 1 to 3 to "non-relevant" [42]. Thus, recommended items whose predicted ratings are larger than 3 are considered relevant, and the rest are non-relevant. Based on the results of Algorithm 1, we recommend the top N destinations to a user selected randomly from the validation users. If the user's real rating is greater than 3, the recommended destination is counted as a successful recommendation; otherwise, it is counted as a failure. In this study, we randomly select m users, recommend the top N destinations to each of them, and calculate the mean value of the F1 metric. The values of N and m are set to 10 and 100 by default in this paper.
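The three kinds of metrics are straightforward to compute; the sketch below follows the definitions above, with the rating threshold of 3 used for the relevant/non-relevant split. The function names are illustrative.

```python
import numpy as np

def mae_rmse(true_ratings, pred_ratings):
    """MAE and RMSE over corresponding test pairs."""
    err = np.asarray(true_ratings, dtype=float) - np.asarray(pred_ratings, dtype=float)
    return float(np.mean(np.abs(err))), float(np.sqrt(np.mean(err ** 2)))

def f1_score_binary(true_ratings, pred_ratings, threshold=3):
    """F1 after binarizing ratings: above the threshold is 'relevant' [42]."""
    true_rel = np.asarray(true_ratings) > threshold
    pred_rel = np.asarray(pred_ratings) > threshold
    tp = int(np.sum(true_rel & pred_rel))
    precision = tp / max(int(pred_rel.sum()), 1)
    recall = tp / max(int(true_rel.sum()), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```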
4.1.3 Comparative Cold Start Recommendation Methods
To verify the performance of our proposed method, we compared it with several better-performing algorithms: MJD [43], a similarity measure based on neural learning; PIP [6], a user cold start similarity measure that works only on the users' rating matrix; and TopicMF [38], a matrix factorization model that simultaneously considers ratings and the accompanying review texts. To ensure the fairness of the experiment, we placed in the validation user set only users who had rated a maximum of 20 items.
4.2 Experimental Results
4.2.1 Recommendation Accuracy
In this section, we evaluate the performance of the different methods in terms of the metrics proposed in Section 4.1.2. Fig. 3 shows the results on the Tongcheng dataset under the experimental setup described above.
Fig. 3. Performance comparison with the different cold start recommendation methods
The graphs in Figs. 3(a) and (b) show that our hybrid CF recommendation improves accuracy compared with the single conventional similarity measures when applied to cold start users.
Through opinion mining, text review information is fused into our recommendation model; as a result, the MAE and RMSE of our method decreased by more than 5% compared with PIP, MJD, and TopicMF. As shown in Fig. 3(c), the PIP method is the best on the coverage measure, and our method is second only to PIP. Because PIP is a heuristic similarity measure composed of three similarity factors (proximity, impact, and popularity), it considers more factors than methods that focus only on ratings; it therefore performs better on coverage, which represents recommendation variety. This result shows that our method not only provides recommendation accuracy but also item-recommending breadth, because item-based CF is one part of our model.
In terms of recommendation quality measured with F1, Fig. 3(d) shows the results for the different methods. Because of the special nature of tourism destination recommendations (such as the user's economic condition, age, and location), the F1 values were lower than the values calculated for MovieLens and Netflix presented in [43]. However, our method showed higher recommendation quality than the other recommendation methods.
4.2.2 Experiment on the Artificial Interactive Module
In Section 3, we embedded an artificial interactive module into our recommendation model. In this module, users choose the factors of the tourism destination that matter to them, and we sort the chosen factors in descending order of importance. We thereby allocate weights to the user preferences, and the user preference similarity is changed as follows:
where $w_k$ is the weight of the user preference on factor $f_k$, $x \in \{0,1,2,3,\cdots\}$ is a non-negative integer, and $order(k)$ represents the rank at which the user selected factor $f_k$ in the artificial interactive module. Next, we examine the impact of $x$ on the different metrics described in Section 4.1.2.
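The weighting function $w_k$ is not reproduced above, so the sketch below assumes a simple rank-based form, $w_k = (7 - order(k))^x$, chosen only because it reduces to uniform weights (and hence to Eq. (12)) when x = 0; the actual form used in the paper may differ.

```python
import numpy as np

def weighted_preference_similarity(pref_a, pref_b, order, x=1):
    """Weighted cosine similarity of two six-factor user preference vectors.

    order : dict {factor index 0..5: rank chosen in the interactive module},
            where rank 1 is the factor the user selected as most important.
    The weight w_k = (7 - order[k]) ** x is an assumed form; every w_k equals 1
    when x = 0, so the measure degenerates to the unweighted Eq. (12).
    """
    w = np.array([(7 - order[k]) ** x for k in range(6)], dtype=float)
    a = w * np.asarray(pref_a, dtype=float)
    b = w * np.asarray(pref_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```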
As shown in Figs. 4(a) to (d), our method shows better recommendation accuracy and quality on the experimental dataset when x = 1. Figs. 4(c) and (d) show that recommendation performance decreased as x increased, and the metric values fluctuated and became volatile. In fact, when x = 0, the user preference similarity in our model degenerates into Eq. (12).
Fig. 4. Influence of the value of x on recommendation performance
4.2.3 Performance Analysis
To show the performance of each method precisely, we placed the maximum and minimum values for each metric in Table 3; the top row in each cell is the maximum, and the bottom row is the minimum. The best value for each metric is shown in bold italic underlined font. We performed 20 groups of experiments with x set from 1 to 20 and list only the top three experimental results in Table 3. Our method showed better performance on all metrics except coverage when parameter x = 1; that is, the tradeoff weights on the six factors were set accordingly, and x = 1 was an empirical value derived from the extensive statistical experiments.
Table 3. Performance results of the compared methods
4.2.4 Statistical Experiments
Next, we implemented our system, made it publicly available on the Web, and asked many students, friends, and colleagues to test it. Each user inputted search information: intent, location, travel cost, a rank order on the six factors, and a price range. We then presented the user with a list of five tourism destinations: some came from our system, while others came from the comparative recommendation models. To avoid biasing the users, these five tourism destinations were presented in random order, so the users were unaware of the source of the recommendations. The users were asked to rate all the recommended tourism destinations in the range [1,5] to indicate whether the recommendations met their search criteria and satisfied them.
To further validate the performance of our method, we selected the best-performing results of the comparative methods PIP, MJD, and TopicMF. To examine more than just averages, we plotted in Fig. 5(a) the histogram of the rating percentages given by our users. The tourism destinations recommended by our method received more 4 and 5 ratings than those recommended by the other methods, and likewise received fewer 1 and 2 ratings.
Fig. 5. Rating percentages for the comparative methods
In addition, we divided our method into two parts: "Text" and "Rating." "Text" represents our method using only the review text to recommend destinations, and "Rating" represents our method using only the review ratings; "Text+Rating" represents the combination of the two types of data. The results in Fig. 5(b) indicate that "Text+Rating" produced the best ratings, "Text" was second, and "Rating" was last. It can therefore be inferred that extracting more information from user review data can effectively alleviate the cold start problem.
5. Conclusions and Future Works
Exploiting information hidden in social networks to predict user behavior has become very valuable. User text reviews are very useful for mining user preferences and item opinion reputations to improve recommendation accuracy. In this paper, we presented a hybrid collaborative filtering recommender system for tourism destinations. Our method employs opinion mining technology to obtain user preferences and item opinion reputations. With this information, user-based collaborative filtering and item-based collaborative filtering are combined into a hybrid collaborative filtering recommendation method. Furthermore, we embedded an artificial interactive module into the tourism destination recommendation model. We conducted extensive experiments on the Tongcheng dataset with different metrics. The experimental results show that, to the best of our knowledge, our method provides higher recommendation accuracy and quality than existing cold-start recommendation models. We conclude from the results that the opinion mining is sound and effective, the artificial interactive module is successful, and the hybrid recommendation model works well in alleviating problems caused by data sparsity.
We intend to explore several directions in future work, including extending the model to handle the diversity of item recommendations. Furthermore, because user preferences are employed in our method and are exploited by opinion mining, we intend to consider and address user privacy preservation and protection.
References
- G. Adomavicius, and A. Tuzhilin, “Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, pp. 734-749, 2005. Article (CrossRef Link). https://doi.org/10.1109/TKDE.2005.99
- Z. Xiang, and U. Gretzel, “Role of social media in online travel information search,” Tourism Management, vol. 31, no. 2, pp. 179-188, 2010. Article (CrossRef Link). https://doi.org/10.1016/j.tourman.2009.02.016
- A. Levi, O. Mokryn, C. Diot, N. Taft, "Finding a needle in a haystack of reviews: cold start context-based hotel recommender system," in Proc. of the 6th ACM Conference on Recommender Systems, Dublin, Ireland, 2012, pp. 115-122. Article (CrossRef Link).
- L. Qi, C. Enhong, X. Hui, C. Ding, C. Jian, “Enhancing Collaborative Filtering by User Interest Expansion via Personalized Ranking,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 1, pp. 218-233, 2012. Article (CrossRef Link). https://doi.org/10.1109/TSMCB.2011.2163711
- K. Hyeong-Joon, and H. Kwang-Seok, “Personalized smart TV program recommender based on collaborative filtering and a novel similarity method,” IEEE Transactions on Consumer Electronics, vol. 57, no. 3, pp. 1416-1423, 2011. Article (CrossRef Link). https://doi.org/10.1109/TCE.2011.6018902
- H. J. Ahn, “A new similarity measure for collaborative filtering to alleviate the new user cold-starting problem,” Information Sciences, vol. 178, no. 1, pp. 37-51, 2008. Article (CrossRef Link). https://doi.org/10.1016/j.ins.2007.07.024
- Sang-Min Lim, "Implementation of Intelligent Preferred Goods Recommendation System using Customer Profiles and Interest Measurements based on RFID," Smart Computing Review, vol. 2, no. 3, pp. 185-194, 2012. Article (CrossRef Link).
- C. C. Chen, Y.H. Wan, M.C. Chung, Y.C. Sun, “An effective recommendation method for cold start new users using trust and distrust networks,” Information Sciences, vol. 224, pp. 19-36, 2013. https://doi.org/10.1016/j.ins.2012.10.037
- X. Yang, H. Steck, and Y. Liu, "Circle-based recommendation in online social networks," in Proc. of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China, pp. 1267-1275, 2012. Article (CrossRef Link).
- M. Jamali, and M. Ester, "A matrix factorization technique with trust propagation for recommendation in social networks," in Proc. of the 4th ACM Conference on Recommender Systems, pp. 135-142, 2010. Article (CrossRef Link).
- H. Ma, I. King, and M. R. Lyu, "Learning to recommend with social trust ensemble," in Proc. of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Boston, MA, USA, pp. 203-210, 2009. Article (CrossRef Link).
- G. Linden, B. Smith, and J. York, “Amazon.com recommendations: item-to-item collaborative filtering,” Internet Computing, IEEE, vol. 7, no. 1, pp. 76-80, 2003. Article (CrossRef Link). https://doi.org/10.1109/MIC.2003.1167344
- R. Pereira, H. Lopes, K. Breitman, V. Mundim, W. Peixoto, “Cloud based real-time collaborative filtering for item–item recommendations,” Computers in Industry, vol. 65, no. 2, pp. 279-290, 2014. Article (CrossRef Link). https://doi.org/10.1016/j.compind.2013.11.005
- A. B. Barragáns-Martínez, E. Costa-Montenegro, J. C. Burguillo, M. Rey-López, F. A. Mikic-Fonte, A. Peleteiro, “A hybrid content-based and item-based collaborative filtering approach to recommend TV programs enhanced with singular value decomposition,” Information Sciences, vol. 180, no. 22, pp. 4290-4311, 2010. Article (CrossRef Link). https://doi.org/10.1016/j.ins.2010.07.024
- M. Deshpande, and G. Karypis, “Item-based top-N recommendation algorithms,” ACM Transactions on Information Systems, vol. 22, no. 1, pp. 143-177, 2004. Article (CrossRef Link). https://doi.org/10.1145/963770.963776
- K. Priyanka, A. S. Tewari, A. G. Barman, "Personalised book recommendation system based on opinion mining technique," in Proc. of Communication Technologies (GCCT), 2015 Global Conference on. IEEE, pp. 285-289, 2015. Article (CrossRef Link).
- M. G. Armentano, S. Schiaffino, I. Christensen, F. Boato, “Movies Recommendation Based on Opinion Mining in Twitter,” Advances in Artificial Intelligence and Its Applications. Springer International Publishing, pp. 80-91, 2015. Article (CrossRef Link).
- A. S. Tewari, A. Saroj, A. G. Barman, "e-Learning Recommender System for Teachers using Opinion Mining," Information Science and Applications. Springer Berlin Heidelberg, pp. 1021-1029, 2015. Article (CrossRef Link).
- S. Aciar, D. Zhang, S. Simoff, J. Debenham, “Informed recommender: Basing recommendations on consumer product reviews,” IEEE Intelligent Systems, vol. 22, no. 3, pp. 39-47, 2007. Article (CrossRef Link). https://doi.org/10.1109/MIS.2007.55
- N. Jakob, S. H. Weber, M. C. Müller, I. Gurevych, "Beyond the stars: exploiting free-text user reviews to improve the accuracy of movie recommendations," in Proc. of the 1st International CIKM Workshop on Topic-sentiment Analysis for Mass Opinion, Hong Kong, China, pp. 57-64, 2009. Article (CrossRef Link).
- M. Pazzani, and D. Billsus, “Learning and revising user profiles: The identification of interesting web sites,” Machine learning, vol. 27, no. 3, pp. 313-331, 1997. Article (CrossRef Link). https://doi.org/10.1023/A:1007369909943
- G. Salton, "Automatic text processing," ed: Addison Wesley, 1989. Article (CrossRef Link).
- L. A. Adamic, and E. Adar, “Friends and neighbors on the Web,” Social Networks, vol. 25, no. 3, pp. 211-230, 2003. Article (CrossRef Link). https://doi.org/10.1016/S0378-8733(03)00009-1
- T. Nathanson, E. Bitton, and K. Goldberg, "Eigentaste 5.0: constant-time adaptability in a recommender system using item clustering," in Proc. of the 2007 ACM Conference on Recommender Systems, Minneapolis, MN, USA, pp. 149-152, 2007. Article (CrossRef Link).
- M. Richardson, and P. Domingos, "Mining knowledge-sharing sites for viral marketing," in Proc. of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada, pp. 61-70, 2002. Article (CrossRef Link).
- Y. Z. Wei, L. Moreau, and N. R. Jennings, “Learning users' interests by quality classification in market-based recommender systems,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 12, pp. 1678-1688, 2005. Article (CrossRef Link). https://doi.org/10.1109/TKDE.2005.200
- C. Zeng, C. X. Xing, and L. Z. Zhou, "Similarity measure and instance selection for collaborative filtering," in Proc. of the 12th International Conference on World Wide Web, pp. 652-658, 2003. Article (CrossRef Link).
- X. Su, and T. M. Khoshgoftaar, “A survey of collaborative filtering techniques,” Advances in Artificial Intelligence, vol. 2009, pp. 2-2, 2009. Article (CrossRef Link). https://doi.org/10.1155/2009/421425
- J. Borràs, A. Moreno, A. Valls, “Intelligent tourism recommender systems: A survey,” Expert Systems with Applications, vol. 41, no. 16, pp. 7370-7389, 2014. Article (CrossRef Link). https://doi.org/10.1016/j.eswa.2014.06.007
- I. Garcia, L. Sebastia, E. Onaindia, “On the design of individual and group recommender systems for tourism,” Expert Systems with Applications, vol. 38, no. 6, pp. 7683–7692, 2011. Article (CrossRef Link). https://doi.org/10.1016/j.eswa.2010.12.143
- M. Al-Hassan, H. Lu, J. Lu, “A semantic enhanced hybrid recommendation approach: A case study of e-Government tourism service recommendation system,” Decision Support Systems, vol. 72, pp. 97-109, 2015. Article (CrossRef Link). https://doi.org/10.1016/j.dss.2015.02.001
- X. Qiao, L. Zhang, “Overseas Applied Studies on Travel Recommender Systems in the Past Ten Years,” Tourism Tribune, vol. 29, no. 8, pp. 117-127, 2014. Article (CrossRef Link).
- J. Bernabé-Moreno, A. Tejeda-Lorente, C. Porcel, H. Fujita, E. Herrera-Viedma, “CARESOME: A system to enrich marketing customers acquisition and retention campaigns using social media information,” Knowledge-Based Systems, vol. 80, pp. 163-179, 2015. Article (CrossRef Link). https://doi.org/10.1016/j.knosys.2014.12.033
- B. A. Sparks, H. E. Perkins, and R. Buckley, “Online travel reviews as persuasive communication: The effects of content type, source, and certification logos on consumer behavior,” Tourism Management, vol. 39, pp. 1-9, 2013. Article (CrossRef Link). https://doi.org/10.1016/j.tourman.2013.03.007
- W. Lu, and S. Stepchenkova, “Ecotourism experiences reported online: Classification of satisfaction attributes,” Tourism Management, vol. 33, no. 3, pp. 702-712, 2012. Article (CrossRef Link). https://doi.org/10.1016/j.tourman.2011.08.003
- H. R. Seddighi, and A. L. Theocharous, “A model of tourism destination choice: a theoretical and empirical analysis,” Tourism Management, vol. 23, no. 5, pp. 475-487, 2002. Article (CrossRef Link). https://doi.org/10.1016/S0261-5177(02)00012-2
- X. Wenhai, “A Chinese Keyword Extraction Algorithm Based on TFIDF Method,” Information Studies: Theory & Application, vol. 2, pp. 298-302, 2008. Article (CrossRef Link).
- Y. Bao, H. Fang, and J. Zhang, "Topicmf: Simultaneously exploiting ratings and reviews for recommendation," in Proc. of the 28th AAAI Conference on Artificial Intelligence, pp. 2-8, 2014. Article (CrossRef Link).
- A. Ghose, and P. G. Ipeirotis, “Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics,” IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 10, pp. 1498-1512, 2011. Article (CrossRef Link). https://doi.org/10.1109/TKDE.2010.188
- K. S. Jones, “IDF term weighting and IR research lessons,” Journal of Documentation, vol. 60, no. 5, pp. 521-523, 2004. Article (CrossRef Link). https://doi.org/10.1108/00220410410560591
- K. S. Jones, “A statistical interpretation of term specificity and its application in retrieval,” Journal of Documentation, vol. 60, no. 5, pp. 493-502, 1972. Article (CrossRef Link). https://doi.org/10.1108/00220410410560573
- J. L. Herlocker, J. A. Konstan, L. G. Terveen, J. T. Riedl, “Evaluating collaborative filtering recommender systems,” ACM Transactions on Information Systems, vol. 22, no. 1, pp. 5-53, 2004. Article (CrossRef Link). https://doi.org/10.1145/963770.963772
- J. Bobadilla, F. Ortega, A. Hernando, J. Bernal, “A collaborative filtering approach to mitigate the new user cold start problem,” Knowledge-Based Systems, vol. 26, pp. 225-238, 2012. Article (CrossRef Link). https://doi.org/10.1016/j.knosys.2011.07.021