• Title/Summary/Keyword: Network Computer


Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo;Sohn, Joong-Kweon;Lee, Sang-Han
    • Journal of forensic and investigative science
    • /
    • v.1 no.2
    • /
    • pp.5-19
    • /
    • 2006
  • Data mining is an information-extraction activity that discovers hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with massive amounts of data when investigating crime, and the volume is increasing as computers are used to process it. This presents a new challenge: discovering the knowledge hidden in that data. Data mining can be applied in criminal investigation to identify offenders by analyzing complex, relational data structures and free text, such as criminal records or statements. This study aimed to evaluate the possible applications of data mining, and its limitations, in practical criminal investigation. Clustering of criminal cases is feasible for habitual crimes such as fraud and burglary, where data mining can identify crime patterns. Neural network modeling, one of the tools of data mining, can be applied to matching a suspect's photograph or handwriting against that of a convict, or to criminal profiling. A case study of insurance fraud in practice showed that data mining was also useful against organized crime such as gangs, terrorism and money laundering. However, the products of data mining in criminal investigation should be evaluated with caution, because data mining offers a clue rather than a conclusion. Legal regulation is needed to control abuse by law enforcement agencies and to protect personal privacy and human rights.
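The clustering of habitual-crime cases mentioned above can be sketched with a tiny k-means run. This is purely illustrative: the feature encoding (hour of day, entry-method code) and the toy records are our assumptions, not data or methods from the paper.

```python
# Illustrative sketch only: k-means clustering of hypothetical burglary
# records, each described as (hour of day, entry-method code).

def kmeans(points, k, iters=20):
    # Initialize centroids with the first k points (deterministic for the demo).
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each case joins its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

# Toy cases: night-time window entries vs. daytime door entries.
cases = [(2, 1), (3, 1), (1, 1), (14, 0), (15, 0), (13, 0)]
labels, _ = kmeans(cases, k=2)
print(labels)  # the first three cases share one cluster, the last three the other
```

A real investigation system would of course cluster far richer features (modus operandi, location, text of statements), but the grouping mechanism is the same.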


GIS-based Market Analysis and Sales Management System : The Case of a Telecommunication Company (시장분석 및 영업관리 역량 강화를 위한 통신사의 GIS 적용 사례)

  • Chang, Nam-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.61-75
    • /
    • 2011
  • A Geographic Information System (GIS) is a system that captures, stores, analyzes, manages and presents data with reference to geographic location. In the late 1990s and early 2000s its use was largely limited to government sectors such as public utility management, urban planning, landscape architecture, and environmental contamination control. However, a growing number of open-source packages running on a range of operating systems has enabled many private enterprises to explore viewing GIS-based sales and customer data on their own computer monitors. K telecommunication company has dominated the Korean telecommunication market by providing diverse services such as high-speed internet, PSTN (Public Switched Telephone Network), VoIP (Voice over Internet Protocol), and IPTV (Internet Protocol Television). Even though the telecommunication market in Korea is huge, competition between the major service providers is fiercer than ever before. Service providers have struggled to acquire as many new customers as possible, attempted to cross-sell more products to their regular customers, and made greater efforts to retain their best customers by offering unprecedented benefits. Most service providers, including K telecommunication company, tried to adopt the concept of customer relationship management (CRM) and to analyze customers' demographic and transactional data statistically in order to understand customer behavior. However, customer information management remained at a basic level, and the quality and quantity of customer data were insufficient both to understand the customers and to design marketing and sales strategies. For example, the 3,074 legal regional divisions then in use, originally defined by the government, were too broad to calculate sub-regional service subscription and cancellation ratios. Additional external data such as house size, house price, and household demographics were also needed to measure sales potential. Furthermore, producing tables and reports was time-consuming, and they were insufficient for a clear judgment about the market situation. In 2009 the company needed a dramatic shift in the way it conducted marketing and sales activities, and it finally developed a dedicated GIS-based market analysis and sales management system. This system brought huge improvements in the efficiency with which the company could manage and organize all customer- and sales-related information and access that information easily and visually. After the GIS system was developed and applied to marketing and sales activities at the corporate level, the company reportedly increased its sales and market share substantially. This was because, by analyzing past market and sales initiatives, estimating sales potential, and targeting key markets, the system could make suggestions and enable the company to focus its resources on the demographics most likely to respond to a promotion. This paper reviews the subjective and unclear marketing and sales activities that K telecommunication company had been operating, and introduces the whole process of developing the GIS system. The process consists of the following five modules: (1) customer profile cleansing and standardization, (2) internal/external DB enrichment, (3) segmentation of the 3,074 legal regions into 46,590 sub-regions called blocks, (4) GIS data mart design, and (5) GIS system construction. The objective of this case study is to highlight the need for GIS systems, and how they work in private enterprises, by reviewing the development of K company's market analysis and sales management system. We hope this paper offers a valuable guideline to companies considering introducing or constructing a GIS information system.
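The block-level ratios the abstract describes reduce to simple aggregation once customer events carry a block ID. The sketch below is hypothetical: the block IDs and event records are invented, and the real system would join in the external data (house size, demographics) as well.

```python
# Illustrative sketch only: per-block subscription/cancellation aggregation,
# in the spirit of segmenting 3,074 legal regions into 46,590 finer "blocks".

from collections import defaultdict

# (block_id, event) records; block IDs here are hypothetical.
events = [
    ("B001", "subscribe"), ("B001", "subscribe"), ("B001", "cancel"),
    ("B002", "subscribe"), ("B002", "cancel"), ("B002", "cancel"),
]

counts = defaultdict(lambda: {"subscribe": 0, "cancel": 0})
for block, event in events:
    counts[block][event] += 1

# Cancellation ratio per block: cancels / (subscribes + cancels).
ratios = {
    block: c["cancel"] / (c["subscribe"] + c["cancel"])
    for block, c in counts.items()
}
print(ratios)  # B001 ~ 0.33, B002 ~ 0.67
```

Computing such ratios at block rather than legal-region granularity is what makes sub-regional targeting possible; the GIS layer then maps each block ID to its polygon for visualization.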

A Study on Estimating Optimal Tonnage of Coastal Cargo Vessels in Korea (우리나라 연안화물선의 적정선복량 추정에 관한 연구)

  • 이청환;이철영
    • Journal of the Korean Institute of Navigation
    • /
    • v.13 no.1
    • /
    • pp.21-53
    • /
    • 1989
  • In the past twenty years there has been a rapid increase in the volume of traffic in Korea due to the great growth of the Korean economy. Since transportation provides an infrastructure vital to economic growth, it has become an ever more integral part of the Korean economy. The importance of coastal shipping stands out in particular, not only because of the limits on expansion of the road network but also because of saturation in the capacity of rail transportation. In spite of this increase and its importance, coastal shipping is falling behind, partly because it is given less emphasis than ocean-going shipping and other inland transportation systems and partly because of over-competition due to excessive ship tonnage. Therefore, estimating and planning optimum ship tonnage is the first task in developing Korean coastal shipping. This paper aims to estimate the optimum coastal ship tonnage by computer simulation and finally to draw up plans for balancing ship tonnage supply and demand. The estimation of the optimum ship tonnage is performed by the Origin-Destination method and time series analysis. The results are as follows: (1) The optimum ship tonnage in 1987 was 358,680 DWT, which is 54% of the current ship tonnage (481 ships, 662,664 DWT); the current tonnage equals the optimum ship tonnage projected for 1998. This overcapacity results in excessive competition and financial difficulties in Korean coastal shipping. (2) The excessive ship tonnage can be broken down by ship type as follows: oil carriers 250,926 DWT (350%), cement carriers 9,977 DWT (119%), iron material/machinery carriers 25,665 DWT (117%), general cargo carriers 17,416 DWT (112%). (3) The current total ship crew of 5,079 exceeds the verified optimally efficient figure of 3,808 by 1,271. (4) From the viewpoint of management strategy, excessive ship tonnage should be reduced and uneconomic outdated vessels broken up; it was also found that diversion into economically efficient fleets is urgently required in order to meet the increasing annual volume of cargo (23,877 DWT). (5) The plans for balancing ship tonnage supply and demand are as follows: 1) The establishment of a legislative system for the arrangement of ship tonnage, involving (a) the announcement of an optimum tonnage to guide the licensing of cargo vessels and ship tonnage supply, and (b) the establishment of an organization that substantially arranges tonnage in Korean coastal shipping. 2) The announcement of optimum ship tonnage both per year and for the short term, to guide current tonnage supply plans. 3) The settlement of elastic tariffs to protect coastal shipping's share from other transportation systems. 4) Restriction of ocean-going vessels from participating in coastal shipping routes. 5) Business rationalization of coastal shipping companies, reducing uneconomic outdated vessels and boosting the national economy. To achieve these ends, the following are prerequisites: I) Because many non-licensed vessels are operating on Korean coastal routes and threatening the safe voyage of others, such vessels should be controlled and punished by the authorities. II) The supply of ship tonnage on Korean coastal routes should be prudently monitored, because most coastal vessels are too small to be diverted onto ocean-going routes in case of excessive supply. III) Every ship type engaged in coastal shipping should be specialized according to the characteristics of its routes as soon as possible.
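The time-series part of the estimation can be illustrated with an ordinary least-squares trend projection. The yearly volumes below are invented stand-ins, not the study's data; the paper's actual model also uses Origin-Destination analysis.

```python
# Illustrative sketch only: projecting cargo volume with a least-squares
# linear trend, a simple stand-in for the paper's time-series step.

def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

years = [1983, 1984, 1985, 1986, 1987]
volume = [100.0, 110.0, 120.0, 130.0, 140.0]  # hypothetical cargo volume index

a, b = linear_fit(years, volume)
forecast_1998 = a + b * 1998  # extrapolate the linear trend to 1998
print(round(forecast_1998, 1))
```

Dividing such a demand forecast by average vessel productivity is one simple way a required-tonnage figure could then be derived.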


Designing female-oriented computer games: Emotional expression

  • Shui, Lin-Lin;Lee, Won-Jung
    • Cartoon and Animation Studies
    • /
    • s.20
    • /
    • pp.75-86
    • /
    • 2010
  • Recently, as the number of female players has increased rapidly, the electronic gaming industry has begun to look at ways to appeal to the largely untapped female market. According to the latest game market report by the China Internet Network Information Center (CNNIC), the total number of game players in China increased by 24.8% in 2009, reaching 69,130,000 people, 38.9% of whom are female. This growth in the number of female players is corroborated by a series of reports from the IResearch Company in Shanghai, China: from 2003 to 2009, the proportion of female players grew from 8% to more than 49%. Therefore, however much attention game production companies have given to male players, and however much they ignored female players before, they would be sensible to face up to this reality and adjust their marketing policy. This article analyzes gender preferences in video games, showing that male players are more likely to be attracted to elements of aggression, violence, competition and fast action, while female players are drawn to the emotional and social aspects of games, such as an understanding of character relationships. The literature cited indicates that female players also show an apparent preference for games with familiar environments, games that allow players to work together, games that have more than one way to win, and games in which characters do not die. The article also discusses the characteristics of female-friendly games from the aspect of emotion, pointing out that pet-raising, dressing-up, and social simulation games are very popular with female players: because these game types are the most suitable vehicles for emotions such as love, sharing, jealousy, superiority and mystery, they are highly attractive to female players. Finally, in line with the above, I propose some principles for designing female-oriented games, including presenting a good-looking leading character, making the story interesting with "live" NPCs (Non-Playing Characters), and finding ways to satisfy instincts such as taking care of others and an inborn interest in classifying and selecting.


The Effect of Herding Behavior and Perceived Usefulness on Intention to Purchase e-Learning Content: Comparison Analysis by Purchase Experience (무리행동과 지각된 유용성이 이러닝 컨텐츠 구매의도에 미치는 영향: 구매경험에 의한 비교분석)

  • Yoo, Chul-Woo;Kim, Yang-Jin;Moon, Jung-Hoon;Choe, Young-Chan
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.105-130
    • /
    • 2008
  • Consumers in the e-learning market differ from those in other markets in that they are replaced on a specific time scale. For example, e-learning content aimed at high-school seniors cannot be consumed by a given consumer beyond the designated period; hence e-learning service providers need to attract new groups of students every year. Because these continuously emerging consumers lack information on the products designed for them, they face difficulties in making rational decisions in a short time. This increased uncertainty of purchase leads customers to herding behavior: obtaining information about the product from others and imitating them. Taking these features of the e-learning market into consideration, this study focuses on online herding behavior in purchasing e-learning content. There is no single definition of e-learning; it is discussed from a wide range of perspectives, from educational engineering to management to e-business. Based on existing studies, we identify two main viewpoints. The first defines e-learning as a concept that includes existing terminologies such as CBT (Computer-Based Training), WBT (Web-Based Training), and IBT (Internet-Based Training); in this view, e-learning uses IT to support professors and part or all of an education system. The second defines e-learning as the use of Internet technology to deliver diverse intelligence- and achievement-enhancing solutions; in other words, only education delivered through the Internet and networks is classified as e-learning. We take the second definition as our working definition. The main goal of this study is to investigate what factors affect consumers' intention to purchase e-learning content and to identify the differential impact of those factors between consumers with purchase experience and those without it. To accomplish this goal, the study focuses on herding behavior and perceived usefulness as antecedents of behavioral intention. The proposed research model extends the Technology Acceptance Model by adding herding behavior and usability, to account for the unique characteristics of the e-learning content market and of e-learning systems use, respectively. The study also includes consumer experience with e-learning content purchase, because previous experience is believed to affect purchasing intention when consumers buy experience goods or services. Previous studies on e-learning did not consider the characteristics of the e-learning content market or the differential impact of consumer experience on the relationship between the antecedents and behavioral intention, which is the target of this study. This study employs a survey method to empirically test the proposed research model. A survey questionnaire was developed and distributed to 629 informants, and 528 responses were collected, consisting of a potential customer group (n = 133) and an experienced customer group (n = 395). The data were analyzed using PLS, a structural equation modeling method. Overall, both herding behavior and perceived usefulness influence consumers' intention to purchase e-learning content. In detail, for the potential customer group, herding behavior has a stronger effect on purchase intention than perceived usefulness; for the purchase-experienced customer group, perceived usefulness has the stronger effect. In sum, the analysis shows that perceived usefulness and herding behavior have differential effects on the purchase of e-learning content depending on purchasing experience. As a follow-up analysis, the interaction effects of the number of purchase transactions and herding behavior/perceived usefulness on purchase intention were investigated; no interaction effects were found. This study contributes to the literature in a couple of ways. From a theoretical perspective, it shows that characteristics of the e-learning market, such as the continuous renewal of consumers (and thus high uncertainty) and individual experience, are important factors to consider when studying purchase intention for e-learning content, and it can serve as a basis for future studies on e-learning success. From a practical perspective, it provides several important implications for the marketing strategies e-learning companies need to build, including target group attraction, word-of-mouth management, and enhancement of web site usability. The limitations of this study are also discussed for future research.

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data and a DTM (Digital Terrain Model) to detect and recover occlusion areas; furthermore, it is a challenging task to automate this complicated process. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges images as fake or real, until the results are satisfactory; this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages; (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures; however, when the quality of the input data is close to the target image, better results can be obtained by increasing the number of epochs. This paper is an early experimental study of the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
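For intuition about the FID measure the experiments rely on, here is the closed-form Fréchet distance between two one-dimensional Gaussians. FID applies the matrix form of the same formula to Gaussians fitted to Inception features of real and generated images; the numeric inputs below are made up for illustration.

```python
# Fréchet distance between two 1-D Gaussians:
#   d^2 = (mu1 - mu2)^2 + v1 + v2 - 2*sqrt(v1*v2)
# FID uses the matrix generalization of this on Inception-feature statistics;
# lower scores mean generated images are statistically closer to real ones.

import math

def frechet_1d(mu1, var1, mu2, var2):
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

d_same = frechet_1d(0.0, 1.0, 0.0, 1.0)  # identical distributions -> 0
d_far = frechet_1d(0.0, 1.0, 2.0, 4.0)   # 4 + 1 + 4 - 2*2 = 5
print(d_same, d_far)
```

This is why "similar FID" between the one-step and recursive approaches supports the claim that both produced comparable image quality.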

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in the industry, visualizing the results of big data processing and analysis has become a highlighted topic. Visualization delivers effectiveness and clarity in understanding analysis results; at the same time, it serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data, and to implement a loosely coupled architecture it is necessary to adopt design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. Big data can be classified as structured or unstructured, and structured data is relatively easy to visualize compared with unstructured data. Even so, as the use and analysis of unstructured data have spread, practitioners have usually had to develop a visualization system per project to overcome the limitations of traditional visualization systems built for structured data. For text data, which covers a huge part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that those technologies are not standardized. This situation makes it harder to reuse one project's visualization system in other projects. We assume the reason is a lack of commonality in the design of visualization systems with a view to extending them to other systems. In our research, we suggest a common information model for visualizing text data and propose TexVizu, a comprehensive and reusable framework for visualizing text data. First, we survey representative research in text visualization, identify common elements and common patterns across various cases, and review and analyze those elements and patterns from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data representing the elements for visualization. The structural viewpoint identifies structural elements of text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements extracted by linguistic analysis of the text and represented as tags classifying entity types such as person, place or location, time, and event. We then extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the key requirements for designing a framework in which the visualization system is loosely coupled from the data processing or analysis system. Finally, we design TexVizu, a common text visualization framework that is reusable and extensible across visualization projects, collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and Analytical Text Data Controller; in this framework, the external components are the specifications of the interfaces required to collaborate with it. As an experiment, we adopt this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
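The loose coupling via common interfaces can be sketched with abstract base classes. Only the interface names ITextDataLoader and IATDProvider come from the abstract; the method names and toy implementations below are our assumptions, not the paper's API.

```python
# Illustrative sketch only: interface-based coupling in the style of TexVizu.
# Method names (load, get_atd) and the concrete classes are hypothetical.

from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """Loads raw text documents from some source into the framework."""
    @abstractmethod
    def load(self):
        ...

class IATDProvider(ABC):
    """Provides Analytical Text Data (ATD) to a visualizer."""
    @abstractmethod
    def get_atd(self):
        ...

class NewsCommentLoader(ITextDataLoader):
    # A toy loader for an online-news analysis scenario.
    def load(self):
        return [{"title": "Sample", "body": "Hello world", "replies": 2}]

class SimpleATDProvider(IATDProvider):
    def __init__(self, loader: ITextDataLoader):
        self.loader = loader

    def get_atd(self):
        # Structural viewpoint: extract structural elements per document.
        return [
            {"title": d["title"], "length": len(d["body"])}
            for d in self.loader.load()
        ]

atd = SimpleATDProvider(NewsCommentLoader()).get_atd()
print(atd)  # [{'title': 'Sample', 'length': 11}]
```

Because the visualizer depends only on the interfaces, swapping the news loader for, say, a social-opinion loader requires no change on the visualization side, which is the reuse property the framework targets.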

Smartphone Security Using Fingerprint Password (다중 지문 시퀀스를 이용한 스마트폰 보안)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.45-55
    • /
    • 2013
  • As smartphones and mobile devices become more popular, more people use them in many areas such as education, news, and finance. Apple's release of the iPhone in January 2007 touched off a rapid increase in smartphone users, created a new market, and broadened the areas in which smartphones are used. A smartphone uses WiFi or a 3G mobile radio communication network and can access the Internet anytime and anywhere. With smartphone applications people can check public transportation arrival times in real time, and applications are used for mobile banking and stock trading. Since the smartphone has taken over many of the computer's functions, it holds important user information such as financial data and personal pictures and videos. Yet current smartphone security systems are not only too simple, but the methods for defeating them are spreading covertly. The iPhone is secured by a combination of numbers and characters, but the US IT magazine Engadget revealed that it can easily be unlocked using part of the number pad and buttons. The Android operating system uses a pattern system based on a 9-dot grid, so the user can form many variations; but according to Professor Jonathan Smith of the University of Pennsylvania, Android's security is easily defeated by tracing the fingerprint smudges that remain on the smartphone screen. Thus both the Android and iPhone operating systems are vulnerable to security threats. Compared with passwords and patterns, fingerprint recognition has advantages in security and against loss. The reasons fingerprint-recognition smartphones and devices are not yet popular are problems such as sensors not being offered at a reasonable price and human-rights (privacy) concerns; however, through continuous development of smartphones and devices, sensors will be further miniaturized and their price will fall. Once fingerprint recognition is actively used in smartphones, and if its use broadens to financial transactions, the use of biometrics in smart devices will be debated briskly. In this thesis we propose a fingerprint numbering system that combines fingerprints with a password to strengthen existing fingerprint recognition. A password consisting of four digits has the problems described above, so we replace the existing 4-digit password and pattern systems by combining fingerprint recognition with a password to reinforce security. An original fingerprint recognition system offers only ten cases (one per finger), but by assigning numbers to fingerprints we can construct a password in a new way: the user enters fingerprints according to the numbers assigned to the fingers. An attacker would thus have difficulty collecting all the fingerprints needed for forgery and inferring the user's password. After fingerprint numbering, the system can either recognize several fingerprints entered at the same time or accept fingerprints entered in a set sequence. In this thesis we adopt entering fingerprints in a set sequence. With sequences of one to ten numbered fingerprints, the number of possible combinations is $\sum_{i=1}^{10}{}_{10}P_i$, for a total of 9,864,100 cases. By this method the user retains security, while an attacker faces many difficulties in guessing the password and must also obtain the user's fingerprints; thus the system enhances the user's security. The system accepts not just one fingerprint but multiple fingers in sequence. We introduce this method, in the smartphone environment, of authorizing the user by the entry of multiple numbered fingerprints. Present smartphone authorization using patterns, passwords and single fingerprints is exposed to high risk, so if the proposed system overcomes the delay while the user presents fingers to the recognition device, and is combined with other biometric methods, it will provide more concrete security. The problems to be solved after this research are reducing the fingerprint numbering (entry) time and the necessary hardware development. If fingerprint-based public certification becomes popular in the future, fingerprint recognition in the smartphone will become an important security issue, and this thesis will be of use in strengthening fingerprint recognition research.
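The combination count quoted above can be verified directly: summing the permutations $_{10}P_i$ of 10 numbered fingers taken $i$ at a time, for sequence lengths 1 through 10.

```python
# Verify the password-space size claimed in the abstract:
# sum over sequence lengths i = 1..10 of 10Pi (ordered selections of
# numbered fingerprints without repetition).

import math

total = sum(math.perm(10, i) for i in range(1, 11))
print(total)  # 9864100
```

So even this no-repetition scheme yields nearly ten million candidate sequences, versus 10,000 for a 4-digit PIN, and each guess additionally requires the attacker to present the right physical fingerprints.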

A Study for the Methodology of Analyzing the Operation Behavior of Thermal Energy Grids with Connecting Operation (열 에너지 그리드 연계운전의 운전 거동 특성 분석을 위한 방법론에 관한 연구)

  • Im, Yong Hoon;Lee, Jae Yong;Chung, Mo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.1 no.3
    • /
    • pp.143-150
    • /
    • 2012
  • A simulation methodology, and a program based on it, are discussed for analyzing the effects of networked operation of an existing DHC (District Heating and Cooling) system in connection with an on-site CHP (Combined Heat and Power) system. Practical simulations for arbitrary areas with various building compositions are carried out to analyze the operational features of both systems, and various aspects of connected thermal energy grid operation are highlighted through detailed assessment of the predicted results. The intrinsic operational features of CHP prime movers (gas engines, gas turbines, etc.) are implemented effectively by using performance data, i.e. actual operating efficiency over the full- and part-load range. For simplicity, a simple mathematical correlation model is proposed to simulate the various changes on the existing DHC side due to the connected operation, instead of performing separate cycle simulations. The empirical correlations are developed using hourly annual operation data from a branch of the Korea District Heating Corporation (KDHC) and implicitly relate the main operation parameters, such as fuel consumption by use and heat and power production. The simulation can consider a variety of system configurations, combining the probable CHP prime movers with absorption- or turbo-type cooling chillers of any kind and capacity. From analysis of the thermal network operation simulations, it is found that the newly proposed mathematical correlation methodology for modeling the existing DHC system effectively reflects the operational variations due to connected thermal energy grid operation. The effects of the intrinsic features of CHP prime movers (e.g. different heat-to-power production ratios) and of various combinations of chiller types (absorption and turbo) on overall system operation are discussed in detail, together with the operation schemes and corresponding simulation algorithms.
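An empirical correlation between fuel consumption and heat/power production, of the kind the abstract describes, can be sketched as a least-squares fit. The two-term linear form and the synthetic hourly data below are our assumptions for illustration, not KDHC data or the paper's actual correlation.

```python
# Illustrative sketch only: fit fuel ~ a*heat + b*power by least squares
# from hourly records, a stand-in for the paper's empirical correlations.

def fit_two_term(heat, power, fuel):
    # Solve the 2x2 normal equations for fuel = a*heat + b*power.
    shh = sum(h * h for h in heat)
    spp = sum(p * p for p in power)
    shp = sum(h * p for h, p in zip(heat, power))
    shf = sum(h * f for h, f in zip(heat, fuel))
    spf = sum(p * f for p, f in zip(power, fuel))
    det = shh * spp - shp * shp
    a = (shf * spp - spf * shp) / det
    b = (spf * shh - shf * shp) / det
    return a, b

# Synthetic hourly data generated with a=1.2, b=2.0, so the fit is exact.
heat = [10.0, 20.0, 30.0, 40.0]
power = [5.0, 4.0, 8.0, 6.0]
fuel = [1.2 * h + 2.0 * p for h, p in zip(heat, power)]

a, b = fit_two_term(heat, power, fuel)
print(round(a, 3), round(b, 3))  # recovers 1.2 and 2.0
```

Once fitted to real operating records, such a correlation lets the connected-grid simulation predict the DHC side's fuel use for any simulated heat/power split without rerunning a full cycle simulation.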

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in obtaining the information they need online. Against this backdrop, ever greater importance is placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). CBF and DF require external information and thus cannot be applied to a variety of domains; CF, on the other hand, is widely used since it is relatively free of domain constraints. CF is broadly classified into memory-based, model-based and hybrid CF. Model-based CF addresses the drawbacks of CF by employing a Bayesian model, a clustering model or a dependency network model. It not only mitigates the sparsity and scalability issues but also boosts predictive performance; however, it involves expensive model building and results in a tradeoff between performance and scalability. This tradeoff is attributable to reduced coverage, a type of sparsity issue. In addition, expensive model building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the costs involved; cumulative changes that fail to be reflected eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which addresses the issues of reduced coverage and unstable performance. The method improves performance stability by tracking changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, it improves reduced coverage by expanding coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries in preference transition detection. Third, user propensities are normalized, using the patterns of change (propensities) in user preferences, in propensity clustering. Lastly, a preference prediction model is developed to predict user preferences for items. The proposed method was validated by testing robustness against performance instability and the scalability-performance tradeoff. The initial test compared the performance of recommender systems based on IBCF, CBCF, ICFEC and PCCF in an environment where data sparsity had been minimized; a subsequent test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of the resulting changes in system performance. The results revealed that the suggested method produced only insignificant improvement in accuracy over the existing techniques, and it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Nevertheless, it markedly improved on the existing techniques in terms of range, which indicates the level of performance fluctuation: the fluctuation before and after model generation improved by 51.31% in the initial test, and in the subsequent test there was a 36.05% improvement in the fluctuation driven by changes in the number of clusters. This signifies that the proposed method, despite only a slight improvement in accuracy, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation accuracy that failed to improve significantly over the existing techniques, for example by introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
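The Markov ingredient PCCF adds can be sketched as estimating transition probabilities between preference clusters from a user's history. The cluster labels and the sequence below are toy data, and the real method works with fuzzy cluster memberships rather than hard labels.

```python
# Illustrative sketch only: maximum-likelihood estimates of Markov
# transition probabilities between preference clusters.

from collections import Counter

# Sequence of preference clusters a hypothetical user passed through,
# e.g. inferred from successive review scores.
history = ["action", "action", "drama", "action", "drama", "drama"]

pairs = Counter(zip(history, history[1:]))  # observed (src -> dst) transitions
outgoing = Counter(history[:-1])            # transitions leaving each cluster

# P(next cluster | current cluster) = count(src -> dst) / count(src -> *)
transition = {(src, dst): n / outgoing[src] for (src, dst), n in pairs.items()}
print(transition)
```

Weighting neighbor ratings by such transition probabilities is one way a clustering-based recommender can extend predictions to items its static clusters would otherwise not cover, which is how coverage expansion is described above.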