• Title/Summary/Keyword: 한국정보관리 (Korean Information Management)

Search Result 30,331

Comparative Analysis of Ginsenoside Content in Processed Red Ginseng Foods Based on Food Type and Formulation (홍삼가공식품의 식품유형별 및 제형별 진세노사이드 함량 비교)

  • Yun-Jeong Yi;Min-Su Chang;In-Sook Lee;Hyun-Jeong Kim;Hyun-Jeong Jang;In-Sook Hwang
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.2
    • /
    • pp.163-170
    • /
    • 2024
  • Red ginseng is manufactured as a health-functional food and is also present in various food types and product forms. However, there is currently no standardized regulation of ginsenoside content in foods containing red ginseng. In the present study, we analyzed the ginsenoside content of 66 red ginseng-containing foods and 35 health-functional foods collected online and directly from the market. The ginsenoside content was assessed using liquid chromatography (LC) and liquid chromatography-tandem mass spectrometry (LC-MS/MS). Across the various food types, the ginsenoside content ranged from 0.0 mg (not detected) to 71.567 mg per daily intake of foods containing red ginseng. Sugar-preserved foods had the highest ginsenoside content, followed by solid teas, liquid teas, and red ginseng beverages. For health-functional foods, the ginsenoside content ranged from 3.4 to 58.5 mg per daily intake, corresponding to 83-607% of the indicated amounts; all values met the established standards. Comparing red ginseng health-functional foods with red ginseng-containing foods, the average ginsenoside content was 18.21 and 8.79 mg, respectively, nearly twice as high in health-functional foods. However, there was minimal difference between the ginsenoside content of red and black ginseng, with values of 11.84 and 12.63 mg, respectively. These findings provide insights into the variation in ginsenoside content of red and black ginseng across food forms, and this information is expected to be valuable for future regulation and for consumer choice of products containing red ginseng.

Effects of Different Altitudes and Cultivation Methods on Growth and Flowering Characteristics of Elsholtzia splendens (재배지대와 유형이 꽃향유의 생육 및 개화 특성에 미치는 영향)

  • Young Min Choi;Jin Jae Lee;Dong Chun Cheong;Hong Ki Kim;Hee Kyung Song;Seung Yoon Lee;So Ra Choi;Hyun Ah Han;Han Na Chu
    • Korean Journal of Plant Resources
    • /
    • v.37 no.4
    • /
    • pp.392-400
    • /
    • 2024
  • This study was conducted to investigate the flowering and growth characteristics of Elsholtzia splendens at different altitudes (plains and mid-mountain regions) and under different cultivation methods (field and plastic house cultivation). Experimental sites located 12 m and 500 m above sea level were selected for the plains and the mid-mountain region, respectively, and the same cultivation management was applied across altitudes and cultivation methods. In the mid-mountain region, the flower bud emergence, flowering, and full bloom stages of Elsholtzia splendens occurred 2-3 days, 9 days, and 6-7 days earlier, respectively, than in the plains, and they occurred earlier under field cultivation than under plastic house cultivation. Plant height, main stem diameter, and the number of branches tended to increase gradually after an initial period of rapid growth up to 59 to 69 days after planting. The number of days with less than 8 hours of sunshine from the start of the rainy season (June 20) to the period when vegetative growth began to increase only gradually (59 to 69 days after planting) was 22 to 29 days in the plains and 26 to 35 days in the mid-mountain region, and this period was estimated to be the time of transition from vegetative to reproductive growth. Spike growth was greater in the mid-mountain region than in the plains, and there were no statistically significant differences in growth characteristics between altitudes except for main stem diameter, the number of branches, and dry matter. Flowering and growth were also greater under plastic house cultivation than under field cultivation. As a result, some differences in the amount of flowering were observed when cultivating Elsholtzia splendens for landscaping purposes, but cultivation was considered feasible in both the plains and the mid-mountain regions. This study therefore provides ecological information for understanding the relationship between weather characteristics and the growth of Elsholtzia splendens.

A Study on the Development and Validation of Three Systems of Action Scale in Home Economics for Middle and High School Students (중⋅고등학생용 가정교과 세 행동체계 척도 개발 및 타당화 연구)

  • Choi, Seong Youn
    • Journal of Korean Home Economics Education Association
    • /
    • v.35 no.3
    • /
    • pp.67-96
    • /
    • 2023
  • The purpose of this study was to develop and validate a scale that can assess the status of the three systems of action among middle and high school students in home economics. For this purpose, a total of 105 preliminary questions, 35 for each system of action, were developed as 5-point Likert items to measure technical action, communicative action, and emancipative action, based on a review of domestic and international literature related to the three systems of action. The preliminary questions were revised and supplemented through two rounds of content validity review by home economics education experts. A preliminary survey with the resulting 70 questions was conducted on middle and high school students, and 166 responses were collected. Exploratory factor analysis of the collected questionnaires, performed to test the validity of the scale, indicated that a structure of 38 questions and 7 factors was appropriate. The main survey was then constructed based on the exploratory factor analysis results and administered to middle and high school students; 548 responses were collected and a confirmatory factor analysis was performed. A total of 38 questions were finally selected through the confirmatory factor analysis: 5 questions on basic living ability, 4 on self-management ability, 4 on information processing ability, 12 on communication/interpersonal ability, 3 on critical thinking ability, 7 on decision-making ability, and 3 on empowerment. The model fit was χ2=1846.741 (p<.001), CFI=0.865, TLI=0.853, and RMSEA=0.058, and the standardized regression weight for each question was above 0.5, indicating that the scale is a suitable instrument for measuring the status of the three systems of action of middle and high school students in home economics. The three systems of action scale also showed significant correlations with self-acceptance, future planning, intimacy, and uniqueness, which are sub-factors of the self-identity scale, and with the social participation scale, thereby confirming its concurrent validity.
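For context, the reported fit indices (CFI, TLI, RMSEA) can be computed directly from the model and baseline chi-square statistics using their standard formulas. The sketch below does this in plain Python; the degrees of freedom, baseline chi-square, and their values are hypothetical assumptions (the abstract reports only χ2=1846.741 and N=548), chosen so the formulas roughly reproduce the reported indices.

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Standard incremental and absolute fit indices for a CFA/SEM model."""
    # Comparative Fit Index: 1 - (model misfit / baseline misfit)
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    cfi = 1.0 - d_m / max(d_m, d_b)
    # Tucker-Lewis Index (non-normed fit index)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    # Root Mean Square Error of Approximation
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical values for illustration only; the abstract does not report
# the model/baseline degrees of freedom or the baseline chi-square.
cfi, tli, rmsea = fit_indices(chi2_m=1846.741, df_m=644,
                              chi2_b=9580.0, df_b=703, n=548)
print(f"CFI={cfi:.3f}, TLI={tli:.3f}, RMSEA={rmsea:.3f}")
```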

Open Skies Policy: A Study on the Alliance Performance and International Competition of FFP (항공자유화정책상 상용고객우대제도의 제휴성과와 국제경쟁에 관한 연구)

  • Suh, Myung-Sun;Cho, Ju-Eun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.25 no.2
    • /
    • pp.139-162
    • /
    • 2010
  • In international air transport, the open skies policy implies freedom in the sky, or opening the sky. In the normative respect, the open skies policy is a kind of open-door policy that grants various forms of traffic rights to other countries, but it is also a policy of free competition in international air transport. Since the Airline Deregulation Act of 1978, the United States has signed open skies agreements with many countries, starting with the Netherlands, so that large, competitive airlines can compete in the international air transport market, where many business opportunities exist. South Korea now has open skies agreements with more than 20 countries. The frequent flyer program (FFP) is part of a broad-based marketing alliance that has been used as an airfare strategy since the U.S. government's airline deregulation. The membership-based program is an incentive plan that provides mileage points to customers for using airline services and rewards customer loyalty in tangible forms based on their accumulated points. In its early stages, the frequent flyer program focused on marketing efforts to attract customers, but in today's environment of intense competition among airlines, the program is used as an important strategic marketing tool for enhancing business performance. Airline companies therefore agree that they need to identify customer needs in order to secure loyal customers more effectively. The outcomes of an airline's frequent flyer program can affect international competition in several ways. First, the airline can obtain a more dominant position in the air transport market by expanding its route networks. Second, the availability of flight products for customers can be improved with an increase in flight frequency. Third, the airline can preferentially expand into new markets and thus gain advantages over its competitors. However, there are few empirical studies on airline frequent flyer programs. Accordingly, this study explores the effects of the program on international competition, after reviewing the types of strategic alliances between airlines. Forming strategic airline alliances is a worldwide trend resulting from the open skies policy, and South Korea also needs to make its open skies agreements more substantive in order to promote the growth and competitiveness of domestic airlines. The present study examines the performance of the airline frequent flyer program and international competition under the open skies policy. Using a sample of five global alliance groups (Star, Oneworld, Wings, Qualiflyer and Skyteam), we empirically examined the effects that the resource structures and information technology levels of the airlines in each group have on the type of alliance, and one-way analysis of variance and regression analysis were used to test the hypotheses. The findings suggest that both large airlines and small and medium-sized airlines in an alliance group with global networks and organizations are able to achieve high performance and secure international competitiveness. Airline passengers earn mileage points by using non-flight services through an alliance network of hotels, car-rental services, duty-free shops, travel agencies and more, and they show high interest in and preference for these related service benefits. Therefore, Korean airline companies should develop more aggressive marketing programs based on multilateral alliances with other service providers, including hotels, as well as with other airlines.
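As a rough illustration of the hypothesis-testing approach described above (one-way ANOVA across alliance groups followed by regression), the sketch below uses scipy and statsmodels on a small hypothetical dataset; the column names (alliance_group, it_level, performance) and values are assumptions, not the authors' variables or data.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data: per-airline performance, alliance group, and IT level.
df = pd.DataFrame({
    "alliance_group": ["Star", "Star", "Oneworld", "Oneworld", "Skyteam",
                       "Skyteam", "Wings", "Wings", "Qualiflyer", "Qualiflyer"],
    "it_level":       [7.2, 6.8, 6.1, 5.9, 6.5, 6.7, 4.8, 5.1, 4.5, 4.9],
    "performance":    [8.1, 7.9, 7.0, 6.8, 7.4, 7.6, 5.5, 5.8, 5.2, 5.6],
})

# One-way ANOVA: does mean performance differ across alliance groups?
groups = [g["performance"].values for _, g in df.groupby("alliance_group")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Regression: does IT level predict alliance performance?
X = sm.add_constant(df[["it_level"]])
model = sm.OLS(df["performance"], X).fit()
print(model.summary())
```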


A Study on the Influence of IT Education Service Quality on Educational Satisfaction, Work Application Intention, and Recommendation Intention: Focusing on the Moderating Effects of Learner Position and Participation Motivation (IT교육 서비스품질이 교육만족도, 현업적용의도 및 추천의도에 미치는 영향에 관한 연구: 학습자 직위 및 참여동기의 조절효과를 중심으로)

  • Kang, Ryeo-Eun;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.169-196
    • /
    • 2017
  • The fourth industrial revolution represents a revolutionary change in the business environment and its ecosystem through the fusion of Information Technology (IT) with other industries. In line with these recent changes, the Ministry of Employment and Labor of South Korea announced 'the Fourth Industrial Revolution Leader Training Program,' which includes five key support areas: (1) smart manufacturing, (2) Internet of Things (IoT), (3) big data including Artificial Intelligence (AI), (4) information security, and (5) bio innovation. This program gives a glimpse of the South Korean government's efforts and willingness to cultivate leading human resources with advanced IT knowledge for various convergence-technology and newly emerging industries. To nurture excellent IT manpower in preparation for the fourth industrial revolution, the role of educational institutions capable of providing high-quality IT education services is of utmost importance. These days, however, most IT educational institutions have had difficulty providing customized IT education services that meet the needs of consumers (i.e., learners), without breaking away from the traditional framework of supplier-oriented education services. Previous studies have found that providing customized, learner-centered education services leads to high learner satisfaction, and that higher satisfaction increases not only task performance and the possibility of business application but also learners' recommendation intention. However, since research has not yet comprehensively considered both the antecedents and the consequences of learner satisfaction, more empirical research is highly desirable. With the advent of the fourth industrial revolution, rising interest in various convergence technologies utilizing information technology (IT) has brought a growing realization of the important role played by IT-related education services. Nevertheless, research on IT education service quality in the context of IT education is relatively scarce, despite the fact that research on general education service quality and satisfaction has been actively conducted in various contexts. In this study, therefore, five dimensions of IT education service quality (i.e., tangibles, reliability, responsiveness, assurance, and empathy) are derived for the IT education context, based on the SERVPERF model and related previous studies. In addition, the effects of these IT education service quality factors on learners' educational satisfaction and their work application and recommendation intentions are examined. Furthermore, the moderating roles of learner position (practitioner group vs. manager group) and participation motivation (voluntary vs. involuntary participation) in the relationships between IT education service quality factors and learners' educational satisfaction, work application intention, and recommendation intention are also investigated. In an analysis using the structural equation model (SEM) technique, based on a questionnaire given to 203 participants in IT education programs at an 'M' IT educational institution in Seoul, South Korea, tangibles, reliability, and assurance were found to have a significant effect on educational satisfaction. Educational satisfaction, in turn, was found to have a significant effect on both work application intention and recommendation intention. Moreover, learner position and participation motivation were found to partially moderate the relationship between IT education service quality factors and educational satisfaction. This study holds academic implications in that it is one of the first studies to apply the SERVPERF model (rather than the SERVQUAL model, which has been widely adopted by prior studies) to demonstrate the influence of IT education service quality on learners' educational satisfaction, work application intention, and recommendation intention in an IT education environment. The results of this study are expected to provide practical guidance for IT education service providers who wish to enhance learners' educational satisfaction and service management efficiency.
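The moderation analysis described above is often approximated, outside a full SEM, by multi-group comparison or by interaction terms in a regression. The sketch below shows the interaction-term approach with statsmodels on simulated variables (satisfaction, reliability, is_manager); it is a simplified illustration under assumed data, not the authors' structural equation model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 203  # same sample size as the study; the data themselves are simulated

# Hypothetical learner-level data: a service-quality factor, a moderator,
# and the satisfaction outcome.
reliability = rng.normal(5, 1, n)
is_manager = rng.integers(0, 2, n)                # 1 = manager, 0 = practitioner
satisfaction = (0.5 * reliability
                + 0.3 * is_manager * reliability  # moderated (interaction) effect
                + rng.normal(0, 1, n))

df = pd.DataFrame({"satisfaction": satisfaction,
                   "reliability": reliability,
                   "is_manager": is_manager})

# A significant reliability:is_manager coefficient indicates that learner
# position moderates the quality -> satisfaction relationship.
model = smf.ols("satisfaction ~ reliability * is_manager", data=df).fit()
print(model.summary())
```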

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies on recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful approach. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users. For new users who do not have any evaluations or preference information, therefore, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty. This sparse dataset makes the computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We use 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users. Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes having degree centrality values lower than a pre-set threshold; the threshold value is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to form the final performance metric. In order to test the performance improvement from this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team, consisting of 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using the 'best-N-neighbors' and cosine similarity methods. The empirical results show that the F measure was improved by about 11% on average when the proposed algorithm was used. Past studies that improved CF performance typically used additional information other than users' evaluations, such as demographic data, and some studies applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
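A minimal sketch of the separation idea in Steps 1-2 above (projecting the user-item network onto a user-user network and filtering users by degree centrality) is given below; the toy rating matrix, the co-rating link rule, and the centrality cutoff are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical binary user-item matrix (rows = users, columns = items).
rng = np.random.default_rng(42)
ratings = (rng.random((20, 50)) < 0.15).astype(int)

# Step 1: project the two-mode (user-item) network onto a one-mode
# (user-user) network; users are linked if they co-rated enough items.
co_rated = ratings @ ratings.T             # co-rating counts between user pairs
np.fill_diagonal(co_rated, 0)
min_common_items = 2                       # assumed link threshold
adjacency = (co_rated >= min_common_items).astype(int)

# Step 2: degree centrality = number of direct links of each user.
degree = adjacency.sum(axis=1)
threshold = 3                              # assumed cutoff (tuned by simulation in the paper)
gray_sheep = np.where(degree < threshold)[0]
others = np.where(degree >= threshold)[0]

print("gray sheep users:", gray_sheep)
print("remaining users for ordinary CF:", others)
# Step 3 would run an ordinary CF algorithm on `others`;
# Step 4 would recommend popular items to `gray_sheep`.
```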

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

    • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
      • Journal of Intelligence and Information Systems
      • /
      • v.20 no.3
      • /
      • pp.109-131
      • /
      • 2014
    • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technologies is increasing dramatically, the preadjudication (or, simply, prescreening) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares those features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining. TF (term frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (inverse document frequency) is based on the infrequency of the term within the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on collaboration between machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed nuclear document similarity algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive a final score (γ) for deciding whether the presented case concerns strategic material or not. The final score (γ) represents the document similarity between past cases and the new case. The score is derived not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top three documents stored in the case base that are considered the most similar to the new case and provides them along with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data, and the system workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
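A minimal sketch of the scoring idea above (combining a TF-IDF document-to-document similarity α with a domain similarity β into a final score γ, then retrieving the top similar past cases) is shown below; the toy case base, the keyword-overlap domain score, and the weighting scheme are assumptions for illustration, not the paper's exact formulation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy case base of past export-control cases and a new submission.
case_base = [
    "pressure vessel for reactor coolant system, stainless steel",
    "zirconium alloy cladding tubes for fuel assemblies",
    "general-purpose industrial valve, non-nuclear grade",
]
new_case = "reactor coolant pump seal assembly, pressure boundary component"

# alpha: document-to-document similarity via TF-IDF + cosine similarity.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(case_base + [new_case])
alpha = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# beta: document-to-nuclear-system similarity; here a simple keyword overlap
# against an assumed nuclear-system vocabulary stands in for the real score.
nuclear_terms = {"reactor", "coolant", "fuel", "cladding", "pressure"}
def system_similarity(text: str) -> float:
    tokens = set(text.lower().replace(",", " ").split())
    return len(tokens & nuclear_terms) / len(nuclear_terms)

beta = [system_similarity(doc) for doc in case_base]

# gamma: final score as a weighted combination (weight is an assumption).
w = 0.7
gamma = [w * a + (1 - w) * b for a, b in zip(alpha, beta)]

# Retrieve the top-3 most similar past cases for the reviewer.
top3 = sorted(range(len(case_base)), key=lambda i: gamma[i], reverse=True)[:3]
for i in top3:
    print(f"case {i}: gamma={gamma[i]:.3f}  |  {case_base[i]}")
```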

    Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

    • Lee, Jung Seung
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.4
      • /
      • pp.35-52
      • /
      • 2019
    • As public services are provided in various forms, including e-government, the level of public demand for public service quality is increasing. Although continuous measurement and improvement of public service quality are needed, traditional surveys are costly and time-consuming and have limitations. Therefore, an analytical technique is needed that can measure the quality of public services quickly and accurately at any time, based on the data generated by the services themselves. In this study, we analyzed public service quality using process mining techniques on data from the building licensing complaint service of N city. This service was chosen because it provides the data necessary for analysis and because the approach can be spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaint cases filed in N city over two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. According to the analysis results, the workload of a department was sometimes heavily concentrated and sometimes relatively light at certain points in time. In addition, there were reasonable grounds to suspect that an increase in the number of complaints increases the time required to complete them. The time required to complete a complaint varied from same-day completion to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%; the high-frequency departments were few in number, and the load among departments was highly unbalanced. Most complaint cases followed a variety of different process patterns. The analysis shows that the number of 'complement' (document supplementation) decisions has the greatest impact on the processing time of a complaint. This is because a 'complement' decision requires a physical period during which the complainant supplements and resubmits the documents, lengthening the time until the entire complaint is completed. To address this, the overall processing time of complaints can be drastically reduced by preparing the filing documents thoroughly in advance, informed by the 'complement' decisions made on other complaints. By clarifying and disclosing the causes of and solutions for 'complement' decisions, which are among the important data in the system, complainants can prepare in advance and be assured that documents prepared according to the disclosed information will be accepted, making complaint processing transparent and sufficiently predictable. Documents prepared using pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency by eliminating, from the processor's point of view, the need for re-consultation or duplicated work. The results of this study can be used to find departments with high complaint burdens at certain points in time and to flexibly manage workforce allocation between departments. In addition, by analyzing the patterns of the departments participating in consultations according to the characteristics of the complaints, the results can be used for automation or recommendation when selecting consultation departments. Furthermore, by applying machine learning techniques to the various data generated during the complaint process, the patterns of the complaint process can be discovered, and such algorithms can be built into the system to automate civil complaint processing and make it more intelligent. This study is expected to be used to suggest future improvements in public service quality through process mining analysis of civil complaint services.
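As a rough illustration of the kind of event-log analysis described above (department frequency profiles, case throughput times, and the effect of 'complement' decisions), the sketch below uses pandas on a hypothetical event log; the column names and values are assumptions, not N city's actual schema, and a dedicated process-mining library would be needed for full process-map discovery.

```python
import pandas as pd

# Hypothetical event log: one row per activity in a complaint case.
log = pd.DataFrame({
    "case_id":    [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "department": ["Waterworks Division", "Sewage Treatment Division", "Urban Design Division",
                   "Waterworks Division", "Green Growth Division",
                   "Sewage Treatment Division", "Urban Design Division",
                   "Sewage Treatment Division", "Green Growth Division"],
    "activity":   ["review", "review", "approve",
                   "review", "approve",
                   "review", "complement", "review", "approve"],
    "timestamp":  pd.to_datetime([
        "2014-01-02", "2014-01-10", "2014-01-20",
        "2014-02-01", "2014-02-05",
        "2014-03-01", "2014-03-20", "2014-05-10", "2014-06-01"]),
})

# Department frequency and cumulative share (which departments carry the load?).
freq = log["department"].value_counts()
print((freq.cumsum() / freq.sum()).round(2))

# Case-level throughput time and number of 'complement' decisions.
cases = log.groupby("case_id").agg(
    start=("timestamp", "min"),
    end=("timestamp", "max"),
    complements=("activity", lambda a: (a == "complement").sum()),
)
cases["days"] = (cases["end"] - cases["start"]).dt.days
print(cases[["days", "complements"]])
print(cases[["days", "complements"]].corr())  # do complements lengthen cases?
```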

    Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

    • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
      • Journal of Intelligence and Information Systems
      • /
      • v.25 no.2
      • /
      • pp.141-166
      • /
      • 2019
    • Recently, channels such as social media and SNS create enormous amounts of data, and among all kinds of data, the portion of unstructured data represented as text has increased geometrically. Because it is difficult to examine all of this text, it is important to access the data rapidly and grasp the key points of the text. Owing to this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have recently been proposed to generate summaries objectively and effectively, so-called 'automatic summarization'. However, most text summarization methods proposed to date construct summaries based on the frequency of contents in the original documents. Such summaries have a limitation in covering low-weight subjects that are mentioned less often in the original text. If summaries include only the contents of the major subjects, bias occurs and information is lost, so it is hard to ascertain every subject the documents contain. To avoid this bias, it is possible to summarize with respect to the balance between the topics a document has, so that all subjects in the document can be ascertained, but an unbalanced distribution between those subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and also to allocate the portions of subjects equally, so that even sentences of minor subjects are sufficiently included in the summary. In this study, we propose a 'subject-balanced' text summarization method that preserves the balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation concepts, 'completeness' and 'succinctness'. Completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary has minimal duplication within itself. The proposed method has three phases. The first phase is constructing subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, it is possible to identify highly related terms for every topic, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; in this method, they are called 'seed terms'. However, these terms alone are too few to explain each subject, so a sufficient number of terms similar to the seed terms is needed for a well-constructed subject dictionary. Word2Vec is used for word expansion to find terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity: the higher the cosine similarity between two terms, the stronger the relationship between them. Terms that have high similarity values with the seed terms of each subject are therefore selected, and by filtering these expanded terms the subject dictionary is finally constructed. The next phase is allocating subjects to every sentence of the original documents. To grasp the contents of all sentences, a frequency analysis is first conducted with the specific terms that compose the subject dictionaries. The TF-IDF weight of each subject is calculated after the frequency analysis, which makes it possible to determine how much each sentence explains each subject. However, the TF-IDF weight has the limitation that it can increase without bound, so the TF-IDF weights of every subject in a sentence are normalized to values between 0 and 1. Each sentence is then allocated to the subject with the maximum TF-IDF weight among all subjects, so that sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents and minimizes duplication within the summary itself. For the evaluation of the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the summaries from the proposed method and frequency-based summaries was also performed, and the results verified that the summaries from the proposed method better retain the balance of all the subjects that the documents originally have.
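A minimal sketch of the seed-term expansion step described above (training Word2Vec and expanding each subject's seed terms with cosine-similar words) is given below using gensim on a toy corpus; the corpus, the seed terms, and the similarity cutoff are assumptions for illustration, not the paper's data or parameters.

```python
from gensim.models import Word2Vec

# Toy tokenized review corpus (in practice: tens of thousands of reviews).
corpus = [
    ["room", "clean", "bed", "comfortable", "pillow"],
    ["staff", "friendly", "service", "helpful", "reception"],
    ["breakfast", "buffet", "coffee", "delicious", "restaurant"],
    ["room", "spacious", "bed", "quiet", "view"],
    ["service", "reception", "staff", "polite", "checkin"],
]

# Train word vectors on the corpus.
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=200, seed=1)

# Seed terms per subject (assumed), expanded with cosine-similar terms.
seed_terms = {"room_quality": ["room", "bed"], "service": ["staff", "service"]}
sim_cutoff = 0.2  # assumed filtering threshold

subject_dictionary = {}
for subject, seeds in seed_terms.items():
    expanded = set(seeds)
    for seed in seeds:
        for term, sim in model.wv.most_similar(seed, topn=5):
            if sim >= sim_cutoff:
                expanded.add(term)
    subject_dictionary[subject] = sorted(expanded)

print(subject_dictionary)
```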

    A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

    • Kim, Hyung Su;Hong, Seung Woo
      • Journal of Intelligence and Information Systems
      • /
      • v.26 no.4
      • /
      • pp.111-126
      • /
      • 2020
    • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers. This is because retaining existing customers is far more economical than acquiring new ones; in fact, the acquisition cost of a new customer is known to be five to six times the retention cost of an existing customer. Companies that effectively prevent customer churn and improve customer retention rates are also known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Customer churn prediction, which had been conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme owing to the development of business machine learning technology. Until now, research on customer churn prediction has been carried out actively in sectors such as the mobile telecommunication, financial, distribution, and game industries, which are highly competitive and in which churn management is urgent. These churn prediction studies, however, focused on improving the performance of the churn prediction model itself, for example by simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in terms of practical utilization because most studies treated the entire customer base as a single group when developing a predictive model. The main purpose of the existing related research was thus to improve the performance of the predictive model itself, and there was relatively little research on improving the overall customer churn prediction process. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and the resulting churn rates differ, so it is unreasonable to treat the entire customer base as a single group. It is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment in order to carry out effective churn prediction in heterogeneous industries. Of course, there are some studies in which customers are subdivided using clustering techniques and a churn prediction model is applied to each customer group. Although this approach can produce better predictions than a single model for the entire customer population, there is still room for improvement in that clustering is a mechanical, exploratory grouping technique that calculates distances based on inputs and does not reflect the strategic intent of the firm, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) based on two-dimensional customer loyalty, assuming that successful customer churn management can be achieved more through improvements in the overall process than through the performance of the model itself. CCP/2DL is a series of churn prediction processes that segment customers along two dimensions of loyalty (quantitative and qualitative), conduct secondary grouping of the customer segments according to churn patterns, and then independently apply heterogeneous churn prediction models to each churn pattern group. Performance comparisons were performed with the most commonly applied general churn prediction process and with a clustering-based churn prediction process to assess the relative merits of the proposed process. The general churn prediction process used in this study refers to predicting churn for a single group of customers with a machine learning model, using the most common approach, while the clustering-based churn prediction process first uses clustering techniques to segment customers and then implements a churn prediction model for each group. In cooperation with a global NGO, the proposed CCP/2DL showed better performance in predicting churn than the other methodologies. This churn prediction process is not only effective in predicting churn, but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other related performance marketing activities.
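A minimal sketch of the segment-then-model idea above (grouping customers by quantitative and qualitative loyalty, then fitting a separate churn classifier per segment) is given below with scikit-learn on simulated data; the loyalty features, the median-split segmentation rule, and the model choice are assumptions for illustration, not the CCP/2DL specification.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
n = 2000

# Simulated customer data: two loyalty dimensions plus behavioral features.
df = pd.DataFrame({
    "quant_loyalty": rng.gamma(2.0, 2.0, n),      # e.g., donation/purchase amount
    "qual_loyalty":  rng.random(n),               # e.g., engagement/attitude score
    "recency":       rng.integers(1, 365, n),
    "frequency":     rng.poisson(3, n),
})
df["churn"] = (rng.random(n) < 1 / (1 + np.exp(1.5 * df["qual_loyalty"]
                                               - 0.005 * df["recency"]))).astype(int)

# Two-dimensional loyalty segmentation via median splits (assumed rule).
df["segment"] = ((df["quant_loyalty"] > df["quant_loyalty"].median()).astype(int) * 2
                 + (df["qual_loyalty"] > df["qual_loyalty"].median()).astype(int))

features = ["quant_loyalty", "qual_loyalty", "recency", "frequency"]
scores = {}
for seg, part in df.groupby("segment"):
    # Fit an independent churn classifier for each loyalty segment.
    X_tr, X_te, y_tr, y_te = train_test_split(
        part[features], part["churn"], test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    scores[seg] = f1_score(y_te, clf.predict(X_te))

print("per-segment F1:", {k: round(v, 3) for k, v in scores.items()})
```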

