• Title/Abstract/Keywords: Knowledge management systems

A Study on Recent Research Trend in New Product Development Using Keyword Network Analysis (키워드 네트워크 분석을 이용한 NPD 연구의 진화 및 연구동향)

  • Pyun, JeBum; Jeong, EuiBeom
    • Journal of Korea Society of Industrial Information Systems / v.23 no.5 / pp.119-134 / 2018
  • Today, many firms face an environment of high uncertainty and severe competition due to rapid technological development and the diverse needs of customers. In this environment, one of the most important ways to gain sustainable competitive advantage and a future growth engine is New Product Development (NPD), a central issue for both practice and academia. This study therefore intends to provide new value to practitioners and researchers in NPD by analyzing current and future research trends in the field. To this end, we bibliometrically analyzed networks of keywords from papers published in eminent journals indexed in the Scopus database, generating insights that previous reviews on the topic have not captured. As a result, we could map the extant research streams in the NPD field and trace how specific research topics have changed over time based on the connected relationships among keywords. In addition, we projected general future research trends in the NPD field from the keywords' preferential attachment processes. The analysis confirmed that the NPD keyword network is a small-world network whose degree distribution follows a power law, and that the network grows through link formation driven by keyword preferential attachment. Component and centrality analyses showed that keywords such as Innovation, New product innovation, Risk management, Concurrent engineering, Research and development, and Product life cycle management are highly central in the NPD keyword network. Examining the change in keyword preferential attachment over time, we suggested needed new research directions, including i) NPD collaboration with suppliers, ii) NPD under market uncertainty, iii) NPD converging with other academic areas such as technology management and knowledge management, and iv) NPD from the perspective of small and medium-sized enterprises (SMEs). The results of this study can be used to identify research trends in NPD and new research themes for interdisciplinary studies with other disciplines.
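
The centrality and preferential-attachment analysis this abstract describes can be illustrated with a small keyword co-occurrence network. Below is a minimal sketch assuming per-paper keyword lists and the networkx package; the data and parameters are placeholders, not the paper's.

```python
from itertools import combinations
from collections import Counter
import networkx as nx

# Hypothetical input: one author-keyword list per published paper.
papers = [
    ["innovation", "new product development", "risk management"],
    ["innovation", "concurrent engineering"],
    ["new product development", "product life cycle management"],
    ["innovation", "research and development", "risk management"],
]

# Link two keywords whenever they co-occur in the same paper.
G = nx.Graph()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# A heavy-tailed degree distribution plus short path lengths is the
# signature of the scale-free, small-world structure the paper reports.
print(sorted(Counter(dict(G.degree()).values()).items()))

# Centrality analysis to surface the most central keywords.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
print(sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```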

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia pacific journal of information systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents have not benefited from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, there is a given vocabulary, and the aim is to match its terms to the texts; that is, keyword assignment seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter, the aim is to extract keywords by their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and candidate keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems implementing IVSM were built: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first was deployed in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on papers published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems performed much better than baseline systems that generate keywords based on simple probability, and IVSM showed performance comparable to Extractor, the representative keyword extraction system developed by Turney. As electronic documents proliferate, we expect the IVSM proposed in this paper to be applicable to many electronic documents in Web-based communities and digital libraries.
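
The five-step IVSM assignment process lends itself to a compact illustration. The following is a minimal sketch of the cosine-similarity matching idea, assuming toy keyword sets and a toy document; the weights and names are placeholders, not the authors' actual scheme.

```python
import math
from collections import Counter

# Hypothetical keyword sets with per-keyword weights (step 1 input).
keyword_sets = {
    "logistics": {"logistics": 3.0, "port": 2.0, "shipping": 2.0},
    "retail":    {"distribution": 3.0, "retail": 2.0, "consumer": 1.0},
}

def cosine(u, v):
    # Steps (1) and (3): vector lengths; step (4): cosine similarity.
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Step (2): preprocess/parse the target document (tokenization only here).
doc = "port logistics in shipping networks and port operations"
doc_vec = Counter(doc.lower().split())  # term frequencies

# Step (5): rank keyword sets by similarity and assign the top ones.
ranked = sorted(keyword_sets.items(),
                key=lambda kv: cosine(kv[1], doc_vec), reverse=True)
print([name for name, _ in ranked])  # -> ['logistics', 'retail']
```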

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang; Cho, Kuentae; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.101-123 / 2013
  • Recently, due to advancements in science and information technology, socio-economic and business areas are shifting from an industrial economy to a knowledge economy. Companies need to create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. The identification of major trends in technology research and the interdisciplinary, knowledge-based prediction of integrated technologies and promising techniques are therefore required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. This study further intends to offer a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is the quantitative analysis of the characteristics of a body of literature. Traditional bibliometrics is limited in that it cannot capture the relationship between trends in technology management and the technologies themselves, since it focuses on quantitative indices such as citation frequency. To overcome this, network-focused bibliometrics, which mainly uses co-citation and co-word analysis, has been used instead. In this study, keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then analyzed it. Over the past 40 years, studies of social networks have attempted to understand social interactions through network structures represented by connection patterns; social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in matrix form. In our context, the matrix relates rows (papers) to columns (keywords), with binary relations: each cell contains 1 if the paper includes the keyword and 0 otherwise. Even though published papers have no direct relations to one another, relations can be derived from this paper-keyword matrix; for example, a keyword network can be derived by connecting keywords that appear together in one or more of the same papers, as sketched below. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend toward a monopoly. Third, the gap between the rich (nodes with more edges) and the poor (nodes with fewer edges) grows over time. Fourth, most newly entering keywords become poor nodes within about two to three years. Finally, the keywords with high degree, betweenness, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME." We hope these results will help MOT researchers identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics.
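
The paper-keyword matrix construction described above can be sketched directly. Below is a minimal illustration assuming a binary paper-by-keyword matrix; projecting it with M^T M yields the keyword co-occurrence counts that feed the centrality measures. All data are hypothetical.

```python
import numpy as np

papers = ["p1", "p2", "p3"]
keywords = ["innovation", "r&d", "patent", "sme"]
M = np.array([[1, 1, 0, 0],    # p1 includes innovation, r&d
              [0, 1, 1, 0],    # p2 includes r&d, patent
              [1, 0, 0, 1]])   # p3 includes innovation, sme

# Keyword-keyword co-occurrence: (M^T M) counts shared papers; the
# diagonal holds each keyword's own frequency and is zeroed out.
C = M.T @ M
np.fill_diagonal(C, 0)
print(C)

# Degree of each keyword in the derived network (number of co-occurring
# partners), the raw ingredient for the centrality measures in the paper.
degree = (C > 0).sum(axis=1)
print(dict(zip(keywords, degree)))
```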

Membership Fluidity and Knowledge Collaboration in Virtual Communities: A Multilateral Approach to Membership Fluidity (가상 커뮤니티의 멤버 유동성과 지식 협업: 멤버 유동성에 대한 다각적 접근)

  • Park, Hyun-jung; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.19-47 / 2015
  • In this era of the knowledge economy, a variety of virtual communities are proliferating for the purpose of knowledge creation and utilization. Since the voluntary contributions of members are the essential source of knowledge, member turnover can have significant implications for the survival and success of virtual communities. However, there is a dearth of research on the effect of membership turnover, and even how to measure it in virtual communities remains unclear. In a traditional context, membership turnover is calculated as the ratio of the number of departing members to the average number of members for a given time period. In virtual communities, while the influx of newcomers can be clearly measured, the magnitude of departure is elusive, since explicit withdrawals are seldom executed. In addition, there is no common way to determine the average number of community members, who return and contribute intermittently at will. This study first examines the limitations of applying the traditional turnover concept to virtual communities and proposes five membership fluidity measures based on a preliminary analysis of the editing behaviors behind 2,978 featured articles in English Wikipedia. It then investigates the relationships between three selected membership fluidity measures and group collaboration performance, reflecting a moderating effect of work characteristics. We obtained the following results. First, membership turnover relates to collaboration efficiency in a right-shortened U-shaped manner, moderated by work characteristics: given the same turnover rate, the promotion likelihood for a more professional task is lower than for a less professional task, and the difference diminishes as the turnover rate increases. Second, contribution period relates to collaboration efficiency in a left-shortened U-shaped manner, moderated by work characteristics: the marginal performance change per unit change of contribution period is greater for a less professional task. Third, the number of new participants per month relates to collaboration efficiency in a left-shortened reversed U-shaped manner, for which the moderating effect of work characteristics appears insignificant.
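
The traditional turnover formula quoted in the abstract, and one possible proxy for the elusive departure count in virtual communities, can be sketched as follows. The inactivity window is an illustrative assumption, not one of the paper's five measures.

```python
def traditional_turnover(departures, avg_members):
    """Departing members / average membership for the period."""
    return departures / avg_members

print(traditional_turnover(departures=15, avg_members=120))  # -> 0.125

def inferred_departures(last_edit_days_ago, inactivity_window=90):
    """Treat a member as departed once their last contribution is older
    than the window, since explicit withdrawals are rarely observed."""
    return sum(1 for d in last_edit_days_ago if d > inactivity_window)

print(inferred_departures([5, 40, 200, 365, 12]))  # -> 2
```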

Real-time CRM Strategy of Big Data and Smart Offering System: KB Kookmin Card Case (KB국민카드의 빅데이터를 활용한 실시간 CRM 전략: 스마트 오퍼링 시스템)

  • Choi, Jaewon; Sohn, Bongjin; Lim, Hyuna
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.1-23 / 2019
  • Big data refers to data that is difficult to store, manage, and analyze with existing software. As consumers' changing lifestyles expand the size and variety of their needs, companies are investing considerable time and money in understanding those needs. Companies in various industries utilize big data to improve their products and services, analyze unstructured data, and respond to customers in real time. The financial industry operates decision support systems that use financial data to develop financial products and manage customer risk. The use of big data allows financial institutions to create added value along the value chain and to develop more advanced customer relationship management (CRM) strategies. By utilizing the purchase data and unstructured data generated by credit cards, financial institutions can identify and satisfy customer needs. As CRM has grown alongside information and knowledge systems, its processes have become granular enough to measure in real time. With the development of information services and CRM, platforms have changed, making it possible to meet consumer needs in diverse environments. Recently, as consumer needs have diversified, more companies are providing systematic marketing services using data mining and advanced CRM techniques. KB Kookmin Card, which started as a credit card business in 1980, achieved early stabilization of its processes and computer systems and actively introduced new technologies and systems. In 2011, the bank and credit card businesses separated, and the company led the 'Hye-dam Card' and 'One Card' markets, which departed from existing concepts. In 2017, total domestic credit card and check card usage grew by 5.6% year-on-year to 886 trillion won. In 2018, the company received a long-term credit rating of AA+, confirming a top-tier rating built on effective marketing strategies and services. At present, KB Kookmin Card emphasizes strategies that meet the individual needs of customers and maximize consumer lifetime value by utilizing customers' payment data. It combines internal and external big data to conduct marketing in real time and to operate monitoring systems. Using customer card information, KB Kookmin Card has built a marketing system that detects real-time behavior from big data such as homepage visits and purchase history. The system is designed to capture customer action events in real time and execute marketing based on the store, location, amount, and usage pattern of card transactions. The company has created more than 280 scenarios based on the customer life cycle and runs marketing plans that accommodate various customer groups in real time. It operates a smart offering system, a highly efficient marketing management system that detects customers' card usage, behavior, and location information in real time and provides further refined services in combination with various apps. This study traces the change from traditional CRM to current CRM strategy, examines the current strategy through KB Kookmin Card's big data utilization and marketing activities, and proposes a marketing plan for KB Kookmin Card's future CRM strategy. For the success and continued growth of the smart offering system, KB Kookmin Card should invest in increasingly sophisticated ICT and human resources, and should establish and systematically execute a strategy for securing profit from a long-term perspective. In particular, amid ongoing concerns over privacy violations and personal information leakage, efforts should be made to win customers' acceptance of marketing that uses customer information and to build a corporate image that emphasizes security.
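
The scenario-based smart offering system described above is, at its core, real-time matching of card-transaction events against predefined rules. The following is a minimal sketch of that idea; the scenario contents and field names are illustrative assumptions, not KB Kookmin Card's actual rules.

```python
from dataclasses import dataclass

@dataclass
class CardEvent:
    merchant_category: str
    amount: int        # KRW
    location: str

# Each scenario pairs a predicate over the event with an offer to send.
scenarios = [
    (lambda e: e.merchant_category == "travel" and e.amount >= 500_000,
     "travel-insurance offer"),
    (lambda e: e.merchant_category == "grocery" and e.location == "Seoul",
     "grocery cashback coupon"),
]

def match_offers(event):
    """Return every offer whose scenario predicate fires for this event."""
    return [offer for cond, offer in scenarios if cond(event)]

print(match_offers(CardEvent("travel", 720_000, "Busan")))
```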

Structural Relationships Among Factors to Adoption of Telehealth Service (원격의료서비스 수용요인의 구조적 관계 실증연구)

  • Kim, Sung-Soo; Ryu, See-Won
    • Asia pacific journal of information systems / v.21 no.3 / pp.71-96 / 2011
  • Within the traditional medical delivery system, patients residing in medically vulnerable areas, those with limited mobility, and nursing facility residents have had limited access to good healthcare services. However, information and communication technology (ICT) provides a convenient and useful means of overcoming distance and time constraints. ICT is being integrated with biomedical science and technology in ways that offer new, high-quality medical services. As a result, rapid technological advancement is expected to play a pivotal role in bringing about innovation in a wide range of medical service areas, such as medical management, testing, diagnosis, and treatment; offering new and improved healthcare services; and effecting dramatic changes in current medical services. The aging population and the rise of chronic diseases have increased medical expenses. In response to the growing demand for efficient healthcare services, ICT-based telehealth services are being emphasized globally. Telehealth services have so far been implemented mainly as pilot projects and in system development and technological research. With the service about to be implemented in earnest, it is necessary to study its overall acceptance by consumers, which is expected to contribute to the development and activation of a variety of services. In this sense, the study empirically examines the structural relationships among the acceptance factors for telehealth services based on the Technology Acceptance Model (TAM). Data were collected by showing audiovisual material on telehealth services to online panels and asking them to respond to a structured questionnaire, a procedure known as the information acceleration method. Among the 1,165 adult respondents, 608 valid samples were finally chosen; the rest were excluded because of incomplete answers or exceeding the allotted time. To test the reliability and validity of the scale items, we carried out reliability and factor analyses, and to explore the causal relations among latent variables, we conducted structural equation modeling using AMOS 7.0 and SPSS 17.0. The research outcomes are as follows. First, service quality, innovativeness of medical technology, and social influence had statistically significant effects on perceived ease of use and perceived usefulness of the telehealth service, and those two factors in turn had a positive impact on willingness to accept the service. In addition, social influence had a direct, significant effect on intention to use, paralleling the TAM results of previous research on technology acceptance. This shows that the proposed research model effectively explains the acceptance of the telehealth service. Second, information privacy concerns had an insignificant impact on perceived ease of use of the telehealth service. From this, it can be gathered that concerns over information protection and security have declined as information technology has advanced beyond its initial period, so that improvements in the quality of medical services kept privacy concerns from acting as an inhibiting factor in the acceptance of the telehealth service. Thus, when other factors have a strong impact on ease of use and usefulness, concerns that mattered in the initial period of technology acceptance may become less relevant. However, as other studies have revealed, users' information privacy concerns are a major factor affecting technology acceptance, so caution must be exercised in interpreting this result, and further study is required. Numerous information technologies with outstanding performance and innovativeness nevertheless attract few consumers. A revised bill for those urgently in need of telehealth services is about to be approved in the National Assembly. As telemedicine is implemented between doctors and patients, a wide range of systems that improve the quality of healthcare services will be designed. In this sense, this study on consumer acceptance of telehealth services is meaningful and offers solid academic evidence; based on its implications, it can be expected to contribute to the activation of telehealth services. Further study is needed on additional acceptance factors, such as motivation to remain healthy, healthcare involvement, health knowledge, and control of health-related behavior, in order to develop services tailored to customer segments based on health factors. Future work may also draw on cognitive behavior models other than the TAM, such as the health belief model.
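
The structural model tested above can be expressed compactly in SEM syntax. Below is a minimal sketch using the Python semopy package rather than the AMOS/SPSS tools the authors used; the latent constructs mirror the abstract, but the indicator and column names are placeholders.

```python
import pandas as pd
import semopy

model_desc = """
# measurement model (each latent construct from its questionnaire items)
PEOU =~ peou1 + peou2 + peou3
PU   =~ pu1 + pu2 + pu3
INT  =~ int1 + int2

# structural model mirroring the hypothesized paths in the abstract
PEOU ~ service_quality + innovativeness + social_influence + privacy_concern
PU   ~ PEOU + service_quality + innovativeness + social_influence
INT  ~ PU + PEOU + social_influence
"""

data = pd.read_csv("telehealth_survey.csv")  # hypothetical survey file
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```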

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk; Kim, Ji-Hun; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it is becoming more important to handle these attacks appropriately, and there is strong interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Although they perform very well under normal conditions, they cannot handle new or unknown patterns of network attack. As a result, recent studies on intrusion detection use artificial intelligence techniques, which can proactively respond to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most studies have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four binary classification models that may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the integration model whose prediction error (i.e., misclassification rate) is lowest is generated. In the second step, the model searches for the optimal classification threshold for flagging intrusions, the one that minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error cost scheme. There are two common forms of error in intrusion detection. The first is the false-positive error (FPE), in which normal activity is wrongly judged to be an intrusion, potentially resulting in unnecessary countermeasures. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared to an FPE, an FNE is more serious, so total misclassification cost is affected more by FNEs than by FPEs. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection, collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We compared the results of our model with those of the single techniques to confirm the superiority of the proposed model. LOGIT and DT were tested using PASW Statistics v18.0, ANN using Neuroshell R4.0, and SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that our GA-based model outperformed all the comparative models in detecting network intrusions, both from the accuracy perspective and from the total misclassification cost perspective. Consequently, we expect this study to contribute to building cost-effective intelligent intrusion detection systems.
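
The two-step design, GA-optimized combining weights followed by a cost-minimizing threshold, can be sketched in a few lines. The toy example below uses simulated base-classifier outputs and illustrative costs; the GA settings and cost ratio are assumptions, not the paper's, and for brevity the cost objective is used in both steps (the paper minimizes error rate in step one).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)  # 1 = intrusion, 0 = normal
# Simulated stand-ins for the LOGIT/DT/ANN/SVM probability outputs.
P = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n, 4)), 0, 1)

C_FP, C_FN = 1.0, 5.0  # false negatives assumed far costlier (illustrative)

def total_cost(weights, threshold):
    w = np.abs(weights) + 1e-12
    w /= w.sum()
    pred = (P @ w) >= threshold
    fp = np.sum(pred & (y == 0))   # false alarms
    fn = np.sum(~pred & (y == 1))  # missed intrusions
    return C_FP * fp + C_FN * fn

# Step 1: a simple GA over the combining-weight vector.
pop = rng.random((40, 4))
for _ in range(60):
    fitness = np.array([total_cost(w, 0.5) for w in pop])
    parents = pop[np.argsort(fitness)[:20]]            # selection
    i, j = rng.integers(0, 20, 20), rng.integers(0, 20, 20)
    children = (parents[i] + parents[j]) / 2           # crossover
    children += rng.normal(0, 0.05, children.shape)    # mutation
    pop = np.vstack([parents, children])
best_w = min(pop, key=lambda w: total_cost(w, 0.5))

# Step 2: sweep the classification threshold for minimum total cost.
best_t = min(np.linspace(0.05, 0.95, 19), key=lambda t: total_cost(best_w, t))
print(best_w, best_t, total_cost(best_w, best_t))
```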

The Role of Process Systems Engineering for Sustainability in the Chemical Industries (화학공정 산업에서의 지속가능성과 공정시스템 공학)

  • Jang, Namjin; Dan, Seungkyu; Shin, Dongil; Lee, Gibaek; Yoon, En Sup
    • Korean Chemical Engineering Research / v.51 no.2 / pp.221-225 / 2013
  • Sustainability, in general, means protecting environmental resources and maintaining economic prosperity while considering social, economic, and environmental effects, as well as human health and the enhancement of life. Serious consideration of sustainability must address the overall cycle of feedstock, resource extraction, transportation, and production, in addition to environmental effects. Sustainable development of the chemical industries should proceed hand in hand with strengthening their chemical process safety; in this respect, chemical process safety can be seen as an opportunity to enhance international competitiveness. A new paradigm in chemical process safety is forming around the overall life cycle, from the basic design of existing systems through production processes. Improving chemical process safety requires an integrated smart system comprising chemical safety databases and knowledge bases, improved methods of quantitative risk analysis, and a management system. This paper discusses the need to consider the overall life cycle in chemical process safety and proposes new technology to improve sustainability. To develop sustainable industries through process systems engineering, the three S's of Safety, Stability, and Security will have to be combined appropriately.

X-TOP: Design and Implementation of TopicMaps Platform for Ontology Construction on Legacy Systems (X-TOP: 레거시 시스템상에서 온톨로지 구축을 위한 토픽맵 플랫폼의 설계와 구현)

  • Park, Yeo-Sam; Chang, Ok-Bae; Han, Sung-Kook
    • Journal of KIISE: Computing Practices and Letters / v.14 no.2 / pp.130-142 / 2008
  • Unlike other ontology languages, Topic Maps can integrate a large number of heterogeneous information resources using locational information, without any transformation of the information. Although many editors have been developed for topic maps, they are standalone tools intended only for writing XTM documents. As a result, these tools take too much time to handle large-scale data and raise practical problems when integrating with legacy systems, which are mostly based on relational databases. In this paper, we map a large-scale topic map structure based on XTM 1.0 onto a relational database (RDB) structure to minimize processing time and to build ontologies within legacy systems. We implement a topic map platform called X-TOP that enhances the efficiency of ontology construction and provides interoperability between XTM documents and the database. Moreover, conventional SQL tools and other application development tools can be used for topic map construction in X-TOP. X-TOP is implemented in a 3-tier architecture to support flexible user interfaces and diverse DBMSs. This paper demonstrates the usability of X-TOP through a comparison with conventional tools and an application to healthcare cancer ontology management.
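
The XTM-to-RDB mapping at the heart of X-TOP can be illustrated with a drastically simplified relational schema. The sketch below (using SQLite for self-containment) is an assumption about the general shape of such a mapping, not X-TOP's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE topic       (id INTEGER PRIMARY KEY, subject_id TEXT);
CREATE TABLE base_name   (topic_id INTEGER REFERENCES topic(id), name TEXT);
CREATE TABLE occurrence  (topic_id INTEGER REFERENCES topic(id),
                          resource_ref TEXT);   -- locational information
CREATE TABLE association (id INTEGER PRIMARY KEY, type_topic INTEGER);
CREATE TABLE member      (assoc_id INTEGER REFERENCES association(id),
                          role_topic INTEGER, player_topic INTEGER);
""")

# A tiny topic map: two topics linked by one typed association.
conn.executemany("INSERT INTO topic VALUES (?, ?)",
                 [(1, "cancer"), (2, "oncology"), (3, "studied-in")])
conn.execute("INSERT INTO association VALUES (1, 3)")
conn.executemany("INSERT INTO member VALUES (1, ?, ?)", [(1, 1), (2, 2)])

# Conventional SQL tools can then answer topic-map questions directly.
rows = conn.execute("""
    SELECT t.subject_id FROM member m JOIN topic t ON t.id = m.player_topic
    WHERE m.assoc_id = 1
""").fetchall()
print(rows)  # -> [('cancer',), ('oncology',)]
```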

A Study on the Development and Implementation of a Data-mining Based Prototype for Hospital Bill Claim Reduction System (데이터마이닝 기법을 활용한 의료보험 진료비청구 삭감분석시스템 개발 및 구현에 관한 연구)

  • Yoo, Sang-Jin; Park, Mun-Ro
    • Information Systems Review / v.7 no.1 / pp.275-295 / 2005
  • Changes in the business environment caused by the globalization of the world economy and the emergence of the knowledge society have forced hospitals to equip themselves with tools for enhanced competitiveness. In other words, hospitals must pursue three targets simultaneously: acquiring advanced medical skills and equipment, improving service levels for patients, and achieving superior managerial performance. This study suggests a way to reduce hospital bill claim reductions as one route to superior managerial performance. A high claim reduction rate hurts both the hospital's revenue stream and its reliability; to stay competitive, hospitals need to devise ways to cut the reduction rate as much as possible. In this study, a prototype system was developed and implemented to test whether the reduction rate can be cut through in-depth analysis of the causes of reductions. The prototype was built using data mining techniques, specifically an association rules algorithm, and its performance was then tested using D hospital's live data.
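
The cause-of-reduction analysis the prototype performs is essentially association-rule mining over claim attributes. Below is a minimal self-contained sketch of that idea; the attributes, data, and thresholds are hypothetical, not D hospital's.

```python
from itertools import combinations
import pandas as pd

# Each row is a claim; columns flag attributes present on that claim.
claims = pd.DataFrame({
    "drug_A":       [1, 1, 0, 1, 0, 1],
    "missing_code": [1, 1, 0, 1, 0, 0],
    "long_stay":    [0, 1, 1, 0, 0, 1],
    "reduced":      [1, 1, 0, 1, 0, 0],   # claim was cut by the insurer
}).astype(bool)

antecedent_cols = [c for c in claims.columns if c != "reduced"]
n = len(claims)

# Enumerate small attribute combinations and score the rule
# "attributes -> reduced" by support and confidence.
for size in (1, 2):
    for combo in combinations(antecedent_cols, size):
        has_all = claims[list(combo)].all(axis=1)
        support = (has_all & claims["reduced"]).sum() / n
        confidence = ((has_all & claims["reduced"]).sum() / has_all.sum()
                      if has_all.sum() else 0.0)
        if support >= 0.3 and confidence >= 0.8:
            print(combo, f"support={support:.2f}", f"conf={confidence:.2f}")
```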