
Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves two limitations of the existing methodology: the data imbalance caused by the scarcity of default events, and the inability to reflect differences in default risk that exist among ordinary (non-defaulting) companies. Because the model was trained only on corporate information that is also available for unlisted companies, it can appropriately derive default risks for unlisted companies that lack stock price information. It can therefore provide stable default risk assessment for firms that are difficult to evaluate with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been actively studied recently, most studies make predictions with a single model and therefore suffer from model bias. A stable and reliable valuation methodology is required, given that a firm's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method.
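The Merton-model calculation described above, deriving a default probability from market capitalization (equity value) and stock price volatility, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the debt level, risk-free rate, and one-year horizon are hypothetical stand-ins, and the fixed-point iteration is one common way to back out the unobserved asset value and asset volatility.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_pd(equity, equity_vol, debt, r=0.02, T=1.0, n_iter=100):
    """Estimate a default probability from market equity value and volatility.

    Treats equity as a call option on firm assets (Merton model) and solves
    the two Black-Scholes-Merton relations for asset value V and asset
    volatility sV by fixed-point iteration.
    """
    # Initial guesses: assets = equity + debt, asset vol scaled by leverage.
    V = equity + debt
    sV = equity_vol * equity / (equity + debt)
    for _ in range(n_iter):
        d1 = (log(V / debt) + (r + 0.5 * sV ** 2) * T) / (sV * sqrt(T))
        d2 = d1 - sV * sqrt(T)
        # Invert E = V*N(d1) - F*exp(-rT)*N(d2)  and  sE*E = N(d1)*sV*V.
        V = (equity + debt * exp(-r * T) * norm_cdf(d2)) / norm_cdf(d1)
        sV = equity_vol * equity / (V * norm_cdf(d1))
    d2 = (log(V / debt) + (r - 0.5 * sV ** 2) * T) / (sV * sqrt(T))
    return norm_cdf(-d2)  # risk-neutral probability of default within T
```

A more leveraged firm (larger `debt` relative to `equity`) yields a higher probability, which is the continuous risk signal the study uses in place of rare binary default events.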
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods, including verification of their adequacy, to be prepared in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced the bias of individual models by using a stacking ensemble technique that synthesizes various machine learning models. This captures the complex nonlinear relationships between default risk and corporate information while preserving the main advantage of machine learning-based default risk prediction: short computation time. To produce the forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on these splits to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and each model was then evaluated on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs were constructed between the stacking ensemble forecasts and each individual model's forecasts. Because the Shapiro-Wilk test showed that no pair followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two sets of forecasts in each pair differed significantly. The forecasts of the stacking ensemble model showed statistically significant differences from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can apply machine learning-based default risk prediction, since traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
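The stacking procedure described above, splitting the training data and feeding sub-model forecasts to a meta-model, can be sketched as follows. The dataset, sub-model choices, and hyperparameters are illustrative stand-ins (the paper's corporate data and its CNN sub-model are not reproduced here); only the seven-fold out-of-fold construction follows the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the corporate dataset (the paper's data is proprietary).
X, y = make_classification(n_samples=700, n_features=20, random_state=0)

sub_models = [RandomForestClassifier(n_estimators=50, random_state=0),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)]

# Out-of-fold forecasts: each sub-model is trained on six of seven folds and
# predicts the held-out seventh, so the meta-model's inputs are never produced
# by a model that has already seen that row.
kf = KFold(n_splits=7, shuffle=True, random_state=0)
oof = np.zeros((len(X), len(sub_models)))
for tr, va in kf.split(X):
    for j, model in enumerate(sub_models):
        model.fit(X[tr], y[tr])
        oof[va, j] = model.predict_proba(X[va])[:, 1]

# The stacking meta-learner combines the sub-model forecasts.
meta = LogisticRegression().fit(oof, y)
```

Because the meta-learner sees only out-of-fold forecasts, its weights reflect each sub-model's genuine generalization rather than its fit to its own training rows, which is how stacking reduces the single-model bias the abstract describes.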

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, search providers are trying to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information continues to emerge. However, it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the unsupervised nature of the learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and enhance its effectiveness. The study thus has three significances. First, it presents a practical, simple automatic knowledge extraction method that can actually be applied. Second, a simple problem definition makes performance evaluation possible. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on the 30 individual stocks with the highest publication frequency between May 30, 2017 and May 21, 2018 are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the 100 most frequent entities are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. When a new entity appears in the test set, its score is calculated by every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the model, we measure prediction power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the test set.
As a result, the presented model shows a 69.3% hit ratio on the test set of 2,526 reports. This hit ratio is meaningfully high despite the constraints under which the research was conducted. Looking at prediction performance by stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) perform far below average; this may be due to interference from similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for information related to a user's investment intention. Graph data is generated using only a named entity recognition tool and applied to the neural tensor network, without learning a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain, however: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
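The per-stock score functions described above can be sketched with the standard neural tensor network form (a bilinear tensor term plus a linear term passed through a nonlinearity). The dimensions, the three stock names, and the random parameters below are illustrative stand-ins: a trained model would learn these parameters from the report corpus, whereas this sketch only shows the scoring and argmax-over-stocks prediction step.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # one-hot entity dimension (top-100 entities) and tensor slices

def ntn_score(e1, e2, W, V, b, u):
    """NTN score: u . tanh(e1' W[i] e2  +  V [e1; e2]  +  b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(W.shape[0])])
    return u @ np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b)

# One score function per stock (parameters randomly initialised here; the
# paper trains one function per item on its report-derived entities).
stocks = ["LG ELECTRONICS", "KiaMtr", "Mando"]
params = {s: (rng.normal(size=(k, d, d)) * 0.01,   # W: tensor slices
              rng.normal(size=(k, 2 * d)) * 0.01,  # V: linear map on [e1; e2]
              np.zeros(k),                          # b: bias
              rng.normal(size=k))                   # u: output weights
          for s in stocks}

# A new entity is scored by every function; the highest-scoring stock wins.
entity, context = np.eye(d)[7], np.eye(d)[3]  # hypothetical one-hot entities
scores = {s: ntn_score(entity, context, *p) for s, p in params.items()}
best = max(scores, key=scores.get)
```

The hit ratio in the abstract is simply the fraction of test entities for which this argmax lands on the correct stock.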

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their AI technology public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning software as open source, a company strengthens its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). A case study of three companies' adoption experiences (two successes and one failure) revealed that seven of the eight TOE factors, along with several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework.
From the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team; the hardware (GPU) environment; a data enterprise cooperation system; a deep learning framework platform; and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, research developers' use of deep learning frameworks must be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three conditions at the usage stage, companies increase their number of deep learning research developers, their ability to use the framework, and their GPU resource support. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves developers' research efficiency and effectiveness, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements developers' expertise by sharing information from the external open source framework community with the in-house community and by running developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework was proposed: define the project problem; confirm that the deep learning methodology is the right method; confirm that the deep learning framework is the right tool; use the deep learning framework in the enterprise; and spread the framework within the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework. Once they are cleared, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors are realized for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using various techniques now exist. Recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term prediction and for statistical and mathematical techniques. Most studies using technical indicators have predicted future stock market movement (usually the next trading day) as a binary classification: rising or falling. However, binary classification has many shortcomings for predicting trends, identifying trading signals, and signaling portfolio rebalancing. In this study, we predict the stock index by extending the existing binary scheme to a multi-class system of stock index trends: upward trend, boxed (sideways), and downward trend. Whereas this multi-classification problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN), we use multi-class support vector machines (MSVM), which have shown superior prediction performance, and propose an optimization model that wraps the MSVM in a genetic algorithm to improve its performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data: Korea's KOSPI200 stock index trend. The results show that the proposed method outperforms the conventional multi-class SVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection plays a very important role in predicting the stock index trend, and its contribution to model improvement is larger than that of the other factors. Our research primarily aims to predict trend segments in order to capture signals or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) for the KOSPI200 stock index and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using various statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. For comparison, MDA, MLOGIT, CBR, ANN, and MSVM models were also built. The MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM approaches. Although some limitations remain, the final experimental results demonstrate that the proposed GA-MSVM performs at a significantly higher level than all comparative models.
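The GA-wrapper idea described above, encoding feature and instance selection together as one chromosome and letting a genetic algorithm search the space, can be sketched as follows. This is a simplified illustration with synthetic three-class data, a tiny population, and fitness measured directly on the held-out split (the paper would tune on validation data and also evolve kernel parameters, which are fixed here). `SVC` handles the multi-class case with the one-against-one scheme internally.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy 3-class stand-in for the KOSPI200 trend data (up / boxed / down).
X, y = make_classification(n_samples=400, n_features=15, n_informative=6,
                           n_classes=3, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

rng = np.random.default_rng(1)
n_feat, n_inst = Xtr.shape[1], Xtr.shape[0]

def fitness(chrom):
    """Chromosome = [feature mask | instance mask]; fitness = accuracy."""
    f, i = chrom[:n_feat].astype(bool), chrom[n_feat:].astype(bool)
    if f.sum() == 0 or i.sum() < 10 or np.unique(ytr[i]).size < 3:
        return 0.0  # degenerate selection
    clf = SVC(kernel="rbf").fit(Xtr[np.ix_(i, f)], ytr[i])  # OvO multi-class SVM
    return clf.score(Xte[:, f], yte)

# Minimal generational GA: truncation selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, n_feat + n_inst))
for gen in range(10):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)][-10:]
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(a.size) < 0.5, a, b)  # uniform crossover
        flip = rng.random(a.size) < 0.01                  # mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(c) for c in pop])]
```

Dropping instances as well as features is what distinguishes this wrapper from plain feature selection: noisy or atypical trading days can be excluded from the SVM's training set, which the paper reports as the larger source of improvement.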

A Study on the Nutritive Value and Utilization of Powdered Seaweeds (해조의 식용분말화에 관한 연구)

  • Yu, Jong-Yull; Lee, Ki-Yull; Kim, Sook-Hee
    • Journal of Nutrition and Health / v.8 no.1 / pp.15-37 / 1975
  • I. Subject of the study: the nutritive value and utilization of powdered seaweeds. II. Purpose and importance of the study: A. In Korea a food shortage will be inevitable given the rapidly growing population, so developing a new food from seaweeds not hitherto used for human consumption is an important study. B. Several kinds of seaweed have long been eaten in Korea, mainly as side dishes; a properly powdered seaweed, however, could serve as a good supplement or admixture to cereal flours. C. Adding the powdered seaweed to the cereals that have long been staple foods in this country would secure two benefits: saving cereals and changing the dietary pattern. III. Objects and scope of the study: A. Objects. 1. To develop a powdered seaweed as a new food from seaweeds that have not been used for human consumption. 2. To evaluate the nutritional quality of the products through chemical composition analysis and animal feeding experiments. 3. To conduct experimental cooking and acceptability tests to evaluate the products' value as foodstuffs. 4. To conduct sanitary tests and an economic analysis of the products. B. Scope. 1. Production of seaweed powders: Sargassum fulvellum from the eastern coast and Sargassum patens C.A. from the southern coast, algae not previously used for human consumption, were pulverized through washing, drying, pulverization, etc. 2. Nutritional experiments: a. Chemical composition: proximate components (water, protein, fat, cellulose, sugar, ash, salt), minerals (calcium, phosphorus, iron, iodine), vitamins (A, B1, B2, niacin, C), and amino acids were analyzed for the seaweed powders. b.
Animal feeding experiment: 160 weanling rats (80 male, 80 female) were used as experimental animals, divided into 16 groups of 10. Each group was fed for 12 weeks on a cereal diet (wheat flour, rice powder, barley powder, potato powder, or corn flour) supplemented with 5%, 10%, 15%, 20%, or 30% seaweed powder. After feeding, growth, feed efficiency ratio, protein efficiency ratio, and organ weights were checked, and urine, feces, and serum analyses were conducted. 3. Experimental cooking and acceptability test: a. Several basic studies were conducted to determine the characteristics of the seaweed powder. b. 17 kinds of Korean dishes and 9 kinds of foreign dishes were prepared with cereal flours (wheat, rice, barley, potato, corn) supplemented with 5%, 10%, 15%, 20%, or 30% seaweed powder. c. An acceptability test for the dishes was conducted according to Plank's form. 4. Sanitary test: the heavy metals (Cd, Pb, As, Hg) in the seaweed powders were determined. 5. Economic analysis: the retail price of the seaweed powder was compared with those of other cereals in the market, and an economic analysis was also made from the nutritional point of view by calculating the body weight gained in grams per unit price of each diet. IV. Results of the study and suggestions for application: A. Chemical composition. 1. There is no large difference in proximate components between the powders of Sargassum fulvellum from the eastern coast and Sargassum patens C.A. from the southern coast; seasonal differences are also not significant. The powders contain higher levels of protein, cellulose, ash, and salt than common cereal foods. 2. The levels of calcium (Ca) and iron (Fe) in the powders were significantly higher than in common cereal foods, and the powders are also rich in iodine (I). Unlike cereal foods, the powders contain vitamins A and C.
Vitamins B1 and B2 are also relatively abundant in the powders. Vitamin A in Sargassum fulvellum is high, and the levels of some minerals and vitamins appear to be influenced by season. 3. In the amino acid composition, methionine, isoleucine, lysine, and valine are the limiting amino acids. The protein qualities of Sargassum fulvellum and Sargassum patens C.A. appear to be almost the same and generally good. Seasonal differences in amino acid composition were found. B. Animal feeding experiment. 1. The best growth was found at the 10% supplementation level of the seaweed powder, and a lower growth rate was shown at the 30% level. 2. The 15% supplementation level seems to fulfil, to some extent, the mineral requirements of the animals. 3. No changes were found in organ development, except that kidney weight decreased as the supplementation level increased. 4. There were no significant changes in nitrogen retention, serum cholesterol, serum calcium, or urinary calcium at any supplementation level. 5. The animal feeding experiment indicated that supplementation levels of 5%~15% of the seaweed powder are feasible. C. Experimental cooking and acceptability test. 1. The seaweed powder was utilized better in foreign dishes than in Korean dishes, and higher supplementation levels were possible in foreign dishes. 2. Hae-Jo-Kang and Jeon-Byung were better than Song-Pyun, wheat cake, Soo-Je-Bee, and wheat noodle; Hae-Jo-Kang was excellent in quality even at the 5% supplementation level. 3. The higher the supplementation level, the stickier the cooked products. Song-Pyun and wheat cake were palatable and lustrous at the 2% supplementation level. 4. In drop cookies, higher supplementation levels produced a crisper product compared with other cookies. 5.
Corn cake, thin rice gruel, rice gruel, and potato Jeon-Byung were better in quality than potato Man-Doo and potato noodle; corn cake, thin rice gruel, and rice gruel were excellent even at the 5% supplementation level. 6. In several cooked products some seaweed odor was perceived at supplementation levels of 3% or more; this may be much diminished by the use of proper condiments. D. Sanitary test: there appears to be no heavy metal (Cd, Pb, As, Hg) problem with these seaweed powders when they are used as supplements to cereal flours. E. Economic analysis: the price of the seaweed powder is lower than those of other cereals, and it may fall further when mass production begins. Supplementing cereals with the seaweed powder is also economical by the criterion of animal growth rate. F. It is recommended that these seaweed powders be developed and used as supplements to cereal flours or as other food material; doing so would both save cereals and greatly improve individual nutrition. It is also recommended that feeding experiments with humans be conducted in future.


Land-Cover Change Detection of Western DMZ and Vicinity using Spectral Mixture Analysis of Landsat Imagery (선형분광혼합화소분석을 이용한 서부지역 DMZ의 토지피복 변화 탐지)

  • Kim, Sang-Wook
    • Journal of the Korean Association of Geographic Information Studies / v.9 no.1 / pp.158-167 / 2006
  • The object of this study is to detect land-cover change in the western DMZ and its vicinity, as a basic study toward a decision support system for the conservation or sustainable development of the area in the near future. The DMZ is 4 km wide and 250 km long; it is one of the most highly fortified boundaries in the world and also a unique "thin green line." Environmentalists want to declare the DMZ a natural reserve and biodiversity zone, but with the strengthening of inter-Korean economic cooperation, some developers are trying to build a new town or industrial complex inside it. This study investigates current environmental conditions, especially deforestation, in the western DMZ using remote sensing and GIS techniques. Land covers were identified through linear spectral mixture analysis (LSMA), which handles the spectral mixture problem caused by the low spatial resolution of Landsat TM and ETM+ imagery. To analyze the quantitative and spatial change of vegetation cover, a GIS overlay method was used. In the LSMA, to develop high-quality fraction images, three endmembers (green vegetation (GV), soil, and water) were derived from pure features in the imagery. Over the 15 years from 1987 to 2002, forest in the western DMZ and vicinity was devastated and converted to urban land, farmland, or barren land. The northern part was more deforested than the southern part: 52.37 km² of North Korean forest and 39.04 km² of South Korean forest changed to other land covers. In the North Korean part, forest changed to barren land and farmland; in the South Korean part, forest changed to farmland and urban area. In the North Korean part of the DMZ and vicinity in particular, 56.15 km² of farmland changed to barren land over the 15 years, which indicates the failure of the "Darakbat" (terraced field) project, one of North Korea's food production projects.
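Linear spectral mixture analysis as used above models each pixel's spectrum as a fraction-weighted sum of endmember spectra (green vegetation, soil, water) and inverts for the fractions. A minimal per-pixel sketch, using a weighted sum-to-one constraint in an ordinary least-squares solve: the six-band reflectance values below are illustrative numbers, not calibrated Landsat endmember spectra.

```python
import numpy as np

# Endmember spectra as columns (green vegetation, soil, water), one row per band.
# Illustrative 6-band reflectances only, not real TM/ETM+ signatures.
E = np.array([[0.05, 0.30, 0.02],
              [0.08, 0.35, 0.03],
              [0.04, 0.40, 0.02],
              [0.50, 0.45, 0.01],
              [0.30, 0.55, 0.01],
              [0.15, 0.50, 0.01]])

def unmix(pixel, E, weight=100.0):
    """Solve pixel ≈ E @ f for endmember fractions f.

    A heavily weighted extra row enforces sum(f) = 1 approximately;
    small negative fractions from noise are clipped to zero.
    """
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0.0, None)

true_f = np.array([0.6, 0.3, 0.1])   # 60% vegetation, 30% soil, 10% water
pixel = E @ true_f                   # noise-free synthetic mixed pixel
f = unmix(pixel, E)
```

Applying `unmix` to every pixel yields the fraction images the study overlays across dates; a drop in the GV fraction between 1987 and 2002 is what registers as deforestation.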


A Study on the Nurses' Contingent Employment and Related Factors (간호사의 비정규직 고용실태 및 관련요인에 관한 연구)

  • Choi, Sook-Ja
    • Journal of Korean Academy of Nursing Administration / v.5 no.3 / pp.477-500 / 1999
  • The Korean labor market has seen a remarkable increase in unemployment and contingent employment since the IMF bailout agreement. One theoretical position explains the increase in contingent employment at hospitals with the notion of flexibility: the high employment flexibility afforded by contingent employees is becoming a very important part of hospitals' new business strategy. The types of contingent employment for nurses are part-time employment, temporary employment, fixed-term employment, and the internship introduced in early 1999. Recently, Korean health care managers have paid attention to customer-oriented service, rationalization of administration, and service quality control so that they can adapt to the external environment. Their efforts concentrate especially on wage reduction through efficient, scientific control of manpower, because wages account for about 40% of total cost. This dissertation aims to describe the phenomenon of contingent employment of nurses and to analyze the related factors and problems; in order: first, to describe the phenomenon of contingent employment of nurses; second, to identify its problems; third, to analyze the factors related to it. To accomplish these goals, a survey was conducted in which 384 questionnaires (66 for manager nurses, 318 for contingent nurses) were distributed to nurses working at 66 hospitals in Seoul with at least 100 beds. Of these, 187 questionnaires (38 from manager nurses, 149 from contingent nurses) were returned. The data were coded and analyzed with t-tests, χ²-tests, analysis of variance (ANOVA), correlation analysis, multiple regression analysis, and logistic regression using the SAS program. The results regarding contingent nurses are as follows: 1.
The average career length at the present hospital is 8.4 months; days on duty per month are 24.2; working time per day is 7.9 hours. These results differ little from regular nurses. 2. Their wage level is about 70% that of regular nurses, except for internship nurses, whose wage level is 41%. Breaking down the wage composition, part-time and internship nurses receive few allowances and little bonus, and contingent nurses receive very low additional pay, except for fixed-term nurses, whose conditions of employment are similar to those of regular nurses. These results show that hospital managers try to reduce labor cost not only through direct wage reduction but also through differential treatment of bonuses, retirement allowances, and other additional pay. 3. The problems of contingent employment are low pay, a high turnover rate, weakening of the union, poor working conditions, a heavy work burden, and inhuman treatment. Contingent nurses consider these problems more serious than manager nurses do; what manager nurses regard as problematic is contingent nurses' lack of a sense of belonging and of responsibility. 4. The factors strongly related to the ratio of contingent to regular nurses are gross nurse turnover; average in-patients per day; starting wage of professional college graduates; type of hospital ownership; number of beds; and the gap between gross newcomer nurses and gross nurse turnover. The factors related to gross wage per month are number of beds; application of health insurance; application of industrial casualty insurance; application of yearly paid leave; type of hospital ownership; average out-patients per day; and gross nurse turnover. The factors that differ meaningfully by employment type are monthly paid leave and physiological leave.
The logistic regression analysis using these two factors shows that monthly paid leave is related to the type of hospital ownership, the number of beds, and average out-patients per day, and that physiological leave is related to gross newcomer nurses, gross turnover of nurses, and the number of beds.
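The logistic regression step described above can be sketched in miniature. The snippet below is a hedged illustration, not the study's SAS analysis: the two features (number of beds and average out-patients per day, both rescaled) and the binary outcome (whether monthly paid leave is provided) are fabricated toy data, and the model is fit with plain batch gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression (bias + weights) by batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
            err = p - yi  # gradient of the log-loss w.r.t. the linear term
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * g / m for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b

def predict_proba(w, b, xi):
    return sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))

# Hypothetical toy rows: [beds / 100, average out-patients per day / 100]
X = [[1.0, 0.5], [1.5, 0.8], [3.0, 2.0], [4.0, 2.5], [0.8, 0.3], [3.5, 2.2]]
y = [0, 0, 1, 1, 0, 1]  # 1 = monthly paid leave provided

w, b = fit_logistic(X, y)
p_small = predict_proba(w, b, [1.0, 0.4])  # small hospital
p_large = predict_proba(w, b, [3.8, 2.3])  # large hospital
```

With these fabricated, separable data the fitted model assigns a higher probability of monthly paid leave to the larger hospital, which is the direction of association the abstract reports for the number of beds.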


A Study on Quality Perceptions and Satisfaction for Medical Service Marketing (의료서비스 마케팅을 위한 품질지각과 만족에 관한 연구)

  • Yoo, Dong-Keun
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.2 no.1
    • /
    • pp.97-114
    • /
    • 1996
  • INTRODUCTION: Service quality is, unlike goods quality, an abstract and elusive construct. Service quality and its requirements are not easily understood by consumers, and they also present some critical research problems. However, quality is very important to marketers and consumers in that it has many strategic benefits, contributing to the profitability of marketing activities and to consumers' problem-solving activities. Moreover, despite the phenomenal growth of the medical service sector, few researchers have attempted to define and model medical service quality. In particular, little research has focused on the evaluation of medical service quality and patient satisfaction from the perspectives of both the provider and the patient. As competition intensifies and patients demand higher quality of medical service, medical service quality and patient satisfaction have emerged as a critical research topic. The major purpose of this article is to explore the concept of medical service quality and its evaluation from both nurse and patient perspectives. This article attempts to achieve its purpose by (1) classifying critical service attributes into three categories (satisfiers, hygiene factors, and performance factors), (2) measuring the relative importance of need criteria, (3) evaluating the SERVPERF and SERVQUAL models in the medical service sector, and (4) identifying the relationship between perceived quality and overall patient satisfaction. METHOD: Data were gathered from a sample of 217 patients and 179 nurses in Seoul-area general hospitals. From a review of previous literature, 50 survey items representing various facets of medical service quality were developed to form a questionnaire. A five-point scale ranging from "Strongly Agree" (5) to "Strongly Disagree" (1) accompanied each statement (expectation statements, perception statements, and importance statements).
To measure overall satisfaction, a seven-point scale was used, ranging from "Very Satisfied" (7) to "Very Dissatisfied" (1), with no verbal labels for scale points 2 through 6. RESULTS: In explaining the relationship between perceived performance and overall satisfaction, only 31 of the original 50 survey items proved statistically significant. Hence, a penalty-reward analysis was performed on these 31 critical attributes, identifying 17 satisfiers, 8 hygiene factors, and 4 performance factors from the patient perspective. The role (category) of each service quality attribute in relation to patient satisfaction was compared across the two groups, that is, patients and nurses. The roles overlapped little, suggesting that the two groups had different sets of 'perceived quality' attributes. Principal components factor analyses of the patients' and nurses' responses were performed to identify the underlying dimensions of the set of performance (experience) statements. 28 variables were analyzed using a varimax rotation after deleting three obscure variables. The number of factors to extract was determined by evaluating the eigenvalue scores. Six factors were extracted, accounting for 57.1% of the total variance. Reliability analysis was performed to refine the factors further; using coefficient alpha, scores of .84 to .65 were obtained. Individual-item analysis indicated that all statements in each of the factors should remain. On 26 of the 31 critical service quality attributes, there were gaps between patients' actual importance of need criteria and nurses' perceptions of them. These critical attributes could be classified into four categories based on the relative importance of need criteria and perceived performance from the patient's perspective. This analysis is useful in developing strategic plans for performance improvement.
(1) Top priorities (high importance, low performance), in this study: more health-related information; accuracy in billing; quality of food; appointments at my convenience; information about tests and treatments; prompt service of the business office; adequacy of accommodations (elevators, etc.). (2) Current strengths (high importance, high performance). (3) Unnecessary strengths (low importance, high performance). (4) Low priorities (low importance, low performance). While 26 service quality attributes of the SERVPERF model were significantly related to patient satisfaction, only 13 attributes of the SERVQUAL model were significantly related. This result suggested that experience-based norms (the SERVPERF model) were more appropriate than expectations (the SERVQUAL model) to serve as a benchmark against which service experiences are compared. However, it must be noted that the degree of association with overall satisfaction was not consistent. There were some gaps between nurse perceptions and patient perceptions of medical service performance. From the patient's viewpoint, "personal likability", "technical skill/trust", and "cares about me" were the most significant positioning factors contributing to patient satisfaction. DISCUSSION: This study shows that there are inconsistencies between nurse perceptions and patient perceptions of medical service attributes. For service quality improvement, it is most important for nurses to understand, through two-way communication, what the satisfiers, hygiene factors, and performance factors are. Patient satisfaction should be measured, and the problems identified should be resolved, for survival in intensely competitive market conditions. Hence, patient satisfaction monitoring is becoming a standard marketing tool for healthcare providers, and its role is expected to increase.
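The four-cell classification in (1)-(4) is simply a threshold rule on (importance, performance) pairs. A minimal sketch follows; the scores and the cut-off values are fabricated for illustration (the study does not report its cut-offs), and only the first two attribute names come from the abstract.

```python
# Hypothetical (importance, performance) scores on a 1-5 scale
attributes = {
    "health-related information": (4.6, 2.1),
    "accuracy in billing":        (4.5, 2.4),
    "personal likability":        (4.7, 4.3),
    "decor of waiting room":      (2.2, 4.1),
    "parking signage":            (2.0, 1.8),
}

def classify(importance, performance, imp_cut=3.5, perf_cut=3.0):
    """Map an attribute to one of the four importance-performance cells."""
    if importance >= imp_cut and performance < perf_cut:
        return "top priority"          # (1) high importance, low performance
    if importance >= imp_cut:
        return "current strength"      # (2) high importance, high performance
    if performance >= perf_cut:
        return "unnecessary strength"  # (3) low importance, high performance
    return "low priority"              # (4) low importance, low performance

grid = {name: classify(i, p) for name, (i, p) in attributes.items()}
```

With the fabricated scores above, "health-related information" and "accuracy in billing" land in the top-priority cell, matching where the study places them.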


A Study on the Goal-Orientation of QI Performers in the Medical Centers (의료기관 QI 담당자의 목표추구몰입에 관한 연구)

  • Kim, Mi-Sook;Park, Jae-Sung
    • The Korean Journal of Health Service Management
    • /
    • v.2 no.1
    • /
    • pp.105-124
    • /
    • 2008
  • The purpose of this research is to provide a database for activating Quality Improvement operations by investigating the status of Quality Improvement operations and finding factors influencing the goal-orientation of QI performers in medical centers of more than one hundred beds that practice Quality Improvement. To reach this purpose, a document study was carried out, grounded in preceding research and compiled statistical data on the status of Quality Improvement performers, and an empirical study was carried out through a questionnaire survey. The subjects of the survey were Quality Improvement performers working in seventy-three medical centers in Pusan-Gyeongnam, Daegu-Gyeongbuk, and Ulsan. Among eighty-three Quality Improvement performers, fifty-five were surveyed, and on those results reliability analysis, factor analysis, and multiple regression analysis were performed using a statistical program. The results of the empirical analysis are as follows. First, among the factors influencing QI performers' commitment to goal pursuit, organization-goal contribution (0.44) had a significant positive effect, while organizational conflict (-0.25) had a significant negative effect. In other words, the higher the organization-goal contribution, the higher the commitment to goal pursuit, and the lower the organizational conflict, the higher the commitment to goal pursuit, which was statistically significant (p<0.05). Second, regarding the goal performance types of QI performers, the process-centered type showed a high level of commitment to goal pursuit, which was statistically significant (p<0.05). Third, regarding the degree of QI performance, the higher the commitment to goal pursuit, the higher the degree of QI performance, which was statistically significant (p<0.05).
In addition, performers who perceived their workplaces as having an organic structure showed a much higher degree of QI performance, which was statistically significant (p<0.05). Generalizing the results of this research, a few suggestions can be offered. First, as competition among medical centers has recently become more severe owing to the medical center evaluation system, medical centers are practicing various Quality Improvement operations across all medical services, in both clinical performance and management performance, to reach the dual goals of cost-cutting and medical quality improvement. Thus, in order to practice Quality Improvement more efficiently in medical centers, it is essential to make use of problem-solving methods and statistical techniques. This requires the willingness of chief executives and the positive attitude and recognition of organization members, as well as the installation of divisions in charge and the assignment of persons in charge, not to mention persistent Quality Improvement training. Second, the divisions in charge of QI carry out Quality Improvement operations at the medical center level and take on the role of coordinating and adjusting the QI performance of various departments. Owing to this role, the division in charge of QI is considered an indispensable organization in the QI operations of medical centers, along with the medical QI committee, while it contributes to the government's goal of reducing gaps in quality among medical centers. Therefore, it is necessary for the government and QI organizations to give institutional support and resources to the QI operations of medical centers, and to supply systematic training and information to the divisions and persons in charge of QI. Third, the assignment of persons in charge should certainly be determined in view of the scale and scope of QI operations in medical centers.
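The multiple regression behind the first finding can be illustrated with ordinary least squares. This is a sketch, not the study's data: the six observations are fabricated to follow commitment = 1 + 0.8 x contribution - 0.2 x conflict exactly, so the solver recovers a positive contribution coefficient and a negative conflict coefficient, mirroring the signs of the reported effects (0.44, -0.25).

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X starts with a 1 for the intercept."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Hypothetical rows: [1, organization-goal contribution, organizational conflict]
X = [[1, 3.0, 4.0], [1, 4.5, 2.0], [1, 2.0, 4.5],
     [1, 5.0, 1.5], [1, 3.5, 3.0], [1, 4.0, 2.5]]
# Commitment to goal pursuit, generated as 1 + 0.8*contribution - 0.2*conflict
y = [2.6, 4.2, 1.7, 4.7, 3.2, 3.7]

b0, b_contrib, b_conflict = ols(X, y)
```

Because the fabricated responses lie exactly on the plane, the solver returns the generating coefficients (1, 0.8, -0.2) up to floating-point error; with real survey data the coefficients would of course carry residual noise and standard errors.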


Performance Analysis of Top-K High Utility Pattern Mining Methods (상위 K 하이 유틸리티 패턴 마이닝 기법 성능분석)

  • Ryang, Heungmo;Yun, Unil;Kim, Chulhong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.6
    • /
    • pp.89-95
    • /
    • 2015
  • Traditional frequent pattern mining discovers valid patterns whose frequency is no smaller than a user-defined minimum threshold in a database. In this framework, too low a threshold may extract an enormous number of patterns, which makes result analysis difficult, while too high a threshold may generate no valid patterns. Setting an appropriate threshold is not an easy task, since it requires prior knowledge of the domain. Therefore, a pattern mining approach that does not depend on domain knowledge became necessary, owing to the framework's inability to predict and control mining results precisely according to the given threshold. Top-k frequent pattern mining was proposed to solve this problem: it mines the top-k important patterns without any threshold setting. With this method, users can find the patterns with the highest through the k-th highest frequency, regardless of the database. In this paper, we provide background on both frequent and top-k pattern mining. Although top-k frequent pattern mining extracts the top-k significant patterns without threshold setting, it can consider neither item quantities in transactions nor the relative importance of items in the database, which is why the method cannot meet the requirements of many real-world applications. That is, in such applications patterns with low frequency can be meaningful, and vice versa. High utility pattern mining was proposed to reflect these characteristics of non-binary databases, but it still requires a minimum threshold. Recently, top-k high utility pattern mining has been developed, through which users can mine the desired number of high utility patterns without prior knowledge. In this paper, we analyze two algorithms related to top-k high utility pattern mining in detail.
We also conduct various experiments with the algorithms on real datasets and, through performance analysis of the experimental results, study points of improvement and directions for the development of top-k high utility pattern mining.
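The utility criterion described above can be made concrete with a deliberately naive sketch. This brute-force version enumerates every itemset, which is exponential in the number of distinct items, so it only illustrates the ranking criterion; the algorithms analyzed in the paper rely on pruning strategies instead. The transactions, quantities, and unit profits below are fabricated.

```python
from itertools import combinations

# Toy transaction database: each transaction maps item -> purchase quantity
transactions = [
    {"a": 2, "b": 1, "c": 3},
    {"a": 1, "c": 2, "d": 1},
    {"b": 4, "c": 1},
    {"a": 3, "b": 2, "d": 2},
]
# External utility (relative importance, e.g. unit profit) per item
profit = {"a": 5, "b": 2, "c": 1, "d": 8}

def utility(itemset, tx):
    """Utility of an itemset in one transaction: 0 unless every item occurs,
    otherwise the sum of quantity * unit profit over the itemset's items."""
    if not all(i in tx for i in itemset):
        return 0
    return sum(tx[i] * profit[i] for i in itemset)

def topk_high_utility(transactions, k):
    """Brute-force top-k high utility itemset mining: score every itemset
    by its total utility over the database and keep the k best.
    Note that no minimum utility threshold is needed, only k."""
    items = sorted({i for tx in transactions for i in tx})
    scored = []
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            total = sum(utility(itemset, tx) for tx in transactions)
            scored.append((total, itemset))
    scored.sort(key=lambda t: (-t[0], t[1]))  # utility desc, ties lexicographic
    return scored[:k]

top3 = topk_high_utility(transactions, 3)
```

On this toy database the winner is {a, d}: it is infrequent (two transactions) but carries high quantities of high-profit items, which is exactly the case frequency-based mining misses.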