• Title/Summary/Keyword: 우수시스템


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining techniques have evolved to handle unstructured documents in a variety of applications. Sentiment analysis, which distinguishes low-quality from high-quality content in the text data surrounding a product, has become an important and fast-growing branch of text mining. It typically analyzes people's opinions by assigning text to predefined categories such as positive and negative, and has been studied from many angles in terms of accuracy, from simple rule-based methods to dictionary-based approaches using predefined labels. Sentiment analysis is now one of the most active research areas in natural language processing. Online reviews are easy to collect openly and directly affect business: in marketing, real-world customer opinions are gathered from websites rather than surveys, and whether the posts on a website are positive or negative is reflected in sales. However, many reviews on a website are unreliable and difficult to classify. Early studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. A persistent weakness is accuracy, because sentiment scores change with the subject, the paragraph, the orientation of the sentiment lexicon, and sentence strength. This study classifies sentiment polarity into positive and negative categories and aims to increase the prediction accuracy of the polarity analysis using the IMDB movie review data set. 
First, as comparative models, we adopt popular machine learning algorithms for text classification: NB (Naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN processing a sentence in vector form behaves somewhat like a bag-of-words model and does not consider the sequential nature of the data. An RNN handles order well because it takes the time structure of the data into account, but it suffers from the long-term dependency problem; the LSTM was introduced to solve that problem. For comparison, CNN and LSTM were chosen as the simple deep learning models, and the integrated model was analyzed alongside the classical machine learning algorithms. Although these algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and tried to understand how well, and why, each model works for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and is massively parallel, whereas an LSTM is not capable of highly parallel processing. Like valves, the LSTM's input, output, and forget gates can be opened and controlled at the desired time, and these gates have the advantage of placing memory blocks on the hidden nodes. The LSTM's memory block does not store all of the data, but it resolves the long-term dependency problem that the CNN cannot address. 
Furthermore, when the LSTM follows the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and it was more accurate than the other models. In addition, the word embedding layer can improve as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each model and benefits from layer-by-layer learning in an end-to-end structure. For these reasons, this study uses the integrated CNN-LSTM model to improve the classification accuracy of movie reviews.
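The pipeline the abstract describes (convolution to extract local features, pooling, then an LSTM reading the pooled sequence, ending in a sigmoid read-out) can be sketched in plain NumPy. All dimensions, weights, and the toy "review" below are illustrative assumptions, not the paper's actual architecture or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, W, b):
    """Valid 1-D convolution over time: x (T, d), W (k, d, f) -> (T-k+1, f), with ReLU."""
    k = W.shape[0]
    T = x.shape[0] - k + 1
    out = np.stack([np.tensordot(x[t:t + k], W, axes=([0, 1], [0, 1])) + b
                    for t in range(T)])
    return np.maximum(out, 0.0)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM step; gate order in the packed weights: input, forget, candidate, output."""
    z = x @ Wx + h @ Wh + b
    H = h.size
    i, f, g, o = z[:H], z[H:2 * H], z[2 * H:3 * H], z[3 * H:]
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    c = f * c + i * np.tanh(g)          # forget old memory, write new candidate
    h = o * np.tanh(c)                  # expose gated memory as the hidden state
    return h, c

T, d, k, F, H = 12, 8, 3, 6, 5          # tokens, embed dim, kernel, filters, hidden size
x = rng.normal(size=(T, d))             # stand-in word embeddings of one review
feat = conv1d(x, rng.normal(size=(k, d, F)) * 0.1, np.zeros(F))   # (10, F)
pooled = feat.reshape(5, 2, F).max(axis=1)                        # max-pool window 2 -> (5, F)
Wx = rng.normal(size=(F, 4 * H)) * 0.1
Wh = rng.normal(size=(H, 4 * H)) * 0.1
h, c = np.zeros(H), np.zeros(H)
for t in range(pooled.shape[0]):        # the LSTM reads the pooled CNN features in order
    h, c = lstm_step(pooled[t], h, c, Wx, Wh, np.zeros(4 * H))
p_positive = 1 / (1 + np.exp(-(h @ rng.normal(size=H))))          # sigmoid polarity score
```

The spatial features come from the convolution and pooling, and the temporal structure of the pooled sequence is handled by the recurrent LSTM pass, which is the division of labor the abstract attributes to the combined model.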

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated service that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since the first products appeared on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because Robo-Advisor algorithms present asset allocations to investors, they rely on mathematical or statistical asset allocation strategies. The mean-variance optimization model developed by Markowitz is the typical example: a simple but intuitive strategy in which assets are allocated to minimize portfolio risk while maximizing expected portfolio return. Despite its theoretical background, both academics and practitioners find that the standard mean-variance portfolio is very sensitive to expected returns estimated from past price data, and corner solutions that allocate to only a few assets are common. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, yielding new estimates of risk and expected return, which in turn produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor holds no views on the asset classes, the Black-Litterman model reproduces the market portfolio. But what if the subjective views are incorrect? 
Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may give Black-Litterman users very poor portfolios. This paper proposes an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K, and price parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which become the inputs of the intelligent views model. Price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix and the probability results are used in the Q matrix. The implied equilibrium returns vector is then combined with the intelligent views matrix, producing the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect 8 KOSPI 200 sector indexes from January 2008 to December 2018, 132 monthly index values in total; the training period is 2008 to 2015 and the testing period is 2016 to 2018. The suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio, and in the out-of-sample period it outperformed the well-known Markowitz mean-variance portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value. 
The maximum drawdown is -20.8%, also the lowest value, and the Sharpe ratio, which measures return per unit of risk, is the highest at 0.17. Overall, the suggested view model shows that practitioners could replace subjective analysts' views with an objective view model when applying Robo-Advisor asset allocation algorithms in real trading.
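The Black-Litterman combination step the abstract walks through (reverse optimization for implied equilibrium returns, then a Bayesian blend with the views in P and Q) can be sketched as follows. The covariance matrix, market weights, the single view, and the delta, tau, and Omega choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

def black_litterman(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05):
    """Posterior expected returns from the Black-Litterman model.

    Sigma : (n, n) asset covariance matrix
    w_mkt : (n,) market-capitalization weights
    P, Q  : (k, n) view pick matrix and (k,) expected view returns
    """
    # reverse optimization: implied equilibrium returns of the market portfolio
    pi = delta * Sigma @ w_mkt
    # view uncertainty; diag(tau * P Sigma P') is a common heuristic choice
    Omega = np.diag(np.diag(tau * P @ Sigma @ P.T))
    A = np.linalg.inv(tau * Sigma)
    # precision-weighted Bayesian blend of equilibrium returns and views
    post = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                           A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
    return pi, post

# toy 3-asset example with one view: asset 0 outperforms asset 1 by 2%
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])
P = np.array([[1.0, -1.0, 0.0]])
Q = np.array([0.02])
pi, post = black_litterman(Sigma, w_mkt, P, Q)
```

With no views (an empty P and Q), the posterior collapses to the implied equilibrium returns, which is the "reproduces the market portfolio" property mentioned above; with a view, the posterior spread between the named assets is pulled from the equilibrium value toward Q.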

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su; Jeong, Seung Hwan; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however, and bad debt began to increase again after the 2009 global financial crisis because of the real economic recession. NPLs have become a major investment class in recent years as capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has attracted considerable attention, even overheating recently, research on it is scarce because the history of capital market investment in domestic NPLs is short. In addition, declining profitability and price swings driven by fluctuations in the real estate business call for more scientific and systematic decision-making. In this study, we propose a prediction model that determines whether a benchmark yield will be achieved, using NPL market data, in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, with 2,291 items in total. As independent variables, from the 11 variables describing the characteristics of the underlying real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and decision trees. Seven independent variables were chosen: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is binary, indicating whether the benchmark rate of return is reached. 
We use a binary target because models predicting binomial variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the model's usefulness. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make the decision. To check whether 12%, the standard rate of return used in the industry, is a meaningful reference value, we constructed and compared predictive models with the dependent variable computed at different thresholds. The average hit ratio of the model built with the 12% threshold was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built models with five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model) and compared them. Ten sets of training and testing data were extracted using the 10-fold validation method; after building the models, the hit ratios of the sets were averaged and performance was compared. The average hit ratios of the discriminant analysis, logistic regression, decision tree, artificial neural network, and genetic algorithm linear models were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study thus shows that the 7 independent variables combined with an artificial neural network prediction model are effective for the NPL market. 
The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will gain liquidity as transactions proceed at appropriate prices.
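The evaluation protocol described above (10-fold splits, a hit ratio per fold, then the average) can be sketched with synthetic data. The nearest-centroid stand-in classifier, the feature construction, and the labels are assumptions for illustration only, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def ten_fold_hit_ratio(X, y, n_folds=10):
    """Average hit ratio over n_folds train/test splits, with a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    hits = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        # one centroid per class, estimated on the training fold
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[f] - c0, axis=1)
        d1 = np.linalg.norm(X[f] - c1, axis=1)
        pred = (d1 < d0).astype(int)          # predict the nearer class
        hits.append((pred == y[f]).mean())    # hit ratio on the held-out fold
    return float(np.mean(hits))

# synthetic stand-in for the 2,291 items x 7 features, with a binary
# "benchmark return reached" label correlated with the first two features
X = rng.normal(size=(2291, 7))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2291) > 0).astype(int)
avg_hit = ten_fold_hit_ratio(X, y)
```

Any of the five methodologies compared in the study could be dropped in place of the nearest-centroid step; the fold splitting and hit-ratio averaging stay the same.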

The Present Status and the Preservation Method of the Rice Terrace as Scenic Sites Resources in Northeast Asia (동북아시아 계단식 논의 명승지정 현황 및 보전방안)

  • Youn, Kyung-Sook; Lee, Chang-Hun; Kim, Hyung-Dae; Seo, Woo-Hyun; Lee, Jae-Keun
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.29 no.4 / pp.111-123 / 2011
  • This study presents basic materials for preserving and continuously studying Korean rice terraces as scenic sites resources, based on a survey of the present status and preservation methods of rice terraces in Korea, China, and Japan. The results are as follows. First, the rice terrace embodies a traditional agricultural technique that minimizes damage to the scenic view while cultivating slopes, and it has the value of one of Korea's unique traditional landscapes. However, abandoned or disappearing terraces are increasing because terrace cultivation is economically disadvantageous. The government therefore needs to lay the groundwork for discovering scenic resources by establishing a database of Korean rice terraces before they disappear, and terraces of high cultural and scenic value should be managed through designation as scenic sites or monuments. Second, domestic and foreign literature was reviewed and analyzed to understand the characteristics of Korean rice terraces. The dictionary meaning of "rice terrace" differs slightly by nation, but generally it denotes terraced land created by cultivating a slope. Rice terraces are closely tied to mountain valleys and piedmont slopes in their locational conditions, their origin is estimated at roughly 3,000 to 6,000 years ago, and by topography they can be divided into slope and valley types. The natural element of the surrounding forest also plays a very large role. At present, however, rice terraces are designated and managed as scenic sites only under the Cultural Properties Protection Law. 
More binding force and effectiveness are needed to establish rice terrace landscape plans through scenery laws, farming and fishing village laws, and the like; a law specifically for rice terraces should be enacted as soon as possible for their efficient preservation and management. Third, rice terraces in Korea, China, and Japan were surveyed and analyzed. Korea and Japan have rice terraces of distinctive character, and in all three countries areas of high scenic value, including the most scenic terraces in China, are well managed under cultural properties protection laws and actively protected through scenic site designation, administered by the management departments of each autonomous district. The related laws differ slightly by nation, but their basic purpose is the same: the preservation and protection of outstanding scenery. Particularly notable are the linkage between Japan's cultural properties law and its scenery law, and China's autonomous district legislation and its effectiveness; the Korean government needs these elements to preserve Korea's rice terrace culture and scenery. Fourth, for efficient preservation of rice terraces we should introduce Japan's owner system and a policy of rice terrace preservation promotion associations. Under the Japanese owner system, landowners are permitted to rent their land to a rice terrace preservation promotion association and the local government, and villages are revitalized through communal management of the terraces, beautification of the surrounding areas, and similar activities. A village-level management and operating system for the terraces should also be created through public education. Residents could then receive prompt help from a consultative body of experts such as representatives of the Cultural Heritage Administration and professors, and revitalizing the region through exchanges between cities and villages is an urgent task.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung; Joo, Jihwan; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems; nevertheless, there are still few realized business models based on big data analysis. In this situation, this paper develops a new business model for the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We use internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement the limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. 
Since the 1980s, researchers in Korea have also studied the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) used artificial neural networks for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper reach about 70% classification accuracy on the entire sample: LightGBM shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Within these intervals, the logit model has the highest accuracy, 100%, for predicted default probabilities of 0-10%, but a relatively low accuracy of 61.5% for predicted default probabilities of 90-100%. 
On the other hand, Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they show high accuracy for both the 0-10% and 90-100% intervals of predicted default probability, with lower accuracy around the 50% interval. As for the distribution of samples across intervals, both LightGBM and XGBoost place relatively many samples in the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be the more desirable models because they classify a large number of cases into the two extreme intervals, even allowing for their somewhat lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, Random Forest and LightGBM show good results, and logistic regression performs worst. Still, each predictive model has a comparative advantage under some evaluation standard; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models through majority voting could maximize overall performance.
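The interval analysis described above (splitting predicted default probabilities into ten equal intervals and checking classification accuracy within each) might look roughly like this. The synthetic labels and probabilities are illustrative assumptions, deliberately constructed so that the extreme intervals are easy and the middle is mixed, echoing the pattern the abstract reports.

```python
import numpy as np

def accuracy_by_decile(p_hat, y):
    """Per-interval accuracy: (lo, hi, count, accuracy) for ten equal probability bins."""
    edges = np.linspace(0.0, 1.0, 11)
    rows = []
    for i in range(10):
        lo, hi = edges[i], edges[i + 1]
        # last bin is closed on the right so p_hat == 1.0 is not dropped
        m = (p_hat >= lo) & ((p_hat < hi) if i < 9 else (p_hat <= hi))
        if m.any():
            pred = (p_hat[m] >= 0.5).astype(int)
            acc = float((pred == y[m]).mean())
        else:
            acc = float("nan")
        rows.append((lo, hi, int(m.sum()), acc))
    return rows

rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=1000)                 # toy default labels
# mildly informative scores: defaults get probabilities in [0.3, 1], others in [0, 0.7]
p_hat = np.clip(0.3 * y + 0.7 * rng.random(1000), 0.0, 1.0)
table = accuracy_by_decile(p_hat, y)
```

With this construction, the 0-10% and 90-100% bins contain only one class and are classified perfectly, while the middle bins mix the two classes, which is the shape of the results discussed above.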

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong; Jun, Seung-Pyo; Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and it shows excellent performance in both community detection and capturing structural equivalence. The embedding assigns each node a vector learned from fixed-length walks started at arbitrarily designated nodes, so the resulting node sequences are easy to feed as input to downstream models such as Logistic Regression, Support Vector Machines, and Random Forest. Building on these features of the Node2vec graph embedding method, this study applies it to international trade data for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification of Korea's position in the industry's global value chain. The optimal predictive model derived in this study records a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, excellent performance that surpasses the Logistic Regression binary classifier set as the baseline model, which records a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model derived in this study outperforms the link prediction model of previous studies, used here as a benchmark. 
The benchmark model of the previous study records a recall of only 0.75, while the proposed model reaches 0.79. The performance difference comes from the model training strategy: in this study, trades were grouped by trade value, and prediction models were trained differently for each group. Specifically, we compared (1) randomly masking trades and training the model without any condition on trade value, (2) randomly masking some of the trades with above-average trade value and training the model, and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirm that the model trained by randomly masking some of the above-average trades performed best and most stably. Additional investigation showed that most of the potential export candidates for Korea derived from this model are appropriate. Taken together, this study demonstrates the practical utility of link prediction with Node2vec and LightGBM, and it yields useful implications for weight update strategies that improve link prediction during training. The study also has policy utility, because link prediction based on graph embedding has rarely been applied to trade transactions; its results support rapid responses to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe it is useful as a tool for policy decision-making.
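A single node2vec-style biased random walk, the building block behind the embeddings used above, can be sketched as follows. The toy country graph, the p and q values, and the walk length are illustrative assumptions; in practice, many such walks would be fed to a skip-gram model to produce the node vectors that the downstream LightGBM classifier consumes.

```python
import random

def node2vec_walk(graph, start, length, p=1.0, q=0.5):
    """One biased walk: p controls returning to the previous node, q controls exploring outward."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if not nbrs:
            break
        if len(walk) == 1:                # first step is unbiased
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:                 # step back to where we came from
                weights.append(1.0 / p)
            elif x in graph[prev]:        # common neighbor of prev and cur (BFS-like)
                weights.append(1.0)
            else:                         # move outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(nbrs, weights=weights, k=1)[0])
    return walk

# toy trade graph: countries as nodes, existing trade links as undirected edges (illustrative)
G = {"KOR": ["USA", "CHN", "VNM"],
     "USA": ["KOR", "CHN"],
     "CHN": ["KOR", "USA", "VNM"],
     "VNM": ["KOR", "CHN"]}
random.seed(1)
walks = [node2vec_walk(G, n, length=8) for n in G for _ in range(10)]
```

Tuning p and q is what lets the embedding interpolate between community structure and structural equivalence, the property of node2vec that the abstract highlights.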

Immunomodulatory Effects of Components Derived from Korean Red Ginseng (홍삼 유래 성분들의 면역조절 효능)

  • Jo, Jae-Yeol
    • Food preservation and processing industry / v.8 no.2 / pp.6-12 / 2009
  • The immune response is one of the major homeostatic mechanisms that protects the body from external infectious agents and eliminates them. These responses are mediated by immune cells that are generated in the bone marrow and mature in the spleen, thymus, and lymph nodes. Representative immune cells include macrophages and dendritic cells, which mediate the innate immunity present from birth, and T lymphocytes, which mediate the acquired immunity built up through long experience with diverse antigens. Various immune diseases have recently become major causes of mortality: cancer, diabetes, and cerebrovascular diseases have been reported to arise from acute and chronic inflammation in the body, spurring the development of therapeutics for immune cell-mediated inflammatory diseases, and the rapid increase in cancer patients has intensified demand for strengthening immunity, the main defense against carcinogenesis. Korean ginseng and red ginseng, used since ancient times, are representative Korean traditional herbal medicines known as remedies that protect qi and restore vitality. Red ginseng in particular has been reported to promote protein and nucleic acid synthesis and to be highly effective for hematopoiesis, recovery of liver function, lowering of blood glucose, enhancement of exercise capacity, improvement of memory, anti-fatigue action, and enhancement of immunity. Compared with the volume of research on red ginseng, however, molecular-level studies of its immune-enhancing effects remain scarce. Administration of red ginseng has been shown to increase the activity of NK cells and macrophages and to enhance the tumor cell-killing effect of anticancer drugs. The main immunostimulatory components identified so far are acidic polysaccharides, while anti-inflammatory effects have been confirmed for some ginsenosides, suggesting therapeutic effects on skin inflammation and arthritis. [This work was supported by a KT&G research grant (2009-2010), which the author gratefully acknowledges.] The immune response is an important biological defense that removes and repairs the disease environment induced by invading foreign substances; its main role is to scavenge or destroy toxic materials, such as microorganisms and fine chemical substances, that enter the body. Defense against foreign substances entering the body is currently explained in terms of two kinds of immune response: innate immunity and adaptive immunity. Innate immunity comprises 1) anatomical barriers such as the surface of the skin and mucous membranes; 2) physiological defenses such as body temperature, low pH, and chemical mediators (lysozyme, the collectins); 3) phagocytic/endocytic defense by phagocytes (macrophages, dendritic cells, neutrophils, and the like); and 4) immune responses that resist infection through inflammation. Adaptive immunity, also called acquired immunity, has four characteristics, namely specificity, diversity, memory, and self/non-self recognition, and is divided, according to how foreign material is removed, into the humoral immune response and the cell-mediated immune response. Humoral immunity consists of reactions with B cell-derived antibodies generated specifically against the structure of the invading antigen, and reactions mediated by serum complement synthesized and secreted by the liver, macrophages, and other cells. The cell-mediated immune response arises from cell-cell interactions mediated by T helper cells (CD4+), cytotoxic T cells (CD8+), B cells, and antigen-presenting cells. Inflammation, a form of innate immunity, is one of the most frequent defensive actions in the body: in a cold, for example, macrophages and dendritic cells in the patient's tonsils mount various inflammatory responses against the infecting virus, alone or together with co-infecting bacteria. 
Likewise, when a wound occurs, an immunological battle takes place between pathogenic bacteria entering through the site of infection and the innate immune cells in the surrounding tissue. When nearby cells or tissues are damaged in this process, the immune cells (mainly phagocytes) immediately induce a series of inflammatory reactions to minimize the damage and restore the injured area. These reactions appear as the familiar symptoms of redness, swelling, heat, and pain: blood flow through the capillaries around the damaged area increases and the vessels dilate, producing redness of the tissue, while the engorged vessels cause heat and swelling. The increased permeability of the dilated capillaries drives fluid and cells from the vessels into the tissue, and the accumulated exudate raises the local protein concentration, drawing still more fluid out of the vessels and forming edema. Finally, immune cells in the vessels adhere to the vessel wall (margination) and, helped by chemical mediators that widen the gaps in the wall, such as histamine, nitric oxide (NO), prostaglandins (PGE2), and leukotrienes, pass through it (extravasation) and migrate to the damaged site, where they mediate inflammation by directly destroying the invaders or by secreting cytokines (tumor necrosis factor [TNF]-α, interleukin [IL]-1, IL-6, and others) or chemokines (MIP-1, IL-8, MCP-1, and others) that recruit additional immune cells. Among these mediators, PGE2, NO, and TNF-α are easy to assay experimentally, so these mediators and the enzymes that produce them (cyclooxygenase [COX], nitric oxide synthase [NOS], and others) are major targets in current anti-inflammatory drug development. Inflammation is classified by duration into acute and chronic inflammation, and by the type of exudate into serous, fibrinous, purulent, and hemorrhagic inflammation. Acute inflammation is the common inflammatory response lasting days to weeks; the local reaction is characterized by the cardinal signs of heat, redness, swelling, pain, and loss of function, and because vascular changes and exudate formation dominate the microscopic findings it is also called exudative inflammation. Chronic inflammation either follows acute inflammation or begins as chronic, usually persisting for four weeks or longer. In ordinary inflammation, the production of the pro-inflammatory Th1 cytokines (IL-2, interferon [IFN]-γ, TNF-α, and others) is followed almost immediately by the anti-inflammatory Th2 cytokines (IL-4, IL-6, IL-10, transforming growth factor [TGF]-β, and others), restoring the normal state; but if, for any reason, the immune cells fail to remove the cause of inflammation, the condition progresses to chronic inflammation, in which the main inflammatory cells are monocytes, macrophages, lymphocytes, and plasma cells. Cancer is an immune disease and the leading cause of death worldwide. When oxidative stress, ultraviolet irradiation, or carcinogens cause mutations or deletions in protooncogenes, tumor-suppressor genes, or DNA repair genes on the chromosomes, normal cells begin the process of carcinogenesis. 
About five to ten years after the benign stage, malignant cancer cells arise and metastasize in search of new environments, so that cancer patients come to carry, in multiple organs, tumors formed by cells of the same origin; these tumor cells impair the function of normal organs and ultimately cause death. Almost all such mutation-derived cancer cells are known to be killed by the body's immune system, but continued stress or exposure to carcinogens destroys the immune system, breaching this last line of defense and leaving the body unprotected against cancer. For this reason, strategies that keep the immune system operating normally and enhance it are recognized as a very important target in cancer prevention, and diverse immune-enhancing substances are being developed. Ginseng, a perennial herb of the family Araliaceae, is a representative traditional herbal medicine long used in oriental and folk medicine to restore vitality and treat various diseases. It has been handed down from ancient times as a famous medicine for longevity, vitality, and lightness of body, a reputation derived from the Shennong Bencao Jing (神農本草經) of China some 2,000 years ago, which states that ginseng "nourishes the five viscera, calms the spirit, settles the soul, stops palpitations, expels pathogenic factors, brightens the eyes, opens the mind and increases wisdom, and with long use lightens the body and prolongs life." Many studies have found Korean ginseng (Panax ginseng) to be the most efficacious, and Korean red ginseng prepared from it is reported to be outstanding worldwide. Most of the efficacy of red ginseng is attributed to the ginseng saponins called ginsenosides, dammarane-type triterpenoids, which are classified by their backbone into the protopanaxadiol (PD) series (22 kinds) and the protopanaxatriol (PT) series (10 kinds) (Table 1). Despite many experimental efforts to understand the pharmacology of ginseng, much remains poorly understood; the studies to date report that ginseng is effective in areas such as cardiovascular disease, diabetes, anticancer activity, and anti-stress, while research on immunomodulation and inflammation is still limited but is recognized as an area to be studied broadly in the future.


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for housing computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause of failure is difficult to determine. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring inside the server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. This is largely because server failures rarely occur in isolation: one server's failure can cause failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After constructing sequences from the devices that failed at the same time, the 5 devices that most frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, reflecting the fact that the severity of multiple failures differs across servers. This algorithm improves prediction accuracy by assigning greater weight to servers that have a greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was analyzed both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing a threshold for each server.
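The 5-minute co-occurrence rule described above can be sketched as a simple grouping pass over a chronologically sorted failure log. The equipment names and timestamps below are hypothetical placeholders, not data from the study.

```python
from datetime import datetime, timedelta

# Hypothetical failure log: (equipment id, failure timestamp), sorted by time.
failures = [
    ("server-01", datetime(2020, 3, 1, 10, 0)),
    ("switch-02", datetime(2020, 3, 1, 10, 3)),
    ("db-01",     datetime(2020, 3, 1, 10, 4)),
    ("server-07", datetime(2020, 3, 1, 12, 30)),
]

def group_simultaneous(events, window=timedelta(minutes=5)):
    """Group failures whose timestamps fall within `window` of the first
    event in the current group (the paper's simultaneity definition)."""
    groups, current = [], []
    for device, ts in events:
        if current and ts - current[0][1] > window:
            groups.append(current)
            current = []
        current.append((device, ts))
    if current:
        groups.append(current)
    return groups

groups = group_simultaneous(failures)
# The first group contains the three failures within 5 minutes of each other;
# the isolated failure at 12:30 forms its own group.
```

Counting how often each device appears together across such groups would then yield the five most frequently co-failing devices that the study selects.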
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and presents a model that predicts failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
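The per-server threshold optimization mentioned above can be sketched as a sweep over candidate thresholds on each server's validation-set failure probabilities, keeping the threshold that maximizes F1. The server names, probabilities, and labels below are hypothetical, and the F1 criterion is an assumption; the paper does not state which metric was optimized.

```python
def f1(y_true, y_pred):
    """F1 score for binary labels (no external dependencies)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def best_threshold(probs, labels, candidates=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Return the candidate threshold with the highest F1 on this server."""
    scored = [(f1(labels, [int(p >= t) for p in probs]), t) for t in candidates]
    return max(scored)[1]

# Hypothetical validation probabilities and true failure labels per server.
server_probs  = {"srv-A": [0.2, 0.45, 0.8, 0.55], "srv-B": [0.1, 0.35, 0.9, 0.6]}
server_labels = {"srv-A": [0, 1, 1, 1],           "srv-B": [0, 0, 1, 1]}

thresholds = {s: best_threshold(server_probs[s], server_labels[s])
              for s in server_probs}
```

Each server thus gets its own operating point instead of a shared 0.5 cutoff, which is the mechanism the second experiment credits for the accuracy gain.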

Comparison of marginal fit before and after porcelain build-up of two kinds of CAD/CAM zirconia all-ceramic restorations (두 종류의 CAD/CAM 지르코니아 전부도재관의 도재 축성 전후의 변연적합도 비교)

  • Shin, Ho-Sik;Kim, Seok-Gyu
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.5
    • /
    • pp.528-534
    • /
    • 2008
  • Purpose: Marginal fit is one of the important components of a successful prosthodontic restoration. A poorly fitting margin causes hypersensitivity, secondary caries, and plaque accumulation, which later result in prosthodontic failure. CAD/CAM zirconia all-ceramic systems such as $LAVA^{(R)}$ (3M ESPE, St. Paul, MN) and $EVEREST^{(R)}$ (KaVo Dental GmbH, Biberach, Germany) were recently introduced in Korea, and it is clinically meaningful to evaluate how the marginal fit of these systems changes before and after porcelain build-up. The purpose of this study was to compare the marginal fit of the two CAD/CAM all-ceramic systems with that of the ceramometal restoration, before and after porcelain build-up. Material and methods: A maxillary first premolar dentiform tooth was prepared with 2.0 mm occlusal reduction, 1.0 mm axial reduction, a chamfer margin, and a 6-degree taper of the axial wall. The prepared dentiform die was duplicated into a metal abutment die, the metal die was placed in a dental study model, and full-arch impressions of the model were made. Twenty-four copings in 3 groups ($LAVA^{(R)}$, $EVEREST^{(R)}$, and ceramometal restorations) were fabricated. Each coping was cemented on the metal die with color-mixed Fit-checker $II^{(R)}$ (GC Corp., Tokyo, Japan). The marginal opening of each coping was measured with the $Microhiscope^{(R)}$ system (HIROX KH-1000 ING-Plus, Seoul, Korea; X300 magnification). After porcelain build-up, the marginal openings of the $LAVA^{(R)}$, $EVEREST^{(R)}$, and ceramometal restorations were evaluated in the same way. Statistical analysis was done with the paired t-test and one-way ANOVA. Results: In the coping state, the mean marginal opening was $52.00{\pm}11.94\;{\mu}m$ for $EVEREST^{(R)}$ restorations, $56.97{\pm}10.00\;{\mu}m$ for $LAVA^{(R)}$ restorations, and $97.38{\pm}18.54\;{\mu}m$ for ceramometal restorations.
After porcelain build-up, the mean marginal opening was $61.69{\pm}19.33\;{\mu}m$ for $EVEREST^{(R)}$ restorations, $70.81{\pm}12.99\;{\mu}m$ for $LAVA^{(R)}$ restorations, and $115.25{\pm}23.86\;{\mu}m$ for ceramometal restorations. Conclusion: 1. $LAVA^{(R)}$ and $EVEREST^{(R)}$ restorations showed significantly better marginal fit than ceramometal restorations, both in the coping state and after porcelain build-up (P < .05). 2. The mean marginal opening values of $LAVA^{(R)}$ and $EVEREST^{(R)}$ restorations did not differ significantly, either in the coping state or after porcelain build-up (P > .05). 3. $EVEREST^{(R)}$, $LAVA^{(R)}$, and ceramometal restorations all showed slightly increased marginal openings after porcelain build-up, but the increases were not statistically significant (P > .05).
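The paired t-test used above compares each specimen's marginal opening before and after porcelain build-up. A minimal sketch of the computation follows; the eight measurements per group are hypothetical, since the abstract reports only the group means and standard deviations.

```python
import math

# Hypothetical paired marginal-opening measurements (micrometers) for one
# group of eight copings, before and after porcelain build-up.
coping   = [50.0, 55.0, 48.0, 60.0, 52.0, 58.0, 49.0, 54.0]
built_up = [58.0, 62.0, 55.0, 71.0, 60.0, 66.0, 57.0, 63.0]

# Paired t-test operates on per-specimen differences.
diffs = [b - a for a, b in zip(coping, built_up)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

# t statistic with n - 1 = 7 degrees of freedom; compare against the
# two-tailed critical value (2.365 at P = .05) to judge significance.
t_stat = mean_d / (sd_d / math.sqrt(n))
```

In practice a library routine such as `scipy.stats.ttest_rel` would return both the statistic and the P value directly; the manual version just makes the pairing explicit.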

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults ripple beyond a bankrupt company's stakeholders, including managers, employees, creditors, and investors, to the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even afterward, the analysis of past corporate defaults remained focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables driving corporate default vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. The studies carried out in the past, however, use static models, and most do not consider the changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for this time-dependent bias with a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
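The 7/2/1-year temporal split described above can be sketched directly as year-based filtering over annual cross-sections. The record fields are hypothetical placeholders; only the year boundaries (2000-2006, 2007-2008, 2009) come from the study.

```python
# Ten annual cross-sections of firm-year records (placeholder firms).
records = [{"year": y, "firm": f"firm-{i}"}
           for y in range(2000, 2010) for i in range(3)]

# Temporal split: 7 training years (pre-crisis), 2 validation years
# (covering the financial crisis), and 1 test year.
train = [r for r in records if 2000 <= r["year"] <= 2006]
valid = [r for r in records if 2007 <= r["year"] <= 2008]
test  = [r for r in records if r["year"] == 2009]
```

Splitting by year rather than by random sampling keeps later observations out of the training set, which is what lets the validation period exercise the models against the crisis regime.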
To construct a bankruptcy model that stays consistent through time, we first train the deep learning time series models using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007-2008). As a result, we obtain a model whose behavior matches the training results and which shows excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the preceding nine years, demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust default prediction across all three bundles of variables. The definition of bankruptcy follows Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are then compared. Corporate data, however, suffers from nonlinear variables, multi-collinearity among variables, and a lack of data.
The logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still at an early stage, deep learning is much faster than regression analysis for default prediction modeling and delivers better predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.