• Title/Summary/Keyword: time system

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and Support Vector Machines (SVM). In particular, SVM is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM is a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the class boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but their performance on multi-class problems does not match that of SVM on binary-class problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class classification, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the instances of one class greatly outnumber those of another. Such data sets often produce a default classifier with a skewed decision boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach that copes with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers, increasing the weights on misclassified observations at each iteration. Observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted.
Thus, boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of misclassified observations from the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Because MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can carry out the learning process while considering geometric mean-based accuracy and error across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not arise by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers across the 30 folds differs significantly. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
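
The abstract does not give MGM-Boost's exact weight-update rule, so the sketch below pairs a standard AdaBoost.M1 loop over SVM base learners with a geometric mean-based accuracy metric; treat the metric placement and all parameter values as illustrative assumptions rather than the paper's method.

```python
# Sketch: AdaBoost.M1-style boosting over SVM base learners, evaluated with the
# geometric mean of per-class recalls. The weight update follows standard
# AdaBoost.M1, not MGM-Boost's variant (unspecified in the abstract).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def boost_svm(X, y, n_rounds=10):
    n = len(X)
    w = np.full(n, 1.0 / n)                    # observation weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf").fit(X, y, sample_weight=w * n)
        miss = clf.predict(X) != y
        err = w[miss].sum()
        if err == 0 or err >= 0.5:             # AdaBoost.M1 stopping rule
            break
        alpha = np.log((1.0 - err) / err)      # learner weight
        w *= np.exp(alpha * miss)              # up-weight misclassified points
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas

def geometric_mean_accuracy(y_true, y_pred):
    # Geometric mean of per-class recalls: a low recall on any single rating
    # class drags the score toward zero, unlike arithmetic mean accuracy.
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))
```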

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that use information to create value is growing day by day. With the development of IT, it has also become easy to collect and use information, and many companies across a variety of industries actively use customer information for marketing. Into the 21st century, companies have been actively using culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so performing cultural activities has become a common tool of differentiation among firms. Many firms have incorporated customer experience into new marketing strategies in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide new experiences based on personal profile information reflecting the characteristics of the individual is emerging rapidly. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is therefore very important today; through it, we can judge the interaction between people and content and maximize customer experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has been emerging recently. Existing studies have mostly performed emotion recognition using bio-signals, and most are voice and face studies, where emotional changes are pronounced. However, equipment limitations and service environments make it difficult to predict people's emotions in these ways. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome the existing limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers. This paper develops a model that recognizes people's emotional states from body gestures and postures using the difference-image method, and identifies the best-validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and appropriate stimulus movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference-image method, and the data were refined to build the proposed model with a neural network. The proposed emotion prediction model used three time-frame sets (20 frames, 30 frames, and 40 frames), and the model with the best performance relative to the others was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed emotion prediction model was constructed as an artificial neural network. We used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron neural network with one hidden layer and four output nodes. Based on the test data set, training was stopped at 50,000 iterations after the minimum error had been reached, in order to locate the optimal stopping point.
We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed 100% prediction accuracy for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized services in industries such as advertising, exhibitions, and performances.
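
As a rough illustration of the difference-image step described above, the sketch below computes a binary motion mask and a moving-pixel ratio between consecutive video frames with OpenCV. The threshold value and the idea of feeding per-frame ratios for a 20/30/40-frame window into the classifier are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: difference-image extraction of body movement between consecutive
# frames (OpenCV). The threshold and the moving-pixel-ratio feature are
# illustrative assumptions, not values taken from the paper.
import cv2
import numpy as np

def motion_features(frames, thresh=25):
    """frames: list of BGR images for one time-frame set (e.g. 20, 30, or 40).
    Returns one moving-pixel ratio per consecutive frame pair."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    ratios = []
    for prev, curr in zip(grays, grays[1:]):
        diff = cv2.absdiff(curr, prev)                     # difference image
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        ratios.append(np.count_nonzero(mask) / mask.size)  # fraction of moving pixels
    return np.asarray(ratios, dtype=np.float32)            # input vector for an MLP
```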

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized enterprises and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards for the calculation methods are likewise required. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by using stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction models. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two models' forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
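
A minimal sketch of the out-of-fold stacking scheme described above, with seven folds of sub-model forecasts feeding a meta-model. Random Forest and MLP match two of the paper's comparison models, but the logistic regression meta-learner, all hyperparameters, and the binary default label are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: stacking ensemble with seven-fold out-of-fold sub-model forecasts.
# Sub-models, meta-learner, and the binary default label are illustrative.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

def stack_fit(X, y, n_folds=7):
    base = [RandomForestClassifier(n_estimators=200, random_state=0),
            MLPClassifier(max_iter=500, random_state=0)]
    oof = np.zeros((len(X), len(base)))            # out-of-fold forecasts
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for j, proto in enumerate(base):
        for tr, va in kf.split(X):
            m = clone(proto).fit(X[tr], y[tr])
            oof[va, j] = m.predict_proba(X[va])[:, 1]
    meta = LogisticRegression().fit(oof, y)        # meta-model over forecasts
    fitted = [clone(p).fit(X, y) for p in base]    # refit sub-models on all data
    return fitted, meta

def stack_predict(fitted, meta, X_new):
    Z = np.column_stack([m.predict_proba(X_new)[:, 1] for m in fitted])
    return meta.predict_proba(Z)[:, 1]             # final default probability
```

Training the meta-model only on out-of-fold forecasts keeps it from learning the sub-models' in-sample optimism, which is the usual rationale for the fold-splitting step the abstract describes.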

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.57-78
    • /
    • 2020
  • With the development of the Internet, consumers can easily check product information through E-Commerce. Product reviews used in the purchasing process are based on user experience, allowing consumers to act as producers of information as well as consumers of it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view it can help with product development and strengthen competitiveness. However, for the products a consumer wants to compare, it takes a great deal of time and effort to read the vast number of reviews offered by E-Commerce and grasp the overall assessment along the assessment dimensions that consumer considers important. This is because product reviews are unstructured information, whose sentiment and assessment dimensions cannot be read off immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along dimensions such as performance, weight, delivery, speed, and design. In this paper, we therefore propose a method to automatically generate multi-dimensional assessment scores from the reviews of products to be compared. The proposed method consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are built from reviews of the large-category product group. By combining word embedding with association analysis, the dimension classification model addresses a limitation of existing studies, in which word embedding methods for relating dimensions and words consider only the distance between words in sentences. The sentiment analysis model is a CNN trained on learning data tagged positive or negative at the phrase level, for accurate polarity detection. In the individual product scoring phase, these pre-prepared models are applied to phrase-level reviews. Phrases judged to describe a specific dimension are grouped by that dimension, and multi-dimensional assessment scores are obtained by aggregating the polarity of the grouped phrases according to their proportions. In the experiment, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model, and reviews of the laptops of companies S and L sold on E-Commerce were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis measuring the frequency between words and dimensions. Combining word embedding and association analysis increased the model's accuracy by 13.7%. The sentiment analysis model analyzed assessments more closely when trained at the phrase level rather than the sentence level; its accuracy was confirmed to be 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect more efficient decision making in purchasing and product development, given that products can be compared along multiple dimensions.
In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed with word embedding and association analysis together, improving the objectivity of this more precise multi-dimensional analysis. This makes it an attractive analysis model, enabling more effective service deployment in the evolving and fiercely competitive E-Commerce market while satisfying both sellers and customers.
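
The scoring phase lends itself to a compact sketch: assuming the paper's pre-trained dimension classifier and phrase-level polarity classifier as black boxes (the function names and the dimension list here are hypothetical stand-ins), per-dimension scores can be aggregated as the share of positive phrases.

```python
# Sketch: aggregating phrase-level predictions into per-dimension scores.
# `classify_dimension` and `classify_polarity` stand in for the pre-trained
# models described above; their names and the dimension list are hypothetical.
from collections import defaultdict

DIMENSIONS = ["performance", "weight", "delivery", "speed", "design", "price"]

def score_product(phrases, classify_dimension, classify_polarity):
    pos, total = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        dim = classify_dimension(phrase)     # one of DIMENSIONS, or None
        if dim is None:
            continue                         # phrase describes no known dimension
        total[dim] += 1
        if classify_polarity(phrase) == "positive":
            pos[dim] += 1
    # score per dimension: proportion of positive phrases
    return {d: pos[d] / total[d] for d in total}
```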

Mid-term results of Intracardiac Lateral Tunnel Fontan Procedure in the Treatment of Patients with a Functional Single Ventricle (기능적 단심실 환자에 대한 심장내 외측통로 폰탄술식의 중기 수술성적)

  • 이정렬;김용진;노준량
    • Journal of Chest Surgery
    • /
    • v.31 no.5
    • /
    • pp.472-480
    • /
    • 1998
  • We reviewed the surgical results of the intracardiac lateral tunnel Fontan procedure for the repair of functional single ventricles. Between 1990 and 1996, 104 patients underwent total cavopulmonary anastomosis. Patients' age and body weight averaged 35.9 (range, 10 to 173) months and 12.8 (range, 6.5 to 37.8) kg. Preoperative diagnoses included 18 tricuspid atresias, 53 double inlet ventricles with univentricular atrioventricular connection, and 33 other complex lesions. Previous palliative operations had been performed in 50 of these patients, including 37 systemic-to-pulmonary artery shunts, 13 pulmonary artery bandings, 15 surgical atrial septectomies, 2 arterial switch procedures, 2 resections of subaortic conus, 2 repairs of total anomalous pulmonary venous connection, and 1 Damus-Stansel-Kaye procedure. In 19 patients a bidirectional cavopulmonary shunt operation was performed before the Fontan procedure, and in 1 patient a Kawashima procedure was required. Preoperative hemodynamics revealed a mean pulmonary artery pressure of 14.6 (range, 5 to 28) mmHg, a mean pulmonary vascular resistance of 2.2 (range, 0.4 to 6.9) Wood units, a mean pulmonary-to-systemic flow ratio of 0.9 (range, 0.3 to 3.0), a mean ventricular end-diastolic pressure of 9.0 (range, 3.0 to 21.0) mmHg, and a mean arterial oxygen saturation of 76.0 (range, 45.6 to 88.0)%. The operative procedure consisted of a longitudinal right atriotomy 2 cm lateral to the terminal crest up to the right atrial auricle, followed by the creation of a lateral tunnel connecting the orifices of either the superior caval vein or the right atrial auricle to the inferior caval vein, using a Gore-Tex vascular graft with or without a fenestration. Concomitant procedures at the time of the Fontan procedure included 22 pulmonary artery angioplasties, 21 atrial septectomies, 4 atrioventricular valve replacements or repairs, 4 corrections of anomalous pulmonary venous connection, and 3 permanent pacemaker implantations. In 31 patients a fenestration was created, and in 1 an adjustable communication was made in the lateral tunnel pathway. One lateral tunnel conversion was performed in a patient with recurrent intractable tachyarrhythmia 4 years after the initial atriopulmonary connection. Post-extubation hemodynamic data revealed a mean pulmonary artery pressure of 12.7 (range, 8 to 21) mmHg, a mean ventricular end-diastolic pressure of 7.6 (range, 4 to 12) mmHg, and a mean room-air arterial oxygen saturation of 89.9 (range, 68 to 100)%. The follow-up duration averaged 27 (range, 1 to 85) months. Post-Fontan complications included 11 prolonged pleural effusions, 8 arrhythmias, 9 chylothoraces, 5 cases of central nervous system damage, 5 infectious complications, and 4 cases of acute renal failure. Seven early (6.7%) and 5 late (4.8%) deaths occurred. These results show that the lateral tunnel Fontan procedure provides excellent hemodynamic improvement with acceptable mortality and morbidity for hearts with various types of functional single ventricle.

Comparative Analysis of Delivery Management in Various Medical Facilities (의료기관별 분만관리 양상의 비교 분석)

  • Park, Jung-Han;You, Young-Sook;Kim, Jang-Rak
    • Journal of Preventive Medicine and Public Health
    • /
    • v.22 no.4 s.28
    • /
    • pp.555-577
    • /
    • 1989
  • This study was conducted to compare delivery management, including laboratory tests, medication, and surgical procedures, across various medical facilities. Two university hospitals, two general hospitals, three hospitals, two private obstetric clinics, and two midwifery clinics in a large city were selected, as they permitted the investigators to abstract the required data from medical and accounting records. The total number of deliveries at these 11 facilities between 15 January and 15 February 1989 was 789, of which 606 (76.8%) were vaginal deliveries and 183 (23.2%) were C-sections. For normal vaginal deliveries, CBC, Hb/Hct level, blood typing, VDRL, hepatitis B antigen and antibody, and urinalysis were routinely done, except at the private clinics and midwifery clinics, which did not test for hepatitis B or Hb/Hct level at all. In one university hospital ultrasonography was performed in 71.4% of the mothers, and in one general hospital a liver function test was done in 76.7% of the mothers. For C-sections, chest X-ray, bleeding/clotting time, and liver function tests were routinely done in addition to the routine tests for normal vaginal deliveries. Episiotomy was performed in 97.2% of the vaginal deliveries. The type and duration of fluid infused and antibiotics administered varied widely among the medical facilities. In one university hospital antibiotics were not administered after C-section at all, while in the general hospitals and hospitals one or two antibiotics were administered for one week on average. In one private clinic one pint of whole blood was transfused routinely. A wide variation was also observed among the medical facilities in the use of vitamins, hemostatics, oxytocics, antipyretics, analgesics, anti-inflammatory agents, sedatives, digestives, stool softeners, antihistamines, and diuretics. The mean hospital stay for normal vaginal deliveries of primiparas was 2.6 days, with little variation except at one hospital with 3.5 days. The mean hospital stay for C-sections was 7.5 days for primiparas and 7.6 days for multiparas, ranging between 6.5 and 9.4 days. The average hospital fee for a normal vaginal delivery without medical insurance coverage was 182,100 Won for primiparas and 167,300 Won for multiparas; with medical insurance coverage, a primiparous mother paid 82,400 Won and a multiparous mother 75,600 Won. The average hospital fee for a C-section without medical insurance was 946,500 Won for primiparas and 753,800 Won for multiparas; with medical insurance, a primiparous mother paid 256,200 Won and a multiparous mother 253,700 Won. The average hospital fee for a normal vaginal delivery in the two university hospitals showed a remarkable difference, 268,000 Won vs. 350,000 Won, as did the fee for a C-section. The wide variation in laboratory tests for normal vaginal deliveries and C-sections, as well as in medication and hospital days, brought about a big difference in hospital fees, and some hospitals were practicing a case payment system. Thus, standardization of medical care to a certain level is warranted to provide adequate delivery care.

Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji;Kim, Yoosin;Kim, Namgyu;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.95-110
    • /
    • 2013
  • Recently, the amount of unstructured data generated through a variety of social media has been increasing rapidly, resulting in an increasing need to collect, store, search, analyze, and visualize this data. Such data cannot be handled appropriately with the traditional methodologies usually used for structured data, because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data, such as text files and log files, through various commercial or noncommercial analytical tools. Among the contemporary issues dealt with in the literature on unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneering researchers and business practitioners. Opinion mining, or sentiment analysis, refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not have been solved by existing traditional approaches. One of the most representative attempts using opinion mining may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is a traditional example of unstructured text data. Every day, a large volume of new content is created, digitalized, and distributed via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies, including ours, have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, of sentences in a document, and of the whole document. However, most traditional approaches share a common limitation: they do not consider the flexibility of sentiment polarity; that is, in a traditional sentiment dictionary the sentiment polarity or value of a word is fixed and cannot be changed. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated our study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement this idea, we present an intelligent investment decision-support model based on opinion mining that scrapes and parses massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index.
In addition, we applied a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For performance evaluation, we conducted intensive experiments and investigated the prediction accuracy of our model. For the experiments on predicting the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by the "M" and "E" media between July 2011 and September 2011.
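
To make the dictionary-based classification step concrete, the sketch below scores tokenized articles against a small domain-specific dictionary and aggregates daily polarity into a next-day direction signal. The dictionary entries (English stand-ins for the paper's Korean news vocabulary), the averaging scheme, and the voting threshold are all illustrative assumptions.

```python
# Sketch: dictionary-based news polarity and next-day direction signal.
# Dictionary entries, averaging, and threshold are illustrative assumptions;
# the paper's dictionary is domain-specific and built from Korean news text.
DOMAIN_DICT = {"surge": 1.0, "rally": 1.0, "recovery": 0.5,
               "plunge": -1.0, "default": -1.0, "slowdown": -0.5}

def article_polarity(tokens, dictionary=DOMAIN_DICT):
    hits = [dictionary[t] for t in tokens if t in dictionary]
    return sum(hits) / len(hits) if hits else 0.0   # mean sentiment of matched words

def next_day_direction(day_articles, threshold=0.0):
    # Classify each article, then let the day's net polarity vote on direction.
    day_score = sum(article_polarity(a) for a in day_articles)
    return "up" if day_score > threshold else "down"
```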

Evaluation of Combine IGRT using ExacTrac and CBCT in SBRT (정위적체부방사선치료시 ExacTrac과 CBCT를 이용한 Combine IGRT의 유용성 평가)

  • Ahn, Min Woo;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Lee, Doo Sang;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.201-208
    • /
    • 2018
  • Purpose: The purpose of this study is to compare and analyze set-up errors when Combine IGRT, applying ExacTrac and CBCT in sequence, is used in stereotactic body radiotherapy. Methods and materials: Patients treated with stereotactic body radiotherapy at Ulsan University Hospital from May 2014 to November 2017 were classified by treatment area: three brain, nine spine, and three pelvis. Set-up errors were first corrected using ExacTrac in the lateral (Lat), longitudinal (Lng), vertical (Vrt), roll, pitch, and yaw directions; then, after the ExacTrac couch shifts were applied, CBCT was additionally used and set-up errors were corrected in the Lat, Lng, Vrt, and rotation (Rtn) directions. Results: With ExacTrac, the errors were: brain, Lat 0.18 ± 0.25 cm, Lng 0.23 ± 0.04 cm, Vrt 0.30 ± 0.36 cm, Roll 0.36 ± 0.21°, Pitch 1.72 ± 0.62°, Yaw 1.80 ± 1.21°; spine, Lat 0.21 ± 0.24 cm, Lng 0.27 ± 0.36 cm, Vrt 0.26 ± 0.42 cm, Roll 1.01 ± 1.17°, Pitch 0.66 ± 0.45°, Yaw 0.71 ± 0.58°; pelvis, Lat 0.20 ± 0.16 cm, Lng 0.24 ± 0.29 cm, Vrt 0.28 ± 0.29 cm, Roll 0.83 ± 0.21°, Pitch 0.57 ± 0.45°, Yaw 0.52 ± 0.27°. When CBCT was performed after the couch movement, the errors were: brain, Lat 0.06 ± 0.05 cm, Lng 0.07 ± 0.06 cm, Vrt 0.00 ± 0.00 cm, Rtn 0.0 ± 0.0°; spine, Lat 0.06 ± 0.04 cm, Lng 0.16 ± 0.30 cm, Vrt 0.08 ± 0.08 cm, Rtn 0.00 ± 0.00°; pelvis, Lat 0.06 ± 0.07 cm, Lng 0.04 ± 0.05 cm, Vrt 0.06 ± 0.04 cm, Rtn 0.0 ± 0.0°. Conclusion: Combine IGRT, using CBCT in addition to ExacTrac during stereotactic body radiotherapy, reduced patient set-up errors compared with ExacTrac alone. However, Combine IGRT increases the patient set-up verification time and the absorbed dose from image acquisition. Therefore, depending on the patient's situation, using Combine IGRT to reduce set-up error can increase the effectiveness of radiation treatment.
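
For readers reproducing such comparisons, a small sketch of the summary arithmetic (mean ± sample standard deviation per axis) is given below; the shift-log structure is a hypothetical assumption, not an ExacTrac or CBCT export format.

```python
# Sketch: per-axis mean ± SD of residual set-up corrections, as reported above.
# The input structure is hypothetical, not a real ExacTrac/CBCT log format.
import numpy as np

def summarize_shifts(shifts):
    """shifts: dict mapping an axis name ('Lat', 'Lng', 'Vrt', 'Rtn', ...) to a
    list of absolute residual corrections (cm for translations, degrees for
    rotations). Returns {axis: (mean, sample SD)}."""
    return {axis: (float(np.mean(v)), float(np.std(v, ddof=1)))
            for axis, v in shifts.items()}

# Example: summarize_shifts({"Lat": [0.06, 0.11, 0.02], "Rtn": [0.0, 0.0, 0.0]})
```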

Antioxidative Activity and Component Analysis of Psidium guajava Leaf Extracts (구아바 잎 추출물의 항산화 활성과 성분 분석)

  • Yang, Hee-Jung;Kim, Eun-Hee;Park, Soo-Nam
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.34 no.3
    • /
    • pp.233-244
    • /
    • 2008
  • In this study, the antioxidative effects, the inhibitory effects on elastase and tyrosinase, and the components of Psidium guajava leaf extracts were investigated. The free radical (1,1-diphenyl-2-picrylhydrazyl, DPPH) scavenging activities (FSC50) of the extract/fractions of Psidium guajava leaf were in the order: 50% ethanol extract (7.05 μg/mL) < ethyl acetate fraction (3.36 μg/mL) < deglycosylated flavonoid aglycone fraction (3.24 μg/mL). The reactive oxygen species (ROS) scavenging activities (OSC50) of the Psidium guajava leaf extracts on ROS generated in the Fe³⁺-EDTA/H₂O₂ system were investigated using the luminol-dependent chemiluminescence assay. The order of ROS scavenging activities was: 50% ethanol extract (OSC50, 2.17 μg/mL) < ethyl acetate fraction (0.64 μg/mL) < deglycosylated flavonoid aglycone fraction (3.39 μg/mL); the aglycone fraction showed the most prominent ROS scavenging activity. The protective effects of the extract/fractions of Psidium guajava leaf on the rose bengal-sensitized photohemolysis of human erythrocytes were also investigated. The extracts suppressed photohemolysis in a concentration-dependent manner (1~10 μg/mL); in particular, the deglycosylated flavonoid aglycone fraction exhibited the most prominent cellular protective effect (τ50 = 107.5 min at 1 μg/mL). The aglycone fraction, obtained from the deglycosylation reaction of the ethyl acetate fraction, showed one band in TLC and one peak in HPLC experiments (360 nm); the component was identified as quercetin. The TLC chromatogram of the ethyl acetate fraction revealed 5 bands, and the HPLC chromatogram showed 5 peaks, identified in order of elution time as quercetin 3-O-gentobioside (10.32%), quercetin 3-O-β-D-glucoside (isoquercitin, 13.30%), quercetin 3-O-β-D-galactoside (hyperin, 11.34%), quercetin 3-O-α-L-arabinoside (guajavarin, 19.70%), and quercetin 3-O-β-L-rhamnoside (quercitrin, 45.33%). The inhibitory effects of the extracts on tyrosinase were investigated to assess their whitening efficacy, and their anti-elastase activities were measured to predict anti-wrinkle efficacy in human skin. The inhibitory effects (IC50) on tyrosinase were in the order: 50% ethanol extract (149.67 μg/mL) < ethyl acetate fraction (30.67 μg/mL) < deglycosylated aglycone fraction (17.10 μg/mL). The inhibitory effects (IC50) on elastase were in the order: 50% ethanol extract (6.60 μg/mL) < deglycosylated aglycone fraction (5.66 μg/mL) < ethyl acetate fraction (3.44 μg/mL). These results indicate that the extract/fractions of Psidium guajava leaf can function as antioxidants in biological systems, particularly in skin exposed to UV radiation, by scavenging ¹O₂ and other ROS and by protecting cellular membranes against ROS. The component analysis of the extract and the inhibitory activity of the aglycone fraction on elastase could be applicable to new functional cosmetics for smoothing wrinkles.

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies
    • /
    • no.22
    • /
    • pp.157-199
    • /
    • 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussion in the early 1930s. The main criticism was that the Secretariat had failed to recognize its political role and had degenerated into a mere "functional organization." The solution was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" of the 1940s, the party emphasized the responsibility of the Resources Department (材料科), which extended beyond managing documents to collecting, organizing, and providing various kinds of important information. In the meantime, security in composing documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communication between the central political organs and regional offices was emphasized through regular reports on work activities and local conditions. The General Secretary not only composed the drafts of major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the Document Safekeeping Office (文件保管處) of the Central Secretariat's Department of Documents. Although the Document Safekeeping Office, also called the Central Repository (中央文庫), could no longer accept additional archive transfers beginning in the early 1930s, the Resources Department continued to strengthen, throughout the 1940s, its role of safekeeping and providing documents and publication materials. In particular, collections of materials for research and study were carried out, and with the recovery of regions that had been under Japanese rule, massive amounts of archive and document materials were collected. After being stipulated by rules in 1931, archive classification and cataloguing methods were actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of taking "importance" and "confidentiality" as the criteria of management emerged relatively early, but the concept or process of appraisal that differentiated documents for preservation from those for disposal was not clear. While implementing a system of secure management and restricted access for confidential information, the party strongly emphasized providing archive materials for use, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records and archives. The results were not always desirable, nor did these experiences necessarily lead to stable development; the historical conditions in which the Chinese Communist Party found itself probably made that inevitable.
The most pronounced characteristic of this process is that the party not only pursued efficiency of records and archives management at the functional level but, while strengthening its awareness of the political significance of that work for the Chinese Communist Party's revolutionary movement, also paid attention to the value of archive materials as practical evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.