• Title/Summary/Keyword: Application level

The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok;Yoo, Hee-Jae;Ryu, Jae-Kwang;Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.67-72
    • /
    • 2010
  • Purpose: At Asan Medical Center, we perform myocardial perfusion SPECT to evaluate the cardiac event risk of patients scheduled for non-cardiac surgery. For patients with cancer, tumor metastasis is first checked with a whole body bone scan and a whole body PET scan, and myocardial perfusion SPECT is then performed to avoid unnecessary examinations. To shorten the hospitalization period of short-term inpatients, we perform $^{201}Tl$ myocardial perfusion SPECT a minimum of 16 hours after the whole body bone scan, but the effect of the crosstalk contamination caused by administering two different isotopes has not been properly evaluated. In this study, we therefore evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT. Patients were classified according to whether a whole body bone scan had been performed on the previous day. Image data were acquired using a dual energy window during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ to $^{99m}Tc$ counts was analyzed for each patient group. For the phantom experiment, we placed $^{201}Tl$ 14.8 MBq (0.4 mCi) in the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) in the extracardiac region of an anthropomorphic torso phantom, acquired $^{201}Tl$ myocardial perfusion SPECT images without gating, and analyzed spatial resolution using Xeleris ver. 2.0551. Results: In patients who had undergone a whole body bone scan the previous day, the ratio of counts in the $^{99m}Tc$ window to counts in the $^{201}Tl$ window decreased exponentially with elapsed time after bone tracer injection, from 1:0.411 at 12 hours to 1:0.114 at 24 hours on the Ventri (GE Healthcare, Wisconsin, USA) and from 1:0.79 to 1:0.249 on the Infinia (GE Healthcare, Wisconsin, USA) (Ventri p=0.001, Infinia p=0.001). In patients who had not undergone a whole body bone scan, the average ratio was 1:$0.067{\pm}0.6$ on the Ventri and 1:$0.063{\pm}0.7$ on the Infinia. In the phantom experiment, the FWHM showed no significant change with the addition of $^{99m}Tc$ or with elapsed time (p=0.134). Conclusion: Through the experiments using the anthropomorphic torso phantom and patient data, we confirmed that $^{201}Tl$ myocardial perfusion SPECT images acquired 16 hours or more after bone tracer injection show no notable degradation of spatial resolution due to $^{99m}Tc$. However, this investigation addressed image quality only, so further study of patient radiation dose and of examination accuracy and precision is needed. An exact guideline on the examination interval should be established through standardized validation tests of the crosstalk contamination that arises when different isotopes are used.
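
The exponential decline of the crosstalk ratio reported in this abstract can be illustrated with a short calculation. The sketch below is a minimal illustration, not part of the original study: it assumes a single-exponential decline and uses only the 12-hour, 24-hour, and no-bone-scan baseline ratios quoted above to extrapolate roughly when each camera's $^{99m}Tc$/$^{201}Tl$ window ratio would approach baseline. All variable names are ours.

```python
# Hypothetical extrapolation of the 99mTc/201Tl window count ratio reported in
# the abstract, assuming a single-exponential decline between 12 h and 24 h.
import numpy as np

ratios = {                       # camera: (ratio at 12 h, ratio at 24 h, baseline)
    "Ventri":  (0.411, 0.114, 0.067),
    "Infinia": (0.790, 0.249, 0.063),
}

for camera, (r12, r24, baseline) in ratios.items():
    # Assume r(t) = r12 * exp(-k * (t - 12)); solve k from the two data points.
    k = np.log(r12 / r24) / 12.0                      # decay constant per hour
    t_baseline = 12.0 + np.log(r12 / baseline) / k    # time when r(t) hits baseline
    print(f"{camera}: k = {k:.3f}/h, ratio reaches baseline near {t_baseline:.1f} h")
```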

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.83-89
    • /
    • 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. (U.S.) improves image resolution by eliminating the effect of the collimator's line spread function and by suppressing noise; it controls the resolution and noise level automatically and yields high image quality. The aim of this study was to evaluate the usefulness of WBR for whole body bone scans in clinical applications. Materials and Methods: Standard line source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an Infinia (GE, Milwaukee, WI) gamma camera equipped with low energy high resolution (LEHR) collimators. Line source measurements were acquired with total counts of 200 kcps and 300 kcps, and the SPECT phantom studies analyzed spatial resolution while varying the matrix size. A clinical evaluation study was also performed with forty-three patients referred for bone scans. In the first group, the scan speed was varied between 20 and 30 cm/min with a fixed dose of 740 MBq (20 mCi) of $^{99m}Tc$-HDP; in the second group, the dose of $^{99m}Tc$-HDP was varied between 740 and 1,110 MBq (20 mCi and 30 mCi) at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol and the WBR protocol. Patient information was removed and a blind reading was done for each reconstruction method; for each reading, the reader completed a questionnaire scored on a scale of 1 to 5 points. Results: The planar WBR data improved resolution by more than 10%; the full width at half maximum (FWHM) improved by about 16% (standard: 8.45, WBR: 7.09). The SPECT WBR data improved resolution by more than about 50% (FWHM standard: 3.52, WBR: 1.65). In the clinical evaluation study, there was no statistically significant difference between the two methods in the bone-to-soft-tissue ratio or the image resolution (first group p=0.07, second group p=0.458). Conclusion: The WBR method allows the acquisition time of bone scans to be shortened while simultaneously providing improved image quality, and allows the dose of radiopharmaceuticals, and thus the radiation dose, to be reduced. Therefore, the WBR method can be applied to a wide range of clinical applications, providing clinical value as well as image quality.
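
As a quick arithmetic check of the FWHM figures quoted above, the short sketch below computes the percentage improvements for the planar and SPECT cases; the helper function is ours, not part of the study.

```python
# Percentage reduction in FWHM of the WBR reconstruction relative to the
# standard reconstruction, using the values quoted in the abstract.
def improvement(fwhm_standard: float, fwhm_wbr: float) -> float:
    """Percent improvement in resolution (smaller FWHM is better)."""
    return 100.0 * (fwhm_standard - fwhm_wbr) / fwhm_standard

print(f"Planar: {improvement(8.45, 7.09):.1f}% better")   # ~16%
print(f"SPECT : {improvement(3.52, 1.65):.1f}% better")   # ~53%
```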

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term trading prediction and for the application of statistical and mathematical techniques. Most studies based on these technical indicators have built models that predict stock prices by binary classification of future market movement (usually the next trading day) as rising or falling. However, such binary classification has many unfavorable aspects when predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by extending the existing binary scheme to a multi-class classification of the stock index trend (upward trend, boxed, downward trend). Techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN) could be used to solve this multi-class problem; instead, we adopt multi-class support vector machines (MSVM), which have proven superior in prediction performance, and propose an optimization model that uses a genetic algorithm as a wrapper to improve its performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method outperforms the conventional MSVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the improvement of the model than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims at predicting trend segments in order to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators such as the price and volatility of the KOSPI200 stock index (2004-2017) and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, was coded into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
For the MSVM, we adopted the One-Against-One (OAO) approach, which is known to be the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
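
The GA-wrapper idea described in this abstract can be sketched in a few dozen lines. The sketch below is a rough illustration under our own assumptions, not the authors' implementation: it assumes scikit-learn's SVC (whose multi-class mode is one-against-one), a numpy feature matrix X and label vector y, and a simple hand-rolled genetic algorithm whose chromosome layout, mutation rate, and selection scheme are choices made purely for illustration.

```python
# Sketch of a GA wrapper that jointly selects features, training instances,
# and RBF-SVM hyper-parameters (a simplified stand-in for the GA-MSVM idea).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(chrom, X, y):
    """Cross-validated accuracy of an SVM trained on the selected features/instances."""
    n_feat = X.shape[1]
    feat_mask = chrom[:n_feat].astype(bool)
    inst_mask = chrom[n_feat:-2].astype(bool)
    if feat_mask.sum() == 0 or inst_mask.sum() < 30:   # degenerate chromosome
        return 0.0
    C, gamma = 10.0 ** chrom[-2], 10.0 ** chrom[-1]    # log-scale parameter genes
    clf = SVC(C=C, gamma=gamma, kernel="rbf")          # one-against-one by default
    return cross_val_score(clf, X[inst_mask][:, feat_mask], y[inst_mask], cv=3).mean()

def ga_msvm(X, y, pop_size=20, generations=10):
    n_feat, n_inst = X.shape[1], X.shape[0]
    # Chromosome = [feature bits | instance bits | log10(C), log10(gamma)]
    pop = np.hstack([
        rng.integers(0, 2, (pop_size, n_feat + n_inst)).astype(float),
        rng.uniform(-1, 3, (pop_size, 1)),             # log10(C)
        rng.uniform(-4, 0, (pop_size, 1)),             # log10(gamma)
    ])
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # keep the best half
        children = parents.copy()
        cut = rng.integers(1, pop.shape[1] - 1, len(children))
        for c, p, k in zip(children, parents[::-1], cut):       # one-point crossover
            c[k:] = p[k:]
        binary = children[:, :n_feat + n_inst]
        flip = rng.random(binary.shape) < 0.05                  # bit-flip mutation
        children[:, :n_feat + n_inst] = np.where(flip, 1 - binary, binary)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()], scores.max()
```

Under these assumptions, `ga_msvm(X, y)` would return the best chromosome (selected features, selected instances, and kernel parameters) together with its cross-validated accuracy, which is the role the genetic wrapper plays in the model described above.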

A Study on Air Operator Certification and Safety Oversight Audit Program in light of the Convention on International Civil Aviation (시카고협약체계에서의 항공안전평가제도에 관한 연구)

  • Lee, Koo-Hee;Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.28 no.1
    • /
    • pp.115-157
    • /
    • 2013
  • Some contracting States of the Convention on International Civil Aviation (commonly known as the Chicago Convention) issue FAOCs (Foreign AOC and/or Operations Specifications) and conduct various safety audits of foreign operators, and these FAOCs and safety audits are being expanded to other parts of the world. While this trend strengthens aviation safety and helps reduce aircraft accidents, it is a source of concern from both legal and economic perspectives. The FAOC of the USA doubly burdens the other contracting States to the Chicago Convention because it imposes requirements beyond those prescribed by the Chicago Convention, whose provisions are faithfully observed by almost all contracting States. The Chicago Convention, in its Article 33, stipulates that each contracting State shall recognize the validity of the certificates of airworthiness and licenses issued by other contracting States as long as they meet the minimum standards of ICAO. Consequently, it is submitted that the unilateral actions of the USA, China, Mongolia, Australia, and the Philippines in issuing FAOCs for the aircraft of other States are against the Convention. It is worrisome that this breach of international law is likely to be followed by the European Union, which is believed to be preparing its own unilateral application. ICAO, established by the Chicago Convention to be in charge of the safe and orderly development of international civil aviation, has worked hard to upgrade and emphasize the safe operation of aircraft. As a result of these endeavors, it prepared a new Annex 19 to the Chicago Convention, entitled "Safety Management," applicable from 14 November 2013. It is this Annex, together with the other ICAO documents relevant to safety, that the contracting States to the Chicago Convention have to observe; otherwise, operators face economic burdens caused by probable delays in issuing the FAOC and by bureaucracy combined with differing paperwork and regulations depending on where the aircraft is flown. It is exactly this type of confusion and waste that the Chicago Convention aimed to avoid when it was adopted in 1944. The State of the operator shall establish a system for both the certification and the continued surveillance of the operator in accordance with ICAO SARPs to ensure that the required standards of operations are maintained. Certainly the operator shall meet and maintain the requirements established by the States in which it operates. However, the authority of a State stops where the authority of another State intervenes or where the former has yielded its power by an international agreement for the sake of international cooperation. Hence, it is not within the realm of a State to issue an FAOC to foreign operators merely because those operators fly in and out of the State. Furthermore, there are other safety audits, such as ICAO USOAP, IATA IOSA, FAA IASA, and EU SAFA, that assure the safe operation of aircraft, but within the limits of their power and in compliance with the ICAO SARPs. If the safety level of any operator is not satisfactory, that operator can be banned, under watchful eyes, from operating in the contracting States until the ICAO SARPs are met. This time-honoured practice has been applied without any serious problems. Besides, we now have the new Annex 19, which strengthens and upgrades safety oversight with easy reference for contracting States.
There is no reason to impose additional burdens on States through the unilateral actions of some States; these actions have to be corrected. On the other hand, when it comes to the carriage of the Personal or Pilot Log Book, the Korean regulation requiring it contrasts with the relevant provisions of the USA, USOAP, IOSA, and SAFA. The Chicago Convention, in its Articles 29 and 34, requires only the carriage of the Journey Log Book and some other certificates, and does not mention the Personal Log Book at all. Paragraph 5.1.1.1 of Annex 1 to the Chicago Convention even makes it clear that carriage of the Personal Log Book in the aircraft is not required on international flights. The unique Korean regulation in this regard, which places an unnecessary burden on the national flag air carriers, has to be lifted at once.

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo;Kim, Byoung-Jai;Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.111-131
    • /
    • 2007
  • As brands have become core assets in creating corporate value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management has increasingly been organized around the brand rather than merely around the possession and consumption of goods, and brand-centered management has developed accordingly. The main reason for the increased interest in the relationship between the brand and the consumer is that acquiring individual consumers and developing relationships with them allows a corporation to establish long-term relationships, which become a competitive advantage and a strategic asset for the corporation. The growing importance of brands has also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are research streams derived from this interest. More specifically, in marketing, the study of brands has led to the study of the factors involved in building powerful brands and of the brand-building process. Recent studies have concentrated primarily on the consumer-brand relationship, because brand loyalty alone cannot explain the dynamic quality aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption but is conceptualized as a partner. Most past studies concentrated on qualitative analyses of the consumer-brand relationship to show the depth and breadth of its performance, and studies in Korea have followed the same pattern. Recently, studies of the consumer-brand relationship have begun to emphasize quantitative rather than qualitative analysis, or have gone further to identify the factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of providing a new way of viewing the consumer-brand relationship and of applying these concepts to marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them treat the sub-dimensions of the consumer-brand relationship with theoretical justification for the measurement; in other words, most studies simply add up or average the sub-dimensions. However, such an approach presupposes that the sub-dimensions form an identical construct, a precondition that most past studies do not meet. This raises questions about the validity and limitations of past studies. The main purpose of this paper is to overcome these limitations by drawing on previous studies that treat sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were conducted on each group. For convergent validity, correlations, Cronbach's alpha, and a one-factor exploratory factor analysis were used. For discriminant validity, the correlation structure of the consumer-brand relationship was compared with that of involvement, a concept similar to the consumer-brand relationship.
The test for dependent correlations suggested by Cohen and Cohen (1975, p. 35) was also applied, and the results showed that involvement is a construct distinct from the six sub-dimensions of the consumer-brand relationship. Through the results described above, we were able to conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, and that this one-dimensional construct can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a methodological basis grounded in previous work. It also opens the possibility of new research on the consumer-brand relationship, since it shows that a one-dimensional construct composed of the consumer-brand relationship sub-dimensions can be operationalized. Previous research has classified the consumer-brand relationship into several types on the basis of its components and has given priority to studying those types. However, as this research shows that a one-dimensional construct can be operationalized, it is expected that studies treating the level or strength of the consumer-brand relationship as a practical construct will be performed, rather than research focused only on separate relationship types. In addition, we now have a theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and studies that use this construct as a dependent variable, parameter, or mediator are anticipated.
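
The convergent-validity tools mentioned above (Cronbach's alpha and a one-factor exploratory solution) can be illustrated with a short sketch. The functions and the simulated six-sub-dimension scores below are our own illustrative assumptions, not the study's instrument or data.

```python
# Illustrative reliability / unidimensionality checks for a set of sub-dimension
# scores: Cronbach's alpha and the share of variance carried by the first factor.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of sub-dimension scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def first_factor_share(items: np.ndarray) -> float:
    """Proportion of variance explained by the largest eigenvalue of the correlation matrix."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return eigvals.max() / eigvals.sum()

# Purely synthetic scores standing in for the six sub-dimensions of the scale.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 6))
print(f"alpha = {cronbach_alpha(scores):.2f}, first factor share = {first_factor_share(scores):.2f}")
```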

A Study on improvement of curriculum in Nursing (간호학 교과과정 개선을 위한 조사 연구)

  • 김애실
    • Journal of Korean Academy of Nursing
    • /
    • v.4 no.2
    • /
    • pp.1-16
    • /
    • 1974
  • This study involved the development of a survey form and the collection of data in an effort to provide information that can be used to improve nursing curricula. The data examined were the kinds of courses currently taught in the curricula of nursing education institutions throughout Korea, the credits required for course completion, and the year in which courses are taken. For the purposes of this study, curricula were classified into college, nursing school, and vocational school categories. Courses were divided into the three major categories of general education courses, supporting science courses, and professional education courses, and further subdivided as follows: 1) General education (following the classification of Philip H. Phenix): a) symbolics, b) empirics, c) aesthetics, d) synnoetics, e) ethics, f) synoptics. 2) Supporting science: a) physical science, b) biological science, c) social science, d) behavioral science, e) health science, f) education. 3) Professional education: a) basic courses, b) courses in each of the respective fields of nursing. I. General education, aimed at developing the individual as a person and as a member of society, is relatively strong in college curricula compared with the other two. a) Courses included in the category of symbolics were Korean language, English, German, Chinese, mathematics, statistics, economics, and computing; most college curricula included 20 credits of courses in this sub-category, while nursing schools required 12 credits and vocational schools 10 units. English ordinarily receives particularly heavy emphasis. b) Research methodology, domestic affairs, and women's courtesy were included under the category of empirics in the college curricula; nursing and vocational schools do not offer these at all. c) Courses classified under aesthetics were physical education, drill, music, recreation, and fine arts. Most college curricula had 4 credits in these areas, nursing schools provided 2 credits, and most vocational schools offered 10 units. d) Synnoetics included leadership, interpersonal relationships, and communications; most schools did not offer courses of this nature. e) The category of ethics included citizenship; 2 credits are provided in college curricula, vocational schools require 4 units, and nursing schools do not offer these courses. f) Courses included under synoptics were Korean history, cultural history, philosophy, logic, and religion. Most college curricula provided 5 credits in these areas, nursing schools 4 credits, and vocational schools 2 units. g) Only physical education was given every year in college curricula, and only English was given every year in nursing schools and vocational schools; most of the other courses were given during the first year of the curriculum. II. Supporting science courses are fundamental to the practice and application of nursing theory. a) Physical science courses include physics, chemistry, and natural science; most colleges and nursing schools provided 2 credits of physical science courses in their curricula, while most vocational schools did not offer them. b) Courses included under biological science were anatomy, physiology, biology, and biochemistry. Most college curricula provided 15 credits of biological science, most nursing schools provided 11 credits, and most vocational schools provided 8 units. c) Courses included under social science were sociology and anthropology.
Most colleges provided 1 credit in courses of this category, while most nursing schools provided 2 credits; most vocational schools did not provide courses of this type. d) Courses included under behavioral science were general and clinical psychology, developmental psychology, mental hygiene, and guidance. Most schools did not provide these courses. e) Courses included under health science were pharmacy and pharmacology, microbiology, pathology, nutrition and dietetics, parasitology, and Chinese medicine. Most college curricula provided 11 credits, most nursing schools provided 12 credits, and most vocational schools provided 20 units of such medical courses. f) Courses included under education were educational psychology, principles of education, philosophy of education, history of education, social education, educational evaluation, educational curricula, class management, guidance techniques, and school & community. Most colleges offer 3 credits in courses in this category, while nursing schools provide 8 credits and vocational schools 6 units. 50% of the colleges prepare their students to qualify as regular teachers of the second level, while 91% of the nursing schools and 60% of the vocational schools prepare their students to qualify as school nurses. g) The majority of colleges start supporting science courses in the first year and complete them by the second year; nursing schools and vocational schools usually complete them in the first year. III. Professional education courses are designed to develop professional nursing knowledge, attitudes, and skills in the students. a) Basic courses include social nursing, nursing ethics, history of nursing, professional control, nursing administration, social medicine, social welfare, introductory nursing, advanced nursing, medical regulations, efficient nursing, nursing English, and basic nursing. College curricula devoted 13 credits to these subjects, nursing schools 14 credits, and vocational schools 26 units, indicating a severe difference in the scope of education provided. b) There was a noticeable tendency for the colleges to take a unified approach to the branches of nursing: 60% of the schools had courses in public health nursing, 80% in pediatric nursing, 60% in obstetric nursing, 90% in psychiatric nursing, and 80% in medical-surgical nursing. The greatest number of schools provided 48 credits in all of these fields combined. In most of the nursing schools, 52 credits were provided for courses divided according to disease. In the vocational schools, unified courses are provided in public health nursing, child nursing, maternal nursing, psychiatric nursing, and adult nursing; in addition, one unit is provided for one hour a week of practice. The total number of units provided in most vocational schools is thus about double the number provided in nursing schools and colleges. c) In the colleges, the second year is devoted mainly to basic nursing courses, while the third and fourth years are used for advanced nursing courses. In nursing schools and vocational schools, the first year deals primarily with basic nursing, and the second and third years cover advanced nursing courses. The study yielded the following conclusions. 1. Instructional goals should be established for each course in line with the concept of nursing, and curriculum improvements should be made accordingly. 2.
Courses that fall under the synnoetics category should be strengthened, and ways should be sought to develop the ability to cooperate with those who work for human welfare and health. 3. The ability to solve problems on the basis of scientific principles, and knowledge and understanding of man and society, should be fostered by strengthening the courses dealing with the physical, social, and behavioral sciences and by redistributing the courses emphasizing the biological and health sciences. 4. There should be more balanced curricula with less emphasis on courses in the major. Courses needed by the individual nurse should be established by doing away with courses centered around specific diseases and combining them into unified courses; in addition, skill in dealing with people can be developed by using the social setting in comprehensive training. The most efficient ratio of study experiences should be studied to provide more effective and interesting education, and elective courses should be introduced to ensure a more flexible, responsive educational program. 5. The curriculum stipulated in the education law should be examined.

Triptolide-induced Transrepression of IL-8 NF-${\kappa}B$ in Lung Epithelial Cells (폐상피세포에서 Triptolide에 의한 NF-${\kappa}B$ 의존성 IL-8 유전자 전사활성 억제기전)

  • Jee, Young-Koo;Kim, Yoon-Seup;Yun, Se-Young;Kim, Yong-Ho;Choi, Eun-Kyoung;Park, Jae-Seuk;Kim, Keu-Youl;Chea, Gi-Nam;Kwak, Sahng-June;Lee, Kye-Young
    • Tuberculosis and Respiratory Diseases
    • /
    • v.50 no.1
    • /
    • pp.52-66
    • /
    • 2001
  • Background: NF-${\kappa}B$ is the most important transcription factor in IL-8 gene expression. Triptolide is a new compound that has recently been shown to inhibit NF-${\kappa}B$ activation. The purpose of this study was to investigate how triptolide inhibits NF-${\kappa}B$-dependent IL-8 gene transcription in lung epithelial cells and to explore the potential for the clinical application of triptolide in inflammatory lung diseases. Methods: A549 cells were used, and triptolide was provided by Pharmagenesis Company (Palo Alto, CA). To examine NF-${\kappa}B$-dependent IL-8 transcriptional activity, we established stable A549 IL-8-NF-${\kappa}B$-luc. cells and performed luciferase assays. IL-8 gene expression was measured by RT-PCR and ELISA. Western blotting was done to study $I{\kappa}B{\alpha}$ degradation, and an electrophoretic mobility shift assay was done to analyze NF-${\kappa}B$ DNA binding. p65-specific transactivation was analyzed by a cotransfection study using a Gal4-p65 fusion protein expression system. To investigate the involvement of transcriptional coactivators, we performed a transfection study with CBP and SRC-1 expression vectors. Results: We observed that triptolide significantly suppresses NF-${\kappa}B$-dependent IL-8 transcriptional activity induced by IL-$1{\beta}$ and PMA. RT-PCR showed that triptolide represses both IL-$1{\beta}$- and PMA-induced IL-8 mRNA expression, and ELISA confirmed this triptolide-mediated IL-8 suppression at the protein level. However, triptolide did not affect $I{\kappa}B{\alpha}$ degradation or NF-${\kappa}B$ DNA binding. In the p65-specific transactivation study, triptolide significantly suppressed Gal4-p65TA1 and Gal4-p65TA2 activity, suggesting that triptolide inhibits NF-${\kappa}B$ activation by inhibiting p65 transactivation. However, this triptolide-mediated inhibition of p65 transactivation was not rescued by the overexpression of CBP or SRC-1, thereby excluding a role for these transcriptional coactivators. Conclusions: Triptolide is a new compound that inhibits NF-${\kappa}B$-dependent IL-8 transcriptional activation by inhibiting p65 transactivation, but not through an $I{\kappa}B{\alpha}$-dependent mechanism. This suggests that triptolide may have therapeutic potential for inflammatory lung diseases.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rely on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary classification problems; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often skew the decision boundary and produce a default classifier, reducing classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) algorithm to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier across the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
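
The contrast drawn above between arithmetic and geometric mean-based prediction accuracy can be made concrete with a small sketch. The helper functions and the toy labels below are our own illustration, not the paper's evaluation code; they simply show why the geometric mean penalizes a classifier that ignores a minority class.

```python
# Arithmetic (overall) accuracy vs. geometric mean of per-class recalls.
import numpy as np
from sklearn.metrics import confusion_matrix

def arithmetic_accuracy(y_true, y_pred) -> float:
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def geometric_mean_accuracy(y_true, y_pred) -> float:
    cm = confusion_matrix(y_true, y_pred)
    recalls = cm.diagonal() / cm.sum(axis=1)          # per-class recall
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Toy imbalanced example: a classifier that never predicts the minority class "C".
y_true = ["A"] * 8 + ["B"] * 8 + ["C"] * 4
y_pred = ["A"] * 8 + ["B"] * 12
print(arithmetic_accuracy(y_true, y_pred))       # 0.80 despite missing class C entirely
print(geometric_mean_accuracy(y_true, y_pred))   # 0.0; the geometric mean exposes it
```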

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, however, local officials have realized that they need more comprehensive welfare services for all residents, not just for the above-mentioned groups. Still, studies using the focused-group approach have remained the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and about 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise over the last few decades since the restart of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has relied primarily on the per-capita expenditure or budget set aside for welfare. This practice of using a per-capita monetary value as a proxy for welfare effort is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, the current practice of using the actual amount spent or a percentage figure as a dependent variable has limitations: since the budget or expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, such as salaries for local officials, which is largely unrelated to the actual welfare expenditure (Jang, 2011). This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. The purpose of this paper is to analyze the factors that affected the social welfare spending of the 230 local authorities in 2012, using a multiple regression-based model applied to the pooled financial data from the system. In our research model, we use the ratio of the welfare budget to the total budget (%) of a local government as the true measurement of its welfare effort (spending), excluding central government subsidies or support used for local welfare services, because central government welfare support does not truly reflect the welfare effort of a local government. The dependent variable of this paper is the volume of welfare spending, and the independent variables comprise three categories: socio-demographic characteristics, the local economy, and the financial capacity of the local government. This paper categorized local authorities into three groups (districts, cities, and suburban areas) and used a dummy variable as a control for this local political factor. The regression analysis demonstrated that the volume of spending on self-funded welfare services is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the financial self-reliance ratio, and the level of unemployment.
Interestingly, the influential factors differ by the size of the local government. In the analysis of the determinants of local governments' self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence, the financial self-reliance ratio, and the ratio of the social welfare budget), the regional economy (the job openings-to-applicants ratio), and demographic characteristics (the ratio of infants). These results mean that local authorities should adopt differentiated welfare strategies according to their own conditions and circumstances. This paper is meaningful in that it has identified significant factors influencing the self-funded welfare spending of local governments in Korea.
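
A pooled multiple-regression setup of the kind described above can be sketched in a few lines with statsmodels. The column names and file name below are hypothetical placeholders, not the variables' actual names in the Social Security Information System extract.

```python
# Minimal sketch of a pooled OLS model for local self-funded welfare spending.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("local_welfare_2012.csv")   # hypothetical extract for 230 authorities

model = smf.ols(
    "self_funded_welfare_spending ~ welfare_budget_ratio + infant_ratio"
    " + self_reliance_ratio + unemployment_rate + C(gov_type)",   # dummy control
    data=df,
).fit()
print(model.summary())
```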

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in A.I. research owing to its wide applicability, and much research has been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. In contrast, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it on the basis of their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning with expertise data may invoke another problem: simultaneous learning with captions of various characteristics may cause the so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning with a vast amount of data, most of this interference is self-purified and has little impact on the learning results; in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' method that performs transfer learning independently for each character.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created and used for the expertise transplantation experiments. The experiments confirmed that the captions generated according to the proposed methodology reflect the perspective of the implanted expertise, whereas the captions generated by learning on general data contain much content irrelevant to the expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be conducted to solve the problem of the lack of expertise data and to improve the performance of image captioning.
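
The overall shape of the 'character-independent transfer learning' idea described above can be sketched as follows. This is a schematic illustration under our own assumptions, not the paper's architecture or code: the tiny model class, the data loaders, and the choice of freezing the visual encoder are stand-ins that only show how an independent copy of a pre-trained captioner could be fine-tuned per expertise ("character") on its own small caption set.

```python
# Schematic per-character fine-tuning of a pre-trained captioning model (PyTorch).
import copy
import torch
from torch import nn, optim

class TinyCaptioner(nn.Module):
    """Stand-in for a CNN-encoder / RNN-decoder image captioning model."""
    def __init__(self, vocab_size=1000, dim=256):
        super().__init__()
        self.encoder = nn.Linear(2048, dim)               # placeholder image encoder
        self.decoder = nn.GRU(dim, dim, batch_first=True) # placeholder caption decoder
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, image_feats, caption_embeds):
        ctx = self.encoder(image_feats).unsqueeze(0)      # image context as initial hidden state
        out, _ = self.decoder(caption_embeds, ctx)
        return self.head(out)                             # token logits per time step

def fine_tune(pretrained: TinyCaptioner, loader, epochs=3, lr=1e-4):
    """Fine-tune an independent copy of the pre-trained model on one caption set."""
    model = copy.deepcopy(pretrained)                     # one copy per character
    for p in model.encoder.parameters():
        p.requires_grad = False                           # keep the general visual encoder fixed
    opt = optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, cap_in, cap_target in loader:          # (image feats, shifted caption tensors)
            logits = model(feats, cap_in)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), cap_target.reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()
    return model

# One independently fine-tuned model per expertise/character, e.g.:
# experts = {"art_therapy": art_loader, "general": general_loader}
# specialized = {name: fine_tune(pretrained_model, dl) for name, dl in experts.items()}
```

Fine-tuning each character on its own copy, rather than mixing all caption styles into one fine-tuning run, is what avoids the inter-observation interference discussed in the abstract; the small expert set corresponds, in the paper's setting, to the roughly 300 art-therapy caption pairs.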