• Title/Summary/Keyword: binary system

Search Results: 1,890

Optimization of Image Tracking Algorithm Used in 4D Radiation Therapy (4차원 방사선 치료시 영상 추적기술의 최적화)

  • Park, Jong-In;Shin, Eun-Hyuk;Han, Young-Yih;Park, Hee-Chul;Lee, Jai-Ki;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.1
    • /
    • pp.8-14
    • /
    • 2012
  • In order to develop a patient respiratory management system including a biofeedback function for 4-dimensional radiation therapy, this study investigated an optimal tracking algorithm for a moving target using an IR (infra-red) camera as well as a commercial camera. A tracking system was developed with LabVIEW 2010. Motion phantom images were acquired using a camera (IR or commercial). After image processing was conducted to convert the acquired images to binary images by applying a threshold value, several edge enhancement methods, such as Sobel, Prewitt, Differentiation, Sigma, Gradient, and Roberts, were applied. The target pattern was defined in the images, and images acquired from the moving target were tracked by matching the pre-defined tracking pattern. During image matching, the coordinates of the tracking point were recorded. In order to assess the performance of the tracking algorithms, a score representing the accuracy of pattern matching was defined. To compare the algorithms objectively, the experiment was repeated 3 times for 5 minutes per algorithm. The average and standard deviation (SD) of the score were automatically calculated and saved in ASCII format. The score for thresholding alone was 706 with an SD of 84. The averages and SDs for the algorithms that combined an edge detection method with thresholding were 794 and 64 for Sobel, 770 and 101 for Differentiation, 754 and 85 for Gradient, 763 and 75 for Prewitt, 777 and 93 for Roberts, and 822 and 62 for Sigma, respectively. According to the score analysis, the most efficient tracking algorithm is the Sigma method. Therefore, 4-dimensional radiation therapy is expected to be more efficient if thresholding and the Sigma edge detection method are used together in target tracking.
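
A minimal sketch of the tracking pipeline described above (binarize by threshold, enhance edges, match a pre-defined pattern, record the coordinate and score), assuming Python with OpenCV. The paper's system is built in LabVIEW and its best performer is the Sigma edge detector, which has no direct OpenCV equivalent, so Sobel stands in; the threshold value and matching method are illustrative assumptions.

```python
import cv2

def track_target(frame_gray, template, thresh=128):
    # Stage 1: convert the acquired image to binary with a fixed threshold.
    _, binary = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    # Stage 2: edge enhancement. Sobel stands in here for the paper's Sigma
    # method, which OpenCV does not provide out of the box.
    edges = cv2.Sobel(binary, cv2.CV_8U, 1, 0, ksize=3)
    # Stage 3: match the pre-defined tracking pattern and record the
    # coordinate of the best match together with its matching score.
    result = cv2.matchTemplate(edges, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)
    return location, score
```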

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot with the ability to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board which performs the image processing. We have studied this self-controlled mobile robot system, equipped with a CCD camera, for a long time. The robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance that the robot is supposed to move is calculated on the basis of the absolute coordinates and the coordinates of the target spot. The image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control has two types of vision algorithms: obstacle avoidance and path planning. The first algorithm operates on cells, parts of the image divided by blob analysis. Image preprocessing is performed to improve the input image, consisting of filtering, edge detection, NOR converting, and thresholding; the major image processing includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame goes through preprocessing (edge detection, converting, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as sectional diagrams as if they were walls, there is no variation in the histogram over them. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and nonuniform regions of the waveforms and define a period of uniform waveform as an obstacle region, as sketched below. The algorithm proves very useful for the robot to move while avoiding obstacles.
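
A rough Python rendering of the histogram-based obstacle test, under the stated assumption that wall-like obstacles produce stretches of the vertical line histogram with almost no variation; the tolerance and minimum run length are assumed parameters, not values from the paper.

```python
import numpy as np

def obstacle_regions(binary_img, step=20, tol=5.0, min_run=3):
    # Vertical (y-direction) line histogram measured every `step` pixels.
    hist = binary_img[:, ::step].astype(float).sum(axis=0)
    flat = np.abs(np.diff(hist)) < tol   # little variation => uniform region
    regions, start = [], None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i
        elif not is_flat and start is not None:
            if i - start >= min_run:     # long uniform run => obstacle
                regions.append((start * step, i * step))
            start = None
    if start is not None and len(flat) - start >= min_run:
        regions.append((start * step, len(flat) * step))
    return regions                       # pixel-column spans of obstacles
```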

Cloning and Transcription Analysis of Sporulation Gene (spo5) in Schizosaccharomyces pombe (Schizosaccharomyces pombe 포자형성 유전자(spo5)의 Cloning 및 전사조절)

  • 김동주
    • The Korean Journal of Food And Nutrition
    • /
    • v.15 no.2
    • /
    • pp.112-118
    • /
    • 2002
  • Sporulation in the fission yeast Schizosaccharomyces pombe has been regarded as an important model of cellular development and differentiation. S. pombe cells proliferate by mitosis and binary fission on growth medium. Deprivation of nutrients, especially nitrogen sources, causes the cessation of mitosis and initiates sexual reproduction by mating between two sexually compatible cell types. Meiosis then follows in a diploid cell in the absence of a nitrogen source. A DNA fragment complementing the mutations of the sporulation gene was isolated from the S. pombe gene library constructed in the vector pDB248' and designated pDB(spo5)1. We further analyzed six recombinant plasmids, pDB(spo5)2, pDB(spo5)3, pDB(spo5)4, pDB(spo5)5, pDB(spo5)6, and pDB(spo5)7, and found that each of these plasmids is able to rescue the spo5-2, spo5-3, spo5-4, spo5-5, spo5-6, and spo5-7 mutations, respectively. Mapping of the integrated plasmid into the homologous site of the S. pombe chromosomes demonstrated that pDB(spo5)1 and pDB(spo5)R1 contained the spo5 gene. Transcripts of the spo5 gene were analyzed by Northern hybridization. Two transcripts of 3.2 kb and 2.5 kb were detected with a 5 kb HindIII fragment containing a part of the spo5 gene as a probe. The small mRNA (2.5 kb) appeared only when a wild-type strain was cultured in the absence of a nitrogen source, a condition in which the large mRNA (3.2 kb) was produced constitutively. Appearance of the 2.5 kb spo5 mRNA depends upon the function of the mei1, mei2 and mei3 genes.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVMs. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance as remarkable as DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when multiple classifiers of the ensemble are highly correlated, resulting in a multicollinearity problem; they have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers. On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation causes the multicollinearity problem that degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that the stable learning algorithms NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and proposes that optimization of the ensemble is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction show that CO-NN achieves stable performance enhancement of NN ensembles by choosing classifiers with their correlations in mind, as sketched below. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
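
A compact sketch of the CO-NN selection loop: a binary GA chromosome picks a sub-ensemble, and the fitness rewards majority-vote accuracy while penalizing selections whose member outputs are multicollinear beyond a VIF cap. The paper ran its GA in Microsoft Excel with the Evolver package; this numpy version, its operator settings, and the VIF cap of 10 are illustrative assumptions.

```python
import numpy as np

def max_vif(preds):
    """Largest variance inflation factor among classifier output columns."""
    vifs = []
    for i in range(preds.shape[1]):
        y = preds[:, i]
        X = np.column_stack([np.ones(len(y)), np.delete(preds, i, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ss_res = ((y - X @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum() + 1e-12
        vifs.append(1.0 / max(1.0 - ss_res / ss_tot, 1e-12))
    return max(vifs)

def fitness(mask, preds, y_true, vif_cap=10.0):
    # Majority vote of the selected sub-ensemble; the VIF constraint keeps
    # the chosen classifiers diverse by rejecting correlated selections.
    if mask.sum() < 2:
        return 0.0
    sel = preds[:, mask.astype(bool)]
    acc = ((sel.mean(axis=1) >= 0.5).astype(int) == y_true).mean()
    return acc if max_vif(sel) <= vif_cap else acc - 1.0

def ga_select(preds, y_true, pop=30, gens=50, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = preds.shape[1]                         # one bit per classifier
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, preds, y_true) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < p_mut] ^= 1   # bit-flip mutation
            children.append(child)
        population = np.array(children)
    scores = np.array([fitness(m, preds, y_true) for m in population])
    return population[scores.argmax()]          # best sub-ensemble mask
```

Here `preds` would be an (observations x classifiers) matrix of 0/1 outputs from the trained NN ensemble members.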

A Study on the Perception Changes of Physicians toward Duty to Inform - Focusing on the Influence of the Revised Medical Law - (설명의무에 대한 의사의 인식 변화 조사 연구 -의료법 개정의 영향을 중심으로-)

  • Kim, Rosa
    • The Korean Society of Law and Medicine
    • /
    • v.19 no.2
    • /
    • pp.235-261
    • /
    • 2018
  • The Medical Law stipulates regulations about the physician's duty to inform in order to contribute to patients' self-determination. This law was most recently revised on December 20, 2016, and came into effect on June 21, 2017. There has been much controversy about this, and it has been questioned whether it will be effective in making physicians comply with the duty to inform. Therefore, this study investigated physicians' perceptions of whether they observed the duty to inform and their legal judgment about that duty, and analyzed how the revision of the medical law may have affected the legal cognition of the physician's duty to inform. The study was conducted through an online questionnaire survey involving 109 physicians over 2 weeks from March 29 to April 12, 2018, and 108 of the collected responses were used for analysis. The questionnaire was developed by revising and supplementing previous research (Lee, 2004). It consisted of 41 items, including 26 items related to the experience of and legal judgment about the duty to inform, 6 items related to awareness of the revised medical law, and 9 items on general characteristics. The data were analyzed using the SAS 9.4 program; descriptive statistics, Chi-square tests, Fisher's exact tests, and binary logistic regression were performed. The results are as follows. • Out of eight situations, the median number of situations in which respondents did not fulfill the duty to inform was 5 (IQR, 4-6). In addition, 12 respondents (11%) answered that they did not fulfill the duty to inform in any of the eight cases, while only one (1%) responded that he/she fulfilled it in all cases. • The median legal judgment score on the duty to inform was 8 out of 13 (IQR, 7-9), with scores ranging from a minimum of 4 (4 respondents) to a maximum of 11 (3 respondents). • More than half of the respondents (n=56, 52%) were unaware of the revision of the medical law, 27 (25%) were aware of the fact that the medical law had been revised, 20 (18%) had a rough knowledge of the contents of the law, and only 5 (5%) said they knew the contents of the law in detail. The level of awareness of the revised medical law differed significantly according to respondents' sex (p<.049), age (p<.0001), career (p<.0001), working type (p<.024), and department (p<.049). • There was no statistically significant relationship between the level of awareness of the revised medical law and the level of legal judgment on the duty to inform. These results suggest that efforts to improve the implementation and cognition of the physician's duty to inform are needed, and that it is difficult to expect a direct positive effect from the legal regulations per se. Considering the distinct characteristics of medical institutions and the hierarchical organizational culture of physicians, it is necessary to develop a credible guideline on the duty to inform within the medical system, and to strengthen the education of physicians about their duty to inform and its purpose.

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.153-169
    • /
    • 2011
  • Today, the Internet environment is considered absolutely essential for establishing corporate marketing strategy. Companies have promoted their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, which are based on customers' membership data. Since companies can use these membership data to enhance their marketing efforts through various data analyses, appropriate website membership management may play an important role in increasing the effectiveness of on-line marketing campaigns. Despite the growing interest in proper membership management, however, there have been difficulties in identifying inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers with malicious intent are able to create duplicate IDs by using others' names illegally or by faking login information when joining. Since duplicate members are likely to intercept the gifts and points that should be sent to the customers who deserve them, this can result in ineffective marketing efforts. Considering that the number of website members and the related marketing costs are increasing significantly, it is necessary for companies to find efficient ways to screen and exclude such duplicate members. With this motivation, this study proposes an approach for managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors called nodes, which are tied by one or more specific types of interdependency. Social networks represent the relationships between the nodes and show the direction and strength of each relationship. Various analytical techniques have been proposed based on these social relationships, such as centrality analysis, structural hole analysis, structural equivalence analysis, and so on. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful information in the group connection. We propose a method for managing duplicate memberships using component analysis; the procedure, sketched in code below, is as follows. The first step is to identify the membership attributes that will be used for analyzing relationship patterns among memberships, such as ID, telephone number, address, posting time, and IP address. The second step is to compose social matrices based on the identified membership attributes and aggregate the values of each social matrix into a combined social matrix. The combined social matrix represents how strongly pairs of nodes are connected; when a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is transformed into a binary matrix with cell values of '0' or '1' using a relationship criterion that determines whether the membership is duplicate or not. The third step is to conduct a component analysis of the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to the traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance on-line marketing performance by efficiently and effectively identifying duplicate memberships of websites.
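
The four-step procedure can be sketched with networkx, treating the combined social matrix as a graph whose connected components are duplicate-membership groups. The attribute names, weights, and relationship criterion below are illustrative assumptions, and the O(n²) pairwise comparison is written for clarity rather than scale.

```python
import networkx as nx
import pandas as pd

def duplicate_groups(members: pd.DataFrame, attrs, weights, cutoff=2.0):
    # Step 1: `attrs` are the membership attributes compared (e.g. phone,
    # address, IP address); `weights` are their contributions.
    g = nx.Graph()
    g.add_nodes_from(members.index)
    ids = list(members.index)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Step 2: combined social matrix = weighted sum of attribute
            # matches, binarized by the relationship criterion `cutoff`.
            strength = sum(w for attr, w in zip(attrs, weights)
                           if members.at[a, attr] == members.at[b, attr])
            if strength >= cutoff:
                g.add_edge(a, b)            # '1' => likely duplicates
    # Step 3: component analysis separates components and isolated nodes.
    groups = [c for c in nx.connected_components(g) if len(c) > 1]
    # Step 4: each component collapses to one real membership.
    real = len(ids) - sum(len(c) - 1 for c in groups)
    return groups, real / len(ids)          # duplicate groups, reliability
```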

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers in order to gain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection is to select critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; the solutions, coded as strings, are evaluated by the fitness function. The proposed model consists of two phases, sketched in code below: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset that is used as input data of the bagging model. The chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150, with crossover and mutation rates of 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: an SVM model is trained on the training data using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms, comprising bankruptcy (916 cases) and non-bankruptcy (916 cases) filings. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. We separated the whole data into three subsets: training, test, and validation sets. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the others.
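
A condensed sketch of the two-phase model: a GA over binary instance-selection chromosomes (population 100, 150 generations, crossover 0.7, mutation 0.1, following the abstract) whose fitness is the held-out accuracy of an SVM trained on the selected instances, followed by a bagged SVM with majority voting. The scikit-learn pieces are stand-ins for illustration, not the authors' code, and the paper's GA settings are kept even though they are slow at full scale.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

def instance_fitness(mask, X_tr, y_tr, X_te, y_te):
    # Accuracy over the held-out set of an SVM trained on the selected
    # instances; held-out accuracy is used to avoid overfitting.
    if mask.sum() < 10:
        return 0.0
    keep = mask.astype(bool)
    return SVC().fit(X_tr[keep], y_tr[keep]).score(X_te, y_te)

def improved_bagging(X_tr, y_tr, X_te, y_te,
                     pop=100, gens=150, p_cx=0.7, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_tr)                             # one bit per training instance
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([instance_fitness(m, X_tr, y_tr, X_te, y_te)
                           for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = a.copy()
            if rng.random() < p_cx:           # one-point crossover
                cut = rng.integers(1, n)
                child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < p_mut] ^= 1  # bit-flip mutation
            children.append(child)
        population = np.array(children)
    best = max(population,
               key=lambda m: instance_fitness(m, X_tr, y_tr, X_te, y_te))
    # Phase 2: bagging of SVM base classifiers on the selected subset;
    # BaggingClassifier combines its members by majority voting.
    keep = best.astype(bool)
    return BaggingClassifier(SVC(), n_estimators=10).fit(X_tr[keep],
                                                         y_tr[keep])
```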

Mechanical Alloying and the Consolidation Behavior of Nanocrystalline $L1_2$ $Al_3Hf$ Intermetallic Compounds (Cu 첨가에 따른 nanocrystalline ${L1_2}{Al_3}Hf$ 금속간 화합물의 기계적 합금화 거동 및 진공열간 압축성형거동)

  • Kim, Jae-Il;O, Yeong-Min;Kim, Seon-Jin
    • Korean Journal of Materials Research
    • /
    • v.11 no.8
    • /
    • pp.629-635
    • /
    • 2001
  • To improve the ductility of $Al_3Hf$ intermetallics, which are potential high-temperature structural materials, the mechanical alloying behavior, the effect of Cu addition on the $L1_2$ phase formation, and the behavior of vacuum hot-pressed consolidation were investigated. During mechanical alloying by SPEX mill, $L1_2$ $Al_3Hf$ intermetallics with a grain size of 7~8 nm formed after 6 hours of milling in the Al-25at.%Hf system. The $L1_2$ phase of $Al_3Hf$ intermetallics with the addition of 12.5at.%Cu, similar to that of the binary Al-25at.%Hf, was formed, but the milling time necessary for the formation of the $L1_2$ phase was delayed from 6 hours to 10 hours. The lattice parameter of the ternary $L1_2$ $(Al+Cu)_3Hf$ intermetallics decreased with increasing Cu content. The onset temperature of the $L1_2$ to $D0_{23}$ transformation in $Al_3Hf$ intermetallics was around $380^{\circ}C$, and the completion temperature varied from $480^{\circ}C$ to $550^{\circ}C$ depending on the annealing time. The onset temperature of the $L1_2$ to $D0_{23}$ phase transformation in $(Al+Cu)_3Hf$ intermetallics increased with the amount of Cu, and the highest onset temperature of $700^{\circ}C$ was achieved with a Cu addition of 10at.%. The relative density increased from 89% to 90% with a Cu addition of 10at.% in $Al_3Hf$ intermetallics hot-pressed in vacuum under 750 MPa at $400^{\circ}C$ for 3 hours. A relative density of 92.5% was achieved without phase transformation or grain growth as the consolidation temperature increased from $400^{\circ}C$ to $500^{\circ}C$ in $(Al+Cu)_3Hf$ intermetallics hot-pressed in vacuum under 750 MPa for 3 hours.


Development of a Failure Probability Model based on Operation Data of Thermal Piping Network in District Heating System (지역난방 열배관망 운영데이터 기반의 파손확률 모델 개발)

  • Kim, Hyoung Seok;Kim, Gye Beom;Kim, Lae Hyun
    • Korean Chemical Engineering Research
    • /
    • v.55 no.3
    • /
    • pp.322-331
    • /
    • 2017
  • District heating was first introduced in Korea in 1985. As the service life of the underground thermal piping network has now exceeded 30 years, maintenance of the underground thermal pipes has become an important issue. A variety of complex technologies are required for the periodic inspection and operation management involved in maintaining the aged thermal piping network. In particular, a model is needed that can support field decision making in deriving the optimal maintenance and replacement point from an economic viewpoint. In this study, the analysis was carried out based on the repair history and accident data from the operation of the thermal pipe networks of five districts of the Korea District Heating Corporation. A failure probability model was developed using the statistical techniques of qualitative analysis and binomial logistic regression analysis. Qualitative analysis of the maintenance history and accident data showed that the most important causes of pipeline damage were construction error, pipe corrosion, and bad material, together accounting for about 82%. In the statistical model analysis, setting the classification cutoff to 0.25 improved the accuracy of classifying thermal pipes into breakage and non-breakage to 73.5%. In establishing the failure probability model, the fitness of the model was verified through the Hosmer and Lemeshow test, independence tests of the explanatory variables, and the Chi-Square test of the model. According to the analysis of the risk of thermal pipe network damage, the highest failure probability was found for thermal pipelines constructed by construction company F, in reducer pipes of less than 250 mm, more than 10 years old, on motorways in the Seoul area, in winter. The results of this study can be used to prioritize maintenance, preventive inspection, and replacement of thermal piping systems. In addition, they make it possible to reduce the frequency of thermal pipeline damage and to manage the thermal piping network more proactively by establishing accident prevention plans, such as inspection and maintenance, in advance.
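
The core of the statistical model, a binomial logistic regression with the classification cutoff lowered to 0.25, might look as follows with statsmodels; the file name and covariate columns are hypothetical placeholders for the operation data described above.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical layout: one row per pipe segment with a binary damage label.
df = pd.read_csv("pipe_history.csv")
X = sm.add_constant(df[["diameter_mm", "age_years", "winter"]])
y = df["failed"]                     # 1 = damage record, 0 = no damage

model = sm.Logit(y, X).fit()
print(model.summary())               # coefficients, Wald tests, etc.

# The paper reports that moving the cutoff from the usual 0.5 down to
# 0.25 raised breakage/non-breakage classification accuracy to 73.5%.
pred = (model.predict(X) >= 0.25).astype(int)
print("accuracy:", (pred == y).mean())
```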

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through changes in credit ratings published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investors Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many drawbacks, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classifications such as credit ratings; researchers have therefore tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature, but only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of them to a real-world case of credit rating in Korea. Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs), as sketched below. As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
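
Several of the compared MSVM strategies map directly onto scikit-learn meta-estimators wrapped around a binary SVC, as in this sketch on synthetic stand-in data; DAGSVM and the Weston-Watkins and Crammer-Singer formulations would require custom implementations.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.svm import SVC

# Synthetic stand-in for the 1,295-firm financial-ratio data set.
X, y = make_classification(n_samples=1295, n_features=10, n_informative=8,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

strategies = {
    "one-against-one": OneVsOneClassifier(SVC()),
    "one-against-all": OneVsRestClassifier(SVC()),
    "ECOC": OutputCodeClassifier(SVC(), code_size=2, random_state=0),
}
for name, clf in strategies.items():
    print(name, clf.fit(X_train, y_train).score(X_test, y_test))
```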