• Title/Summary/Keyword: 3-D Image

Search Results: 5,124

Assessment of Metabolic Impairment in Alzheimer's Disease with [18F]FDG PET: Validity and Role of Simplified Tissue Radioactivity Ratio Analysis (알쯔하이머병에서 양전자방출단층촬영을 이용한 국소뇌포도당대사의 변화에 관한 연구)

  • Kim, Sang-Eun;Na, Duk-Lyul;Lee, Jeong-Rim;Choi, Yong;Lee, Kyung-Han;Choe, Yearn-Seong;Kim, Doh-Kwan;Kim, Byung-Tae;Lee, Kwang-Ho;Kim, Seung-Tai P.
    • The Korean Journal of Nuclear Medicine
    • /
    • v.30 no.3
    • /
    • pp.299-314
    • /
    • 1996
  • The purpose of the present study was to validate the use of tissue radioactivity ratios, instead of regional metabolic rates, for assessing regional metabolic changes in Alzheimer's disease (AD) with [18F]FDG PET, and to examine the correlation of the ratio indices with the severity of cognitive impairment in AD. Thirty-seven AD patients (age 68±9 yrs, mean±s.d.; 36 probable and 1 definite AD), 28 patients with dementia of non-Alzheimer type (age 66±7 yrs), and 17 healthy controls (age 66±4 yrs) underwent [18F]FDG PET imaging. Two simplified radioactivity ratio indices were calculated from the 37-66 min images: the region-to-cerebellar radioactivity ratio (RCR) and a composite radioactivity ratio (the ratio of radioactivity in the most typically affected regions to that in the least typically affected regions; CRR). The local cerebral metabolic rate for glucose (LCMRglu) was also measured using a three-compartment, five-parameter tracer kinetic model. The ratio indices were significantly lower in AD patients than in controls (RCR in temporoparietal cortex, 0.949±0.136 vs. 1.238±0.129, p=0.0004; RCR in frontal cortex, 1.027±0.128 vs. 1.361±0.151, p<0.0001; CRR, 0.886±0.096 vs. 1.032±0.042, p=0.0024). On the RCR analysis, 86% of AD patients showed bilateral temporoparietal hypometabolism with or without frontal involvement; hypometabolism was unilateral in 11% of the patients. When bilateral temporoparietal hypometabolism was considered suggestive of AD, the sensitivity and specificity of the RCR analysis for the differential diagnosis of AD were 86% and 73%, respectively. The RCR correlated significantly with the macroparameter K [K1·k3/(k2+k3)] (r=0.775, p<0.0001) and with LCMRglu (r=0.633, p=0.0002) measured using the kinetic model.
In patients with AD, both the average RCR of cortical association areas and the CRR were correlated with the Mini-Mental State Examination (r=0.565, p=0.0145; r=0.642, p=0.0031, respectively), the Clinical Dementia Rating (r=-0.576, p=0.0124; r=-0.591, p=0.0077), and the total score of the Mattis Dementia Rating Scale (r=0.574, p=0.0648; r=0.737, p=0.0096). There were also significant correlations between memory and language impairments and the corresponding regional RCRs. These results suggest that the [18F]FDG PET ratio indices, RCR and CRR, reflect global and regional metabolic rates and correlate with the severity of cognitive impairment in AD. The simplified ratio analysis may be clinically useful for the differential diagnosis and serial monitoring of the disease.
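The ratio indices above reduce to simple arithmetic on regional mean counts. A minimal sketch, using made-up count values and region names (not data from the study):

```python
import numpy as np

# Hypothetical mean counts per region from a late (37-66 min) FDG PET frame.
# All region names and values here are illustrative assumptions.
regions = {
    "temporoparietal": 5100.0,
    "frontal": 5400.0,
    "cerebellum": 5300.0,
    "occipital": 6100.0,   # stands in for a least typically affected region
}

def region_to_cerebellar_ratio(counts, region):
    """RCR: mean regional radioactivity over mean cerebellar radioactivity."""
    return counts[region] / counts["cerebellum"]

def composite_ratio(counts, affected, spared):
    """CRR: radioactivity in typically affected regions over spared regions."""
    num = np.mean([counts[r] for r in affected])
    den = np.mean([counts[r] for r in spared])
    return num / den

rcr_tp = region_to_cerebellar_ratio(regions, "temporoparietal")
crr = composite_ratio(regions, ["temporoparietal", "frontal"], ["occipital"])
print(round(rcr_tp, 3), round(crr, 3))
```

Ratios below roughly 1, as in this toy example, are the kind of values the study reports for AD patients; the actual region definitions come from the paper's ROI analysis.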


A Study on Dose Response of MAGAT (Methacrylic Acid, Gelatin Gel and THPC) Polymer Gel Dosimeter Using X-ray CT Scanner (X-ray CT Scanner를 이용한 MAGAT (Methacrylic Acid, Gelatin Gel and THPC) 중합체 겔 선량계의 선량 반응성 연구)

  • Jung, Jae-Yong;Lee, Choong-Il;Min, Jeong-Hwan;Kim, Yon-Lae;Lee, Seong-Yong;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.1
    • /
    • pp.1-8
    • /
    • 2010
  • In this study, we evaluated the dose response of MAGAT (methacrylic acid, gelatin gel and THPC) normoxic polymer gel dosimeters using an X-ray CT scanner. To this end, we determined a suitable gel composition ratio and X-ray scan parameters. MAGAT gel dosimeters were manufactured with various concentrations of MAA (methacrylic acid) and gelatin and irradiated up to 20 Gy. Twenty CT images of the irradiated gel dosimeters were obtained on a Philips Brilliance Big Bore CT scanner with various scan parameters. These CT images were used to determine the N_CT-dose response, dose sensitivity, and dose resolution. As the amounts of MAA and gelatin increased, the slope and intercept of the N_CT-dose response curve increased for each MAGAT gel dosimeter. The dose sensitivity ranged from 0.38±0.08 to 0.859±0.1 and increased as the amount of MAA increased or the amount of gelatin decreased; however, the effect of the gelatin concentration was very small compared with that of MAA. The dose resolution (D_Δ^95%) varied considerably, from 2.6 to 6 Gy, depending on the dose sensitivity and CT image noise. The slope and dose sensitivity of the dose response curve were almost identical under variations of tube voltage, tube current, and slice thickness, but the noise (standard deviation of the average CT number) decreased as the tube voltage, tube current, and slice thickness increased. The optimal CT-based MAGAT polymer gel dosimeter was obtained with the maximum tube voltage, tube current, and slice thickness (as commonly used in clinical practice) and a composition ratio of 9% MAA, 8% gelatin, and 83% water. This study established a suitable composition ratio and scan parameters by evaluating the dose response of the MAGAT normoxic polymer gel dosimeter with a CT scanner.
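The dose sensitivity and dose resolution discussed above follow from a linear N_CT-dose fit. A hedged sketch with fabricated calibration numbers; the 95% dose-resolution expression (1.96·√2·σ/slope) follows a commonly used definition for gel dosimetry and is an assumption here, not a formula quoted from the paper:

```python
import numpy as np

# Illustrative (made-up) calibration data: dose in Gy vs. mean CT number change.
dose = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
n_ct = np.array([0.1, 1.0, 2.4, 4.9, 7.3, 9.8])   # Hounsfield units
sigma_ct = 0.5                                     # assumed CT image noise (std)

# Dose sensitivity = slope of the linear N_CT-dose response.
slope, intercept = np.polyfit(dose, n_ct, 1)

# Dose resolution at 95% confidence: the minimum dose difference
# distinguishable given the image noise mapped through the slope.
k95 = 1.96
dose_resolution = k95 * np.sqrt(2) * sigma_ct / slope
print(round(slope, 3), round(dose_resolution, 2))
```

With these invented numbers the resolution lands near the low end of the 2.6-6 Gy range the study reports; lower noise or a steeper slope improves (reduces) it.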

Quantification of Myocardial Blood Flow Using Dynamic N-13 Ammonia PET and Factor Analysis (N-13 암모니아 PET 동적영상과 인자분석을 이용한 심근 혈류량 정량화)

  • Choi, Yong;Kim, Joon-Young;Im, Ki-Chun;Kim, Jong-Ho;Woo, Sang-Keun;Lee, Kyung-Han;Kim, Sang-Eun;Choe, Yearn-Seong;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.33 no.3
    • /
    • pp.316-326
    • /
    • 1999
  • Purpose: We evaluated the feasibility of extracting pure left ventricular blood pool and myocardial time-activity curves (TACs) and of generating factor images from human dynamic N-13 ammonia PET using factor analysis. The myocardial blood flow (MBF) estimates obtained with factor analysis were compared with those obtained with the user-drawn region-of-interest (ROI) method. Materials and Methods: Stress and rest N-13 ammonia cardiac PET images were acquired for 23 min in 5 patients with coronary artery disease using a GE Advance tomograph. Factor analysis generated physiological TACs and factor images using the normalized TACs from each dixel (dynamic pixel). Four steps were involved in this algorithm: (a) data preprocessing; (b) principal component analysis; (c) oblique rotation with positivity constraints; (d) factor image computation. Areas under the curves and MBF estimated using the two-compartment N-13 ammonia model were used to validate the accuracy of the physiological TACs generated by factor analysis. The MBF estimated by factor analysis was compared to the values estimated using the ROI method. Results: MBF values obtained by factor analysis were linearly correlated with MBF obtained by the ROI method (slope = 0.84, r = 0.91). Left ventricular blood pool TACs obtained by the two methods agreed well (area under curve ratios: 1.02 (0-1 min), 0.98 (0-2 min), 0.86 (1-2 min)). Conclusion: The results of this study demonstrate that MBF can be measured accurately and noninvasively with dynamic N-13 ammonia PET imaging and factor analysis. This method is simple and accurate, and can measure MBF without blood sampling, ROI definition, or spillover correction.
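The four-step algorithm can be illustrated up to its PCA stage (steps a-b) with synthetic dixel TACs; the curve shapes, frame count, and mixing weights below are invented for illustration, and the oblique rotation with positivity constraints (steps c-d) is not implemented in this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy physiological time-activity curves (TACs) over 23 frames: a blood-pool-like
# curve that peaks early and a myocardium-like curve that rises slowly.
t = np.linspace(0, 23, 23)
blood = t * np.exp(-t / 2.0)
myo = 1.0 - np.exp(-t / 6.0)

# Each "dixel" (dynamic pixel) is a positive mixture of the two factor curves.
weights = rng.uniform(0, 1, size=(500, 2))
dixels = weights @ np.vstack([blood, myo])       # (500 dixels, 23 frames)

# Step (a): normalize each dixel TAC to unit sum; step (b): PCA via SVD on the
# mean-centered normalized TACs to find how many factors explain the data.
norm = dixels / dixels.sum(axis=1, keepdims=True)
norm -= norm.mean(axis=0)
_, s, _ = np.linalg.svd(norm, full_matrices=False)
explained = s[:2] ** 2 / np.sum(s ** 2)
print(explained.sum())   # two components capture essentially all variance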

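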

Comparison of Helical TomoTherapy with Linear Accelerator Base Intensity-modulated Radiotherapy for Head & Neck Cases (두경부암 환자에 대한 선량체적 히스토그램에 따른 토모치료외 선형가속기기반 세기변조방사선치료의 정량적 비교)

  • Kim, Dong-Wook;Yoon, Myong-Geun;Park, Sung-Yong;Lee, Se-Byeong;Shin, Dong-Ho;Lee, Doo-Hyeon;Kwak, Jung-Won;Park, So-Ah;Lim, Young-Kyung;Kim, Jin-Sung;Shin, Jung-Wook;Cho, Kwan-Ho
    • Progress in Medical Physics
    • /
    • v.19 no.2
    • /
    • pp.89-94
    • /
    • 2008
  • TomoTherapy has the merit of treating cancer with intensity-modulated radiation, combining precise 3-D imaging from computed tomography (CT scanning) with highly targeted, rotating radiation beamlets. In this paper, we compared the dose distributions of TomoTherapy and linear accelerator-based intensity-modulated radiotherapy (IMRT) for 10 head-and-neck patients, using the TomoTherapy unit newly installed and operated at the National Cancer Center since September 2006. Furthermore, we estimated how the dose homogeneity and normal tissue complication probability (NTCP) change with target motion. Inverse planning was carried out using the CadPlan planning system (CadPlan R.6.4.7, Varian Medical Systems Inc., 3100 Hansen Way, Palo Alto, CA 94304-1129, USA). For each patient, an inverse IMRT plan was also made using the TomoTherapy Hi-Art System (Hi-Art2_2_4 2.2.4.15, TomoTherapy Incorporated, 1240 Deming Way, Madison, WI 53717-1954, USA) with the same targets and optimization goals. All TomoTherapy plans compared favorably with the IMRT plans in sparing the organs at risk while keeping an equivalent target dose homogeneity. Our results suggest that TomoTherapy can further reduce the NTCP while maintaining a similar target dose homogeneity.


An Assessment of Post-Injection Transmission Measurement for Attenuation Correction With Rotating Pin Sources in Positron Emission Tomography (양전자방출단층촬영(PET)에서 회전 핀선원과 투과 및 방출 동시 영상 방법을 이용한 감쇠보정 방법 특성에 관한 고찰)

  • Lee, J.R.;Choi, Y.;Lee, K.H.;Kim, S.E.;Chi, D.Y.;Shin, S.A.;Kim, B.T.
    • The Korean Journal of Nuclear Medicine
    • /
    • v.29 no.4
    • /
    • pp.533-540
    • /
    • 1995
  • Attenuation correction is important for producing quantitative positron emission tomography (PET) images. Conventionally, photon attenuation effects are corrected using transmission measurements performed before tracer administration. This pre-injection transmission approach may require a time delay between the transmission and emission scans for tracer studies requiring a long uptake period (about 45 minutes for an F-18 deoxyglucose study). The time delay limits patient throughput and increases the likelihood of patient motion. A technique for performing simultaneous transmission and emission scans (T+E method) after tracer injection has been validated. The T+E method subtracts the emission counts contaminating the transmission measurement to produce accurate attenuation correction coefficients. This method was evaluated in experiments using a cylindrical phantom filled with background water (5750 cc) containing 0.4 μCi/cc of F-18 fluoride ion and one insert cylinder (276 cc) containing 4.3 μCi/cc. A GE Advance PET scanner and rotating Ge-68 pin sources for transmission scanning were used for this investigation. Post-injection transmission scans and emission scans were performed alternately over time. The error in emission images corrected using the post-injection transmission scan, relative to emission images corrected using the pre-injection transmission scan, was 2.6% at a concentration of 1.0 μCi/cc. No obvious differences in image quality or noise were apparent between the two images. Attenuation correction can thus be accomplished with post-injection transmission measurements using rotating pin sources; this method can significantly shorten the time between transmission and emission scans, thereby reducing the likelihood of patient motion and increasing scanning throughput in PET.
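The core of the T+E idea is arithmetic on sinogram counts: subtract the estimated emission contamination before forming attenuation correction factors. A minimal sketch with illustrative numbers (not values from the phantom experiment):

```python
import numpy as np

# During a post-injection transmission scan, the measured counts contain both
# transmission counts (from the rotating Ge-68 pins) and contaminating emission
# counts from the tracer already in the patient. Values below are invented.
blank = np.full(8, 10000.0)          # blank (no patient) transmission scan
true_transmission = np.array([4000, 3500, 3000, 2800, 3000, 3500, 4000, 4200.0])
emission_contamination = np.array([300, 350, 400, 420, 400, 350, 300, 280.0])

measured = true_transmission + emission_contamination   # what the scanner sees

# T+E correction: subtract the (simultaneously estimated) emission counts
# before forming attenuation correction factors ACF = blank / transmission.
corrected = measured - emission_contamination
acf = blank / corrected
acf_naive = blank / measured   # ignoring contamination underestimates the ACF

print(np.all(acf > acf_naive))
```

The comparison at the end shows why the subtraction matters: leaving the emission counts in makes the object appear less attenuating than it is, biasing the corrected emission image.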


Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, which assigns one label out of two classes; multi-class classification, which assigns one label out of several classes; and multi-label classification, which assigns multiple labels out of several classes. Multi-label classification in particular requires a training method different from binary and multi-class classification because instances carry multiple labels. Moreover, since the number of labels to be predicted grows with the number of labels and classes, prediction becomes harder and performance improvement becomes difficult. To overcome these limitations, research on label embedding has been actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they have difficulty capturing non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels.
Recently, attempts to improve performance by applying deep learning to label embedding have increased. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers large information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding a layer's input to its output, it prevents gradients from vanishing during backpropagation, enabling efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from a paper abstract and evaluates the multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods.
This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and across different numbers of latent label space dimensions.
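The encode-compress-decode flow with skip connections can be sketched as a forward pass. This is a minimal, untrained numpy illustration of the idea, not the paper's model: the layer widths, weight initialization, and the choice to place the skip connection only in the encoder are assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

n_labels, n_latent = 64, 8   # high-dim label space -> low-dim latent label space

# Untrained illustrative weights; a real model would learn these by minimizing
# reconstruction loss over multi-hot label vectors.
W_enc1 = rng.normal(0, 0.1, (n_labels, 64))
W_enc2 = rng.normal(0, 0.1, (64, n_latent))
W_dec1 = rng.normal(0, 0.1, (n_latent, 64))
W_dec2 = rng.normal(0, 0.1, (64, n_labels))

def encode(y):
    h = relu(y @ W_enc1)
    h = h + y                # skip connection: add the layer input to its output
    return h @ W_enc2

def decode(z):
    h = relu(z @ W_dec1)
    return h @ W_dec2        # a symmetric decoder skip could be added here too

y = (rng.random((5, n_labels)) < 0.1).astype(float)   # multi-hot label vectors
z = encode(y)                 # compressed latent labels
y_hat = decode(z)             # restored to the original label space
print(z.shape, y_hat.shape)
```

The skip addition requires the hidden width to match the input width (64 here); during backpropagation the identity path passes gradients straight through, which is the mechanism the abstract credits for avoiding vanishing gradients in deeper embeddings.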

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.387-397
    • /
    • 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and is highly correlated with yield, so it is widely used in crop breeding and cultivation research. Growth characteristics such as plant height have generally been measured directly by humans using a ruler, but with recent developments in sensing and image analysis technology, research is under way to digitize growth measurement so that crop growth can be investigated efficiently. In this study, the canopy height of rice grown under various nitrogen fertilization levels was measured using a laser scanner capable of precise measurement over a wide range, and the results were compared with the actual plant height. Comparing the point cloud data collected with the laser scanner against the actual plant height confirmed that the estimated height based on the average height of the top 1% of points showed the highest correlation with the actual plant height (R² = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured with the laser scanner to the actual plant height. The rice growth curves drawn by combining the actual and estimated plant heights collected under various nitrogen fertilization conditions and growth periods show that laser scanner-based canopy height measurement can be used effectively to assess the plant height and growth of rice. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, and similar tasks, and can serve as a technology for digitizing conventional crop growth assessment methods.
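The two-step estimate described above (top-1% point height, then a linear calibration to actual plant height) can be sketched directly; the point cloud and calibration pairs below are simulated stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated point-cloud heights (cm) for one rice plot: points scattered from
# ground level up to the true canopy top. Values are illustrative only.
true_height = 85.0
points_z = rng.uniform(0, true_height, size=20000)

# Canopy height estimate: mean height of the top 1% of points.
k = max(1, int(0.01 * points_z.size))
top = np.sort(points_z)[-k:]
estimated = top.mean()

# Calibration: a linear regression fitted on paired (scanner, ruler)
# measurements converts scanner canopy height to actual plant height.
est = np.array([60.0, 70.0, 80.0, 90.0, 100.0])     # hypothetical pairs
actual = np.array([63.0, 72.5, 83.0, 92.0, 103.5])
slope, intercept = np.polyfit(est, actual, 1)
predicted = slope * estimated + intercept
print(estimated < true_height)
```

Averaging the top 1% rather than taking the single highest point makes the estimate robust to stray points above the canopy, at the cost of a slight downward bias that the calibration regression absorbs.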

Lung Uptake of Tc-99m Sestamibi during Routine Gated Exercise SPECT Imaging: Comparison with Left Ventricular Ejection Fraction and Severity of Perfusion Defect (일상적인 운동 부하 게이트 심근 관류 SPECT에서 $^{99m}Tc-sestamibi$ 폐섭취 : 좌심실 구혈률과 관류 결손 정도와의 비교)

  • Jeong, Shin-Young;Lee, Jae-Tae;Bae, Jin-Ho;Ahn, Byeong-Cheol;Lee, Kyu-Bo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.83-93
    • /
    • 2003
  • Background: The lung-to-heart uptake ratio (LHR) in Tl-201 chloride myocardial perfusion scans is believed to be a reliable marker of left ventricular (LV) dysfunction, but the clinical value of the LHR is controversial for Tc-99m MIBI imaging. Furthermore, most results suggesting lung uptake of Tc-99m MIBI as a potential marker of LV dysfunction used immediate post-stress images rather than routine images acquired 1 hour after tracer injection. The goal of our study was to investigate whether the LHR evaluated with routine gated Tc-99m MIBI imaging reflects the degree of perfusion defect or left ventricular performance. Subjects and Methods: 241 patients who underwent exercise Tc-99m MIBI myocardial SPECT were classified into normal myocardial perfusion (NP, n=135) and abnormal myocardial perfusion (AP, n=106) groups according to the presence of a perfusion defect. The LHR was calculated from the anterior projection image taken 1 hour after injection. Two regions of interest (ROIs) were placed on the left lung above the LV and on the myocardium showing the highest radioactivity. Subjects were classified by left ventricular ejection fraction (LVEF) as Gr-I: >50%, Gr-II: 36-50%, Gr-III: <36%, and by summed stress score (SSS) as Gr-A: <4, Gr-B: 4-8, Gr-C: 9-13, Gr-D: >13, and the LHR was compared among these groups. Results: In the NP group (n=135), LHRs were higher in men than in women (men: 0.311±0.03, women: 0.296±0.03, p<0.05). Significant differences in LHR were found between the NP and AP groups for both men and women (men: 0.311±0.03 vs. 0.331±0.06; women: 0.296±0.03 vs. 0.321±0.07; p<0.05). There was a weak negative correlation between LHR and LVEF (r=-0.342, p<0.05) and a weak positive correlation between LHR and SSS (r=0.478, p<0.05) in men, but not in women (LVEF: r=-0.279, p=0.100; SSS: r=0.276, p=0.103).
An increased LHR was defined as a value more than the mean + 2 SD (men ≥0.38, women ≥0.37) of the LHR of subjects with normal perfusion. Increased LHRs were observed more frequently in subjects with lower LVEF (Gr-I: 11.1%, Gr-II: 27.0%, Gr-III: 35.4%, p<0.05) and higher SSS (Gr-A: 14.0%, Gr-B: 5.7%, Gr-C: 18.2%, Gr-D: 40.7%, p<0.05). Conclusions: LHRs obtained from routine gated Tc-99m MIBI SPECT images were weakly correlated with LVEF and the degree of perfusion defect. Although significant overlap was observed between the normal and abnormal perfusion groups, the LHR could be used as an indirect marker of a severe perfusion defect or reduced left ventricular function.
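The LHR itself is just a ratio of two ROI mean counts compared against a sex-specific cutoff. A minimal sketch with invented ROI values (only the 0.38/0.37 cutoffs come from the abstract):

```python
# Illustrative ROI mean counts from a 1-hour anterior projection image;
# the two count values below are hypothetical.
lung_roi_mean = 310.0         # left lung ROI above the left ventricle
myocardium_roi_mean = 1000.0  # myocardial ROI with the highest radioactivity

lhr = lung_roi_mean / myocardium_roi_mean

# "Increased LHR" threshold: mean + 2 SD of the normal-perfusion group
# (the paper's sex-specific cutoffs: about 0.38 for men, 0.37 for women).
cutoff = {"men": 0.38, "women": 0.37}
increased = lhr >= cutoff["men"]
print(round(lhr, 3), increased)
```

An LHR of 0.31, as here, falls in the range the study reports for men with normal perfusion, so it is not flagged as increased.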

The Effect of AD Noises Caused by AD Model Selection on Brand Awareness and Brand Attitudes (광고 모델 관련 광고 노이즈가 브랜드 인지도와 브랜드 태도에 미치는 영향)

  • Chung, Jai-Hak;Lee, Sang-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.89-114
    • /
    • 2008
  • Most extant studies on communication effects have addressed the typical question, "What types of communication activities are more effective for brand awareness or brand attitudes?" However, little research has addressed another question in communication decisions: "What makes communication activities less effective?" Our study focuses on factors that negatively influence the efficiency of communication activities, especially advertising. Some studies have introduced concepts closely related to our topic, such as consumer confusion, brand confusion, and belief confusion. Studies on product belief confusion have found factors that mislead consumers into misunderstanding the physical features of products. Studies on brand confusion have uncovered factors that confuse consumers about brand names. Studies on advertising confusion have tested the effect on communication efficiency of ad models employed by many other firms for different products. We address a new concept, ad noise: any factor that interferes with consumers' understanding of the messages in an advertisement they are exposed to. The objective of this study is to understand the effects of ad noises caused by ad models on brand awareness and brand attitude. There are many different types of ad noise. In particular, we study the ad noises generated by the ad model selection decision. Many companies want to employ celebrities as ad models, while the number of celebrities who command a high degree of public and media attention is limited. Inevitably, several firms end up adopting the same celebrities as ad models for different products. If the same ad model appears in TV commercials for different products, consumers exposed to those commercials are likely to fail to become aware of the target brand because of interference from the other commercials employing the same model.
This is ad noise caused by employing ad models who have already been exposed to consumers in other advertisements, the first type of ad noise studied in this research. Another type of ad noise is related to the decision to replace the ad model for the same product. Firms sometimes launch a new TV commercial for the same product; some employ the same ad model in the new commercial, while others employ a new one. The typical problem with replacing ad models is the possibility of interfering with consumers' understanding of the commercial's message because of the dissimilarity between the old and new models. We studied the effects of these two types of ad noise, which are typical factors influencing the effect of communication: (1) ad noise caused by employing ad models who have been exposed to consumers in other advertisements, and (2) ad noise caused by changing to ad models with different images for the same product. First, we measured the negative influence of ad noise on brand awareness and attitudes, to establish the importance of studying ad noise. Furthermore, our study unveiled the mediating variables that can increase or decrease the effects of ad noise on brand awareness and attitudes. We studied three mediating variables for the first type of ad noise: (1) the fit between the product image and the ad model's image, (2) the similarity between the ad model's images across the multiple TV commercials employing the same model, and (3) the similarity between the products whose commercials employed the same model.
We analyzed three further mediating variables for ad noise caused by changing to ad models with different images for the same product: (1) the fit of the old and new ad models for the product, (2) the similarity between the ad models' images in the old and new commercials, and (3) the concept similarity between the old and new commercials. We summarize the empirical results from a field survey as follows. Employing ad models who have been used in advertisements for other products has negative effects on both brand awareness and attitudes. Our empirical study shows that these negative effects can be reduced by choosing ad models whose images are relevant to the image of the target product, by requiring ad models to project images different from those in their other advertisements, or by choosing ad models who have appeared in advertisements for products dissimilar to the target product. Changing the ad model for the same product can influence brand awareness positively but brand attitudes negatively. Furthermore, the effects of an ad model change can be weakened or strengthened depending on the relevancy of the new model, the similarity of the previous and current models, and the consistency of the previous and current ad messages.


Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4
    • /
    • pp.35-41
    • /
    • 2001
  • This paper describes linear discriminant analysis and common vector extraction for speech recognition. A voice signal contains the psychological and physiological properties of the speaker, as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different, which makes it difficult to extract common properties within the same speech class (word or phoneme). Linear algebra methods like the KLT (Karhunen-Loève Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. That method extracts the optimized common vector from the speech signals used for training, and it achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals that can be used for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and a novel method for normalizing the size of the common vector is also added. The results show improved performance of the algorithm, with recognition accuracy 2% better than the conventional method.
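The common vector idea can be sketched in a few lines: project any training sample onto the orthogonal complement of the class's difference subspace, and the same vector falls out regardless of which sample you start from. The feature dimension, sample count, and data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature vectors for one speech class: a shared "common" component plus
# speaker-dependent variation.
dim, n = 16, 5
common_true = rng.normal(size=dim)
samples = np.stack([common_true + 0.3 * rng.normal(size=dim) for _ in range(n)])

# Difference subspace: spanned by the differences from a reference sample.
diffs = samples[1:] - samples[0]                 # (n-1, dim)
Q, _ = np.linalg.qr(diffs.T)                     # orthonormal basis, dim x (n-1)

def common_vector(x):
    """Project x onto the orthogonal complement of the difference subspace."""
    return x - Q @ (Q.T @ x)

# The projection yields the same common vector from any training sample.
c0 = common_vector(samples[0])
c1 = common_vector(samples[3])
print(np.allclose(c0, c1))
```

Note the constraint the abstract alludes to: the difference subspace has at most n-1 dimensions, so the method needs the number of training signals to stay below the feature dimension, which is one of the drawbacks the paper's improved method addresses.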
