• Title/Summary/Keyword: Linear Discriminant Analysis (선형 판별 분석)

194 search results

On Optimizing Dissimilarity-Based Classifier Using Multi-level Fusion Strategies (다단계 퓨전기법을 이용한 비유사도 기반 식별기의 최적화)

  • Kim, Sang-Woon; Duin, Robert P. W.
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.15-24 / 2008
  • In high-dimensional classification tasks such as face recognition, the number of samples is often smaller than the dimensionality of the samples. In such cases, linear discriminant analysis-based methods for dimension reduction run into what is known as the small sample size (SSS) problem. Recently, employing dissimilarity-based classification (DBC) has been investigated as a way to solve the SSS problem. In DBC, an object is represented by its dissimilarities to representatives extracted from the training samples rather than by the feature vector itself. In this paper, we propose a new method of optimizing DBCs using multi-level fusion strategies (MFS), in which fusion strategies are employed both to represent features and to design classifiers. Our experimental results for benchmark face databases demonstrate that the proposed scheme achieves further improved classification accuracies.
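
The dissimilarity representation that this abstract describes can be illustrated with a minimal sketch: objects are re-encoded as vectors of distances to a handful of prototypes, and an ordinary linear classifier is trained in that low-dimensional space. Euclidean dissimilarities, random prototype selection, and scikit-learn's LDA are assumptions of the sketch; the paper's multi-level fusion strategies are not reproduced.

```python
# Minimal sketch of a dissimilarity-based classifier (DBC).
# Assumptions: Euclidean dissimilarity, representatives chosen at random;
# the multi-level fusion strategies of the paper are not reproduced here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)

# Toy high-dimensional data: n samples < dimensionality (the SSS setting).
X_train = rng.normal(size=(40, 1000))
y_train = np.repeat([0, 1], 20)
X_test = rng.normal(size=(10, 1000))

# Pick a few training samples as representatives (prototypes).
proto_idx = rng.choice(len(X_train), size=15, replace=False)
prototypes = X_train[proto_idx]

# Represent every object by its dissimilarities to the prototypes.
D_train = pairwise_distances(X_train, prototypes)   # shape (40, 15)
D_test = pairwise_distances(X_test, prototypes)     # shape (10, 15)

# A linear classifier in the (low-dimensional) dissimilarity space
# no longer suffers from the small-sample-size problem.
clf = LinearDiscriminantAnalysis().fit(D_train, y_train)
print(clf.predict(D_test))
```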

Real-Time Face Recognition System using PDA (PDA를 이용한 실시간 얼굴인식 시스템 구현)

  • Kwon Man-Jun; Yang Dong-Hwa; Go Hyoun-Joo; Kim Jin-Whan; Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.649-654 / 2005
  • In this paper, we describe the implementation of a real-time face recognition system for ubiquitous computing environments. First, a face image is captured by a PDA with a CMOS camera; the image, together with the user number and name, is transmitted via WLAN (wireless LAN) to the server; finally, the PDA receives the verification result from the server. The proposed system consists of server and client parts. The server uses the PCA and LDA algorithms, which compute the eigenvector and eigenvalue matrices from the face images received from the PDA during the enrollment process, and it returns the recognition result, computed with the Euclidean distance, during the verification process. The captured image is first compressed by the wavelet transform and sent in JPG format for real-time processing. The implemented system improves speed and performance by comparing Euclidean distances against the eigenvector and eigenvalue matrices previously computed in the learning process.
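
The server-side PCA + LDA enrollment/verification pipeline sketched in the abstract can be illustrated as follows; scikit-learn's PCA and LDA and nearest-neighbour Euclidean matching are assumed, and the PDA capture, wavelet/JPG compression, and WLAN transfer steps are omitted.

```python
# Sketch of a PCA + LDA enrollment/verification pipeline (server side).
# Assumptions: scikit-learn PCA/LDA, nearest-neighbour Euclidean matching;
# image capture, wavelet compression and WLAN transfer are omitted.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def enroll(face_images, labels, n_components=50):
    """Learn the PCA+LDA projection and store projected templates."""
    X = face_images.reshape(len(face_images), -1)   # flatten images
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    templates = lda.transform(pca.transform(X))
    return pca, lda, templates, np.asarray(labels)

def verify(face_image, pca, lda, templates, labels):
    """Project a probe image and return the closest enrolled identity."""
    probe = lda.transform(pca.transform(face_image.reshape(1, -1)))
    dists = np.linalg.norm(templates - probe, axis=1)  # Euclidean distance
    return labels[np.argmin(dists)], dists.min()

# Toy usage with random "images" standing in for enrolled faces.
rng = np.random.default_rng(1)
faces = rng.random((60, 32, 32))
ids = np.repeat(np.arange(6), 10)
model = enroll(faces, ids, n_components=20)
print(verify(faces[0], *model))
```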

Parallel clustering technology for real-time LWIR band image processing (실시간 LWIR 밴드 영상 처리를 위한 병렬 클러스터링 기술)

  • Cho, Yongjin; Lee, Kyou-seung; Hong, Seongha; Oh, Jong-woo; Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.158-158 / 2017
  • A study is under way to recognize soybean cotyledons that emerge during early growth beneath plastic mulch film. A previous study found that cotyledons touching the film change the distribution of the thermal response on the upper surface of the mulch. To recognize the position of the cotyledons in real time while driving in the field and to control a linked linear or rotary actuator so that holes are punched at the exact position, real-time signal processing that minimizes the time lag between the measurement system and the control system is essential. The multiple IR sensor used in the previous study has a resolution of 16×4 pixels at 3 Hz, which proved insufficient for precise analysis of the mulch surface, whose width is about 30 cm. To resolve this, a miniature (1 cm × 1 cm × 1 cm) thermal imaging sensor with improved resolution and sampling rate was adopted: a Lepton™ module (500-0690-00, FLIR, Goleta, CA) with a thermal resolution of 0.05°C in the LWIR (long-wave infrared) band of 8-14 μm. Each frame delivers 80×60 pixels of 2-byte data, and the thermal distribution of the target surface can be measured at 9 Hz, for a theoretical data rate of 86,400 bytes per second (80×60×2×9). When applied to a punching machine traveling at about 1 m/s, roughly 10 cm of surface is measured per frame, so the maximum positional resolution is about 10 cm / 60 pixels = 0.17 cm/pixel, which permits relatively precise localization. To meet the technical requirement of analyzing the 80×60×2 bytes of each frame within 0.1 s, a spatial distribution analysis algorithm was developed that can run at the clock speed (1 GHz) of a commercial SBC (single-board computer) suitable for the punching machine. To minimize the time needed to analyze the whole image domain at once, the spatial information matrix was partitioned evenly, features were analyzed on separate processors, and the per-processor results were decided competitively. A signal analysis program developed with the open-source MPICH library (www.mpich.org) was installed and executed on individual cores linked as a cluster. The 2D thermal distribution matrix was distributed evenly in space and spatial-domain analysis was performed on each core. With a clustering unit of 20×20 pixels, a total of 12 cores were required, and 10 computations per second were achieved. Using this parallel clustering technique, a system was implemented that analyzes the thermal distribution on the mulch surface and can keep up with a travel speed of about 1 m/s.
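
The spatial-partitioning idea described above (splitting each 80×60 LWIR frame into 20×20 tiles, analysing them on separate cores, and deciding competitively) can be sketched with mpi4py as below; this is not the authors' C/MPICH program, and the per-tile "feature" used here is just the tile mean.

```python
# Sketch of the parallel tile-analysis idea (not the authors' C/MPICH program).
# Assumptions: mpi4py, launched e.g. with `mpiexec -n 12 python lwir_tiles.py`;
# the per-tile feature is simply the tile mean, as a placeholder.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TILE = 20
ROWS, COLS = 60, 80                 # one Lepton frame: 80x60 pixels, 2 bytes each

# Rank 0 acquires (here: simulates) a frame and broadcasts it to all ranks.
frame = np.random.randint(0, 2**14, (ROWS, COLS), dtype=np.uint16) if rank == 0 else None
frame = comm.bcast(frame, root=0)

# Enumerate the twelve 20x20 tiles and give every rank its share.
tiles = [(r, c) for r in range(0, ROWS, TILE) for c in range(0, COLS, TILE)]
my_tiles = tiles[rank::size]

# Local analysis: one score per assigned tile.
local = [(float(frame[r:r + TILE, c:c + TILE].mean()), (r, c)) for r, c in my_tiles]

# Competitive decision: rank 0 gathers all tile scores and keeps the maximum.
gathered = comm.gather(local, root=0)
if rank == 0:
    best_score, best_tile = max(item for part in gathered for item in part)
    print("hottest 20x20 tile at (row, col)", best_tile, "score", best_score)
```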


Scientific analysis of the glass from Hwangnam-daech'ong Tomb No. 98 (황남대총(皇南大塚) 98호분 출토 유리(琉璃)의 과학적(科學的) 분석(分析))

  • Jo, Kyung-mi; Yu, Hei-sun; Kang, Hyung-tae
    • Conservation Science in Museum / v.1 / pp.61-74 / 1999
  • Elemental analysis of 40 glass samples from the Northern Tomb and the Southern Tomb of Hwangnam-daech'ong No. 98 was performed. Fourteen compositional components of each sample were analyzed quantitatively by SEM-EDS, and the glass samples were classified by multivariate analysis such as PCA. All 40 samples were confirmed to belong to the Na2O-CaO-SiO2 system, with about 20% Na2O. The samples fell into two groups when PCA was applied to the concentrations of five major components (SiO2, Al2O3, Na2O, CaO, and K2O). Samples in group I showed an Al2O3 concentration of about 9.7% and a CaO concentration of about 2.2%; in group II, the Al2O3 concentration is about 3.2% and that of CaO about 4.9%. In particular, yellow grains embedded in sample No. 12 were identified by micro-XRD as PbSnO3, the first time this coloring material has been found in Korea. Lead isotope ratios of samples No. 12 and No. 17, which contained lead, were measured by TIMS, and the origin of the lead was traced by multivariate analysis such as SLDA. The result showed that lead from southern China and southern Korea had been used for making the glass.
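
The chemometric workflow in this abstract, PCA on major-oxide concentrations to separate compositional groups followed by a discriminant analysis of lead-isotope ratios for provenance, can be sketched as follows; plain LDA stands in for SLDA, and all numeric values are illustrative, not the paper's measurements.

```python
# Sketch of the chemometric workflow described above: PCA on major-oxide
# concentrations to separate compositional groups, then LDA (a simple stand-in
# for SLDA) to assign lead-isotope ratios to reference provenance groups.
# All numeric values below are illustrative, not the paper's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: SiO2, Al2O3, Na2O, CaO, K2O (wt%), one row per glass sample.
oxides = np.array([
    [65.0, 9.5, 20.1, 2.3, 2.1],   # group I-like composition
    [64.2, 9.9, 19.8, 2.1, 2.4],
    [70.5, 3.1, 20.3, 5.0, 1.0],   # group II-like composition
    [71.0, 3.3, 19.9, 4.8, 0.9],
])
scores = PCA(n_components=2).fit_transform(oxides)
print("PC scores:\n", scores)       # the two groups separate along PC1

# Reference lead-isotope ratios (206Pb/204Pb, 207Pb/204Pb) per region, illustrative.
ref_ratios = np.array([[18.1, 15.6], [18.3, 15.7], [17.6, 15.5], [17.7, 15.5]])
ref_region = np.array(["southern China", "southern China",
                       "southern Korea", "southern Korea"])
lda = LinearDiscriminantAnalysis().fit(ref_ratios, ref_region)
print(lda.predict([[17.65, 15.52]]))  # provenance call for an unknown sample
```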

Motor Imagery Brain Signal Analysis for EEG-based Mouse Control (뇌전도 기반 마우스 제어를 위한 동작 상상 뇌 신호 분석)

  • Lee, Kyeong-Yeon; Lee, Tae-Hoon; Lee, Sang-Yoon
    • Korean Journal of Cognitive Science / v.21 no.2 / pp.309-338 / 2010
  • In this paper, we studied the brain-computer interface (BCI). BCIs help severely disabled people control external devices by analyzing brain signals evoked by motor imagery. Findings in neurophysiology have revealed that the power of the β (14-26 Hz) and μ (8-12 Hz) rhythms decreases or increases with the synchrony of the underlying neuronal populations in the sensorimotor cortex when people imagine the movement of their body parts; these phenomena are called event-related desynchronization / synchronization (ERD/ERS), respectively. We implemented a BCI-based mouse interface system that enabled subjects to move a computer mouse cursor in four directions (up, down, left, and right) by analyzing brain signal patterns online. Tongue, foot, left-hand, and right-hand motor imageries were used to stimulate the brain. We used non-invasive EEG, which records the brain's spontaneous electrical activity over a short period of time through electrodes placed on the scalp. Because of the nature of EEG signals, i.e., low amplitude and vulnerability to artifacts and noise, it is hard to analyze and classify brain signals measured by EEG directly. To overcome these obstacles, we applied statistical machine-learning techniques. We achieved high performance in the classification of the four motor imageries by employing Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA), which transform the input EEG signals into a new coordinate system in which the variance differences among the motor imagery classes are maximized for easy classification. Inspection of the resulting topographies also confirmed that ERD/ERS appeared in different brain areas for different motor imageries, in agreement with anatomical and neurophysiological knowledge.
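
For the two-class case, the CSP + LDA pipeline referred to above can be sketched as follows; the signals are simulated and assumed to be already band-pass filtered and epoched, and a real BCI would use a multi-class CSP extension for the four imageries.

```python
# Minimal two-class CSP + LDA sketch (simulated signals, not real EEG).
# Assumptions: trials are already band-pass filtered (mu/beta band) and epoched.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def class_cov(trials):
    """Average normalised spatial covariance over trials (channels x samples)."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Solve the generalised eigenproblem C_a w = lambda (C_a + C_b) w."""
    Ca, Cb = class_cov(trials_a), class_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]  # most discriminative filters
    return vecs[:, keep].T

def features(trials, W):
    """Log-variance of the CSP-projected signals."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

# Simulated data: 40 trials per class, 8 channels, 250 samples per trial.
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(40, 8, 250))
trials_b = rng.normal(size=(40, 8, 250))
trials_b[:, 0] *= 2.0  # class B has more power on channel 0

W = csp_filters(trials_a, trials_b)
X = np.vstack([features(trials_a, W), features(trials_b, W)])
y = np.repeat([0, 1], 40)
print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))
```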


Wavelet based Fuzzy Integral System for 3D Face Recognition (퍼지적분을 이용한 웨이블릿 기반의 3차원 얼굴 인식)

  • Lee, Yeung-Hak; Shim, Jae-Chang
    • Journal of KIISE: Software and Applications / v.35 no.10 / pp.616-626 / 2008
  • The face shape extracted from depth values carries the most important facial feature information, and the face images decomposed into frequency subbands express personal features in detail. In this paper, we develop a method for recognizing range face images by combining multiple frequency domains for each depth image and fusing the depth regions with the fuzzy integral. In the first step, the nose tip, which protrudes from the face, is located in the extracted face area; it is used as the reference point to normalize the facial pose and to extract multiple regions by depth threshold values. In the second step, wavelet coefficients extracted from selected wavelet subbands are adopted as features for the authentication problem. The third step applies the eigenface and Linear Discriminant Analysis (LDA) methods to reduce the dimensionality and classify. In the last step, the individual classifiers built on the coefficients at each resolution level are aggregated using the fuzzy integral. In the experiments, the region obtained with depth threshold value 60 (DT60) shows the highest recognition rate among the regions, and the depth fusion method with the fuzzy integral achieves a 98.6% recognition rate.
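
The fusion idea in this abstract, separate subband classifiers aggregated with a fuzzy integral, can be sketched as follows; PyWavelets, scikit-learn PCA/LDA, and a simple capped-sum fuzzy measure (rather than the λ-measure commonly used with the Sugeno integral) are assumptions of the sketch, and the data and densities are toy values.

```python
# Sketch of subband-classifier fusion with a Sugeno fuzzy integral.
# Assumptions: PyWavelets for the decomposition, PCA+LDA per subband, a simple
# capped-sum fuzzy measure, and toy data/densities (not the paper's values).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subbands(depth_img):
    """One-level 2-D wavelet decomposition -> the four flattened subbands."""
    LL, (LH, HL, HH) = pywt.dwt2(depth_img, "db1")
    return [b.ravel() for b in (LL, LH, HL, HH)]

def sugeno(scores, densities):
    """Sugeno integral with the fuzzy measure g(A) = min(1, sum of densities in A)."""
    order = np.argsort(scores)[::-1]          # sources sorted by confidence
    g, fused = 0.0, 0.0
    for i in order:
        g = min(1.0, g + densities[i])        # measure of the current top set
        fused = max(fused, min(scores[i], g))
    return fused

# Toy gallery: 5 subjects x 8 depth images of 32x32 pixels (random stand-ins).
rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))
labels = np.repeat(np.arange(5), 8)

# One eigenface (PCA) + LDA classifier per wavelet subband.
models = []
for k in range(4):
    F = np.array([subbands(img)[k] for img in images])
    pca = PCA(n_components=10).fit(F)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(F), labels)
    models.append((pca, lda))

# Fuse the four subband classifiers for one probe image with the Sugeno integral.
probe = images[0]
densities = [0.4, 0.2, 0.2, 0.2]              # illustrative subband reliabilities
per_class = []
for c in range(5):
    scores = np.array([lda.predict_proba(pca.transform(sb.reshape(1, -1)))[0, c]
                       for (pca, lda), sb in zip(models, subbands(probe))])
    per_class.append(sugeno(scores, densities))
print("predicted subject:", int(np.argmax(per_class)))
```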

Seismic Fragility Evaluation of Cabinet Panel by Nonlinear Time History Analysis (비선형시간이력해석을 이용한 수배전반의 지진취약도 도출)

  • Moon, Jong-Yoon; Kwon, Min-ho; Kim, Jin-Sup; Lim, Jeong-Hee
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.50-55 / 2018
  • Earthquakes are almost impossible to predict and occur over a short time, leaving little opportunity to take protective action, so they cause more casualties and property damage than other natural disasters. Recently, earthquakes have been occurring all over the world, and as their number increases, studies on the safety of structures are being carried out. On the other hand, there are few studies on electrical facilities, which are non-structural components. Currently, electrical equipment in Korea is often not designed for earthquake safety and is quite vulnerable to damage when an earthquake occurs. Therefore, in this study, an actual cabinet panel was modeled in ABAQUS and 3D dynamic nonlinear analysis was performed using natural seismic records. According to seismic zone I and ordinary rock site conditions in the practical guide for seismic design of power transmission facilities, the maximum response acceleration at the performance level was 0.157 g. In this study, however, the general cabinet panel was not safe, with 30% of the analysis results at 0.1 g reaching the limit state. From the results, the seismic fragility curve was derived and analyzed. The derived curve is presented as a quantitative basis for determining the limit state of the cabinet panel and can be used as basic data in related research.
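
The fragility-curve step can be illustrated generically: given pass/fail outcomes from nonlinear time-history analyses at several PGA levels, a lognormal fragility curve is fitted by maximum likelihood. The outcome counts below are made up for illustration, not the paper's ABAQUS results.

```python
# Generic sketch of fitting a lognormal seismic fragility curve to pass/fail
# outcomes from nonlinear time-history analyses (made-up counts, not the
# paper's ABAQUS results).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# PGA levels (g), analyses per level, and number of runs reaching the limit state.
pga = np.array([0.05, 0.10, 0.157, 0.20, 0.30])
n_runs = np.array([20, 20, 20, 20, 20])
n_fail = np.array([1, 6, 12, 16, 19])

def neg_log_likelihood(params):
    """Binomial likelihood of the observed failures under a lognormal fragility."""
    ln_median, ln_beta = params                   # unconstrained parametrisation
    p = norm.cdf((np.log(pga) - ln_median) / np.exp(ln_beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_fail * np.log(p) + (n_runs - n_fail) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[np.log(0.15), np.log(0.4)], method="Nelder-Mead")
median, beta = np.exp(res.x)
print(f"median capacity = {median:.3f} g, lognormal std beta = {beta:.3f}")
print("P(limit state | PGA = 0.157 g) =", norm.cdf(np.log(0.157 / median) / beta))
```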

Development of a Spectrum Analysis Software for Multipurpose Gamma-ray Detectors (감마선 검출기를 위한 스펙트럼 분석 소프트웨어 개발)

  • Lee, Jong-Myung; Kim, Young-Kwon; Park, Kil-Soon; Kim, Jung-Min; Lee, Ki-Sung; Joung, Jin-Hun
    • Journal of radiological science and technology / v.33 no.1 / pp.51-59 / 2010
  • We developed analysis software that automatically identifies incoming isotopes for multi-purpose gamma-ray detectors. The software is divided into three major parts: the Network Interface Module (NIM), the Spectrum Analysis Module (SAM), and the Graphic User Interface Module (GUIM). The main part is SAM, which extracts peak information from the energy spectrum of the data collected over the network and identifies isotopes by comparing the peaks with pre-calibrated libraries. The proposed peak detection algorithm was used to construct two-peak libraries of standard isotopes and to identify unknown isotopes against the constructed libraries. We tested the software with the GammaPro1410 detector developed by NuCare Medical Systems. The results showed that the NIM processed 200K counts per second and that most of the isotopes tested were correctly recognized within a 1% error range when a single unknown isotope was used in the detection test. The software is expected to be used for radiation monitoring in various applications such as hospitals, power plants, and research facilities.
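
The peak-detection and library-matching idea behind SAM can be sketched as follows; this is not the NuCare software itself, the two-peak library holds standard gamma lines for two common isotopes, and the spectrum is simulated.

```python
# Sketch of the peak-detection / library-matching idea in SAM (not the actual
# NuCare software): find photopeaks in an energy-calibrated spectrum and match
# them to a small two-peak library within a tolerance. Library values are
# standard gamma lines; the spectrum here is simulated.
import numpy as np
from scipy.signal import find_peaks

# Two-peak reference library (keV): isotope -> its two main gamma energies.
LIBRARY = {"Co-60": (1173.2, 1332.5), "Na-22": (511.0, 1274.5)}

def identify(energies, counts, tol_keV=15.0):
    """Return library isotopes whose two peaks are both present in the spectrum."""
    idx, _ = find_peaks(counts, prominence=counts.max() * 0.05)
    found = energies[idx]
    hits = []
    for iso, (e1, e2) in LIBRARY.items():
        if (np.any(np.abs(found - e1) < tol_keV)
                and np.any(np.abs(found - e2) < tol_keV)):
            hits.append(iso)
    return hits

# Simulated Co-60 spectrum: smooth continuum plus two Gaussian photopeaks.
energies = np.linspace(0, 2000, 2048)
counts = 200 * np.exp(-energies / 800)
for peak in (1173.2, 1332.5):
    counts += 500 * np.exp(-0.5 * ((energies - peak) / 8) ** 2)
counts += np.random.default_rng(0).poisson(5, size=energies.size)

print(identify(energies, counts))   # expected: ['Co-60']
```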

Analysis of Microbial Community Change in Ganjang According to the Size of Meju (메주의 크기에 따른 간장의 미생물 군집 변화 양상 분석)

  • Ho Jin Jeong; Gwangsu Ha; Ranhee Lee; Do-Youn Jeong; Hee-Jong Yang
    • Journal of Life Science / v.34 no.7 / pp.453-464 / 2024
  • The fermentation of ganjang is known to be greatly influenced by the microbial communities derived from its primary ingredients, meju and sea salt. This study investigated the effects of changes in meju size on the distribution and correlation of microbial communities in ganjang fermentation, to enhance its fermentation process. Ganjang was prepared using whole meju and meju divided into thirds, and samples were collected at 7-day intervals over a period of 28 days for microbial community analysis based on 16S rRNA gene sequencing. At the genus level, during fermentation, ganjang made with whole meju exhibited a dominance of Chromohalobacter (day 7), Pediococcus (day 14), Bacillus (day 21), and Pediococcus (day 28), whereas ganjang made with meju divided into thirds consistently showed a Pediococcus predominance over the 28 days. Beta-diversity analysis of microbial communities in ganjang with different meju sizes revealed significant separation of microbial communities at fermentation days 7 and 14 but not at days 21 and 28 across all experimental groups. The linear discriminant analysis effect size (LEfSe) was determined to identify biomarkers contributing to microbial community differences at days 7 and 14, showing that on day 7, potentially halophilic microbes such as Gammaproteobacteria, Firmicutes, Oceanospirillales, Halomonadaceae, Bacilli, and Chromohalobacter were prominent, whereas on day 14, lactic acid bacteria such as Pediococcus acidilactici, Lactobacillaceae, Pediococcus, Bacilli, Leuconostocaceae, and Weissella were predominant. Furthermore, correlation analysis of microbial communities at the genus and species levels revealed differences in correlation patterns between meju sizes, suggesting that meju size may influence microbial interactions within ganjang.
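
A much simplified stand-in for the LEfSe step used above can be sketched as follows: taxa are screened with a Kruskal-Wallis test and the survivors are scored with an LDA-based effect size. This is not the actual LEfSe implementation, and the abundance table is simulated, not the study's sequencing data.

```python
# Simplified stand-in for LEfSe: Kruskal-Wallis screening followed by an
# LDA-based effect size for the surviving taxa. Not the actual LEfSe tool;
# the relative abundances below are simulated, not the study's data.
import numpy as np
from scipy.stats import kruskal
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
taxa = ["Pediococcus", "Chromohalobacter", "Bacillus", "Weissella"]

# Simulated relative-abundance table: 10 samples per group, 4 taxa per sample.
whole = rng.dirichlet([8, 4, 2, 1], size=10)    # "whole meju" group
thirds = rng.dirichlet([12, 1, 2, 2], size=10)  # "divided into thirds" group
X = np.vstack([whole, thirds])
y = np.repeat(["whole", "thirds"], 10)

for j, taxon in enumerate(taxa):
    stat, p = kruskal(whole[:, j], thirds[:, j])
    if p >= 0.05:
        continue                                # keep only significant taxa
    # Effect size: magnitude of the LDA coefficient for this taxon, reported on
    # a log10 scale in the spirit of LEfSe's LDA score.
    lda = LinearDiscriminantAnalysis().fit(X[:, [j]], y)
    effect = np.log10(abs(lda.coef_[0, 0]) + 1)
    print(f"{taxon}: p = {p:.3g}, LDA-style score = {effect:.2f}")
```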

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond the stakeholders of the bankrupt companies themselves, including managers, employees, creditors, and investors. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and during the restructuring immediately after the global financial crisis, the government concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses at a single moment. The key variables used in corporate default prediction vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined the importance of the predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that remains consistent as time changes, we first train a deep learning time series model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model whose results show a pattern similar to the training data and excellent predictive power. Each bankruptcy prediction model is then re-trained on the combined training and validation data (2000-2008) with the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series model based on the three bundles of selected variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data have limitations: nonlinear variables, multi-collinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and delivers better predictive power. In the course of the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists starting research that combines financial data with deep learning time series algorithms.
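
The two-stage idea of the study, variable selection followed by a deep learning time series model, can be sketched as follows; an L1-penalised logistic regression stands in for the Lasso screen, PyTorch provides the LSTM, and the data are random placeholders rather than the study's corporate data.

```python
# Sketch of Lasso-style variable selection followed by an LSTM default model.
# Assumptions: PyTorch, scikit-learn; the data are random placeholders in which
# "default" depends on the first two ratios, not the study's corporate data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_firms, n_years, n_ratios = 500, 7, 20
X = rng.normal(size=(n_firms, n_years, n_ratios)).astype("float32")
y = (X[:, -1, 0] + X[:, -1, 1] + rng.normal(size=n_firms) > 1.0).astype(int)

# Stage 1: L1-penalised logistic regression screens the financial ratios.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X[:, -1, :], y)
keep = np.flatnonzero(lasso.coef_[0])
if keep.size == 0:                         # fall back if the screen drops everything
    keep = np.arange(n_ratios)
print("selected ratio indices:", keep)

# Stage 2: an LSTM reads each firm's yearly sequence of the selected ratios.
class DefaultLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (firms, years, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)  # one default logit per firm

model = DefaultLSTM(len(keep))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
xb = torch.from_numpy(X[:, :, keep])
yb = torch.from_numpy(y.astype("float32"))
for epoch in range(10):                    # brief full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()
print("final training loss:", float(loss))
```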