• Title/Abstract/Keyword: feature matrix

327 search results

Counterfeit Money Detection Algorithm using Non-Local Mean Value and Support Vector Machine Classifier (비지역적 특징값과 서포트 벡터 머신 분류기를 이용한 위변조 지폐 판별 알고리즘)

  • Ji, Sang-Keun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.1
    • /
    • pp.55-64
    • /
    • 2013
  • Due to the popularization of high-performance digital capture equipment and the emergence of powerful image-editing software, anyone can easily produce high-quality counterfeit money. However, the probability that a member of the general public detects a counterfeit bill is extremely low. In this paper, we propose a counterfeit money detection algorithm that uses a general-purpose scanner. The algorithm identifies counterfeits based on features that differ across printing processes. After the non-local mean value is used to analyze the noise in each bill, we extract statistical features from this noise by calculating a gray-level co-occurrence matrix. These features are then used to train and test a support vector machine classifier that labels a bill as genuine or counterfeit. In the experiment, we use a total of 324 images of genuine and counterfeit money, and we compare our noise features with those of previous studies based on the Wiener filter and the discrete wavelet transform. The accuracy of the algorithm in identifying counterfeit money was over 94%, and its accuracy in identifying the printing source was over 93%. The presented algorithm outperforms previous approaches.
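The gray-level co-occurrence matrix (GLCM) step above can be sketched in a few lines of pure Python. This is an illustration, not the authors' implementation: the tiny image, the offset, and the choice of energy and contrast as the two example features are all assumptions, and the non-local-means noise extraction that precedes this step is omitted.

```python
# Sketch: gray-level co-occurrence matrix (GLCM) and two of the
# second-order statistical features (energy, contrast) of the kind
# an SVM classifier could be trained on.

def glcm(image, dx=1, dy=0, levels=4):
    """Count co-occurrences of gray levels at offset (dx, dy), normalized."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    total = sum(sum(row) for row in m)
    return [[c / total for c in row] for row in m]

def energy(p):
    """Sum of squared matrix entries (uniformity)."""
    return sum(v * v for row in p for v in row)

def contrast(p):
    """Intensity contrast between a pixel and its neighbor."""
    return sum(((i - j) ** 2) * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img)
features = [energy(p), contrast(p)]   # feature vector for the classifier
```

A real pipeline would compute several such features per noise residual and feed the resulting vectors to an SVM.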

On-line Nonlinear Principal Component Analysis for Nonlinear Feature Extraction (비선형 특징 추출을 위한 온라인 비선형 주성분분석 기법)

  • 김병주;심주용;황창하;김일곤
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.3
    • /
    • pp.361-368
    • /
    • 2004
  • The purpose of this study is to propose a new on-line nonlinear PCA (OL-NPCA) method for extracting nonlinear features from incremental data. Kernel PCA (KPCA) is widely used for nonlinear feature extraction, but it has two well-known problems. First, applying KPCA to N patterns requires storing and finding the eigenvectors of an N×N kernel matrix, which is infeasible for large N. Second, updating the eigenvectors with an additional datum requires recomputing the whole eigenspace. OL-NPCA overcomes these problems with an incremental eigenspace update method combined with a feature mapping function. Experimental results from applying OL-NPCA to a toy problem and a large dataset show the following advantages: first, OL-NPCA requires far less memory than KPCA; second, its performance is comparable to that of KPCA. Furthermore, the performance of OL-NPCA can easily be improved by re-learning the data.
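The N×N kernel matrix that makes batch KPCA infeasible for large N can be made concrete with a short sketch. The Gaussian kernel, its bandwidth, and the toy data are assumptions for illustration; the incremental eigenspace update that OL-NPCA contributes is not reproduced here.

```python
import math

def rbf_kernel_matrix(data, gamma=1.0):
    """Build the full N x N Gram matrix. Memory grows as O(N^2),
    which is the scalability problem batch KPCA runs into."""
    n = len(data)
    k = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(data[i], data[j]))
            k[i][j] = math.exp(-gamma * d2)
    return k

def center_kernel(k):
    """Center the Gram matrix in feature space, the standard KPCA step
    before eigendecomposition: K' = K - 1K - K1 + 1K1."""
    n = len(k)
    row = [sum(r) / n for r in k]          # per-row means
    tot = sum(row) / n                     # grand mean
    return [[k[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

x = [[0.0], [1.0], [2.0], [3.0]]
K = center_kernel(rbf_kernel_matrix(x))    # eigenvectors of K give the KPCA basis
```

Eigendecomposing this centered matrix yields the kernel principal components; every new sample enlarges K, which is exactly what the incremental update avoids.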

Block Classification of Document Images by Block Attributes and Texture Features (블록의 속성과 질감특징을 이용한 문서영상의 블록분류)

  • Jang, Young-Nae;Kim, Joong-Soo;Lee, Cheol-Hee
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.7
    • /
    • pp.856-868
    • /
    • 2007
  • We propose an effective method for block classification in a document image. The gray-level document image is binarized for block segmentation, and the binary image is smoothed to find the location and size of each block; during this smoothing, the inner height of each block is also obtained. The gray-level image is then divided into blocks using this location information. Spatial gray-level dependence matrices (SGLDM) are computed from each gray-level document block, and seven second-order statistical texture features capturing the document attributes are extracted from the (0,1)-direction SGLDM. Blocks are first classified into two groups, text and non-text, by applying the nearest-neighbor rule to the inner block height. The seven SGLDM texture features are then used to assign the five detailed categories: small font, large font, table, graphic, and photo. These classified blocks are useful not only for the structure-analysis stage of document recognition but also for various other applications.
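The text/non-text split by nearest-neighbor rule on inner block height can be sketched directly. The training heights and labels below are invented for illustration; the paper's actual reference samples and the SGLDM-based fine classification are not reproduced.

```python
# Sketch: 1-nearest-neighbor rule on a single scalar feature
# (inner block height) to split document blocks into text vs. non-text.

def nearest_neighbor(height, samples):
    """samples: list of (inner_block_height, label). Return the label
    of the training sample whose height is closest to the query."""
    return min(samples, key=lambda s: abs(s[0] - height))[1]

training = [(12, "text"), (14, "text"), (16, "text"),
            (48, "non-text"), (90, "non-text")]

label_small = nearest_neighbor(15, training)   # near text-line heights
label_large = nearest_neighbor(60, training)   # near photo/graphic heights
```

Blocks labeled "text" would then be passed to the seven-feature SGLDM classifier for the detailed font/table/graphic/photo categories.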


Feature Extraction and Classification of High Dimensional Biomedical Spectral Data (고차원을 갖는 생체 스펙트럼 데이터의 특징추출 및 분류기법)

  • Cho, Jae-Hoon;Park, Jin-Il;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.3
    • /
    • pp.297-303
    • /
    • 2009
  • In this paper, we propose a biomedical spectral pattern classification technique based on a fusion scheme that combines SpPCA and an MLP in an extended feature space. Conventional PCA for dimension reduction cannot find an optimal transformation matrix when the input data are nonlinear. To overcome this drawback, we extract features with the SpPCA technique in an extended space, using local patterns rather than whole patterns. In the classification step, an individual MLP-based classifier computes the similarity of each class for the local features. Finally, the biomedical spectral patterns are classified by a fusion scheme that effectively combines the individual outputs. Simulation results verify that the proposed method yields better classification results than conventional methods.

Gait-based Human Identification System using Eigenfeature Regularization and Extraction (고유특징 정규화 및 추출 기법을 이용한 걸음걸이 바이오 정보 기반 사용자 인식 시스템)

  • Lee, Byung-Yun;Hong, Sung-Jun;Lee, Hee-Sung;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.1
    • /
    • pp.6-11
    • /
    • 2011
  • In this paper, we propose a gait-based human identification system using eigenfeature regularization and extraction (ERE). First, a gait feature for human identification called the gait energy image (GEI) is generated from walking sequences acquired by a camera sensor. In the training phase, a regularized transformation matrix is obtained by applying ERE to the gallery GEI dataset, and the gallery GEIs are projected onto the eigenspace to obtain gallery features. In the testing phase, the probe GEIs are projected onto the eigenspace created in the training phase, and the identity is determined with a nearest-neighbor classifier. Experiments are carried out on the CASIA gait dataset A to evaluate the performance of the proposed system, and the results show that it surpasses previous work in terms of correct classification rate.
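The GEI construction and the nearest-neighbor match can be sketched as follows. This is a toy illustration: the 2x2 "silhouettes" and the gallery entries are invented, and matching is done in raw pixel space rather than in the ERE-regularized eigenspace the paper actually uses.

```python
# Sketch: gait energy image (GEI) as the pixel-wise mean of aligned
# binary silhouette frames, then nearest-neighbor identification.

def gait_energy_image(silhouettes):
    """Average a sequence of same-sized binary silhouette frames."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(f[y][x] for f in silhouettes) / n for x in range(w)]
            for y in range(h)]

def l2(a, b):
    """Euclidean distance between two images, flattened."""
    return sum((p - q) ** 2 for ra, rb in zip(a, b)
               for p, q in zip(ra, rb)) ** 0.5

def identify(probe_gei, gallery):
    """gallery: list of (subject_id, gei). Return the nearest subject."""
    return min(gallery, key=lambda g: l2(g[1], probe_gei))[0]

seq = [[[1, 0], [1, 1]],
       [[1, 0], [0, 1]]]                       # two tiny silhouette frames
probe = gait_energy_image(seq)                 # pixel-wise mean
gallery = [("A", [[1.0, 0.0], [0.6, 1.0]]),
           ("B", [[0.0, 1.0], [1.0, 0.0]])]
who = identify(probe, gallery)
```

In the paper, both probe and gallery GEIs would first be projected through the ERE transformation before the distance is computed.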

Real-time 3D Feature Extraction Combined with 3D Reconstruction (3차원 물체 재구성 과정이 통합된 실시간 3차원 특징값 추출 방법)

  • Hong, Kwang-Jin;Lee, Chul-Han;Jung, Kee-Chul;Oh, Kyoung-Su
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.789-799
    • /
    • 2008
  • Gesture recognition has been studied vigorously for communication between human and computer in interactive computing environments. Algorithms that use 2D features for feature extraction and comparison are fast, but environmental limitations prevent accurate recognition. Algorithms that use 2.5D features provide higher accuracy than 2D features but are sensitive to object rotation, and algorithms that use 3D features are slow at recognition because they require 3D object reconstruction as a preprocessing step before feature extraction. In this paper, we propose a method that extracts 3D features in real time, combined with the 3D object reconstruction itself. The method generates three kinds of 3D projection maps using a modified GPU-based visual hull generation algorithm, executing only the data-generation steps needed for gesture recognition, and computes the Hu moment corresponding to each projection map. In the experimental results, we compare the computation time of the proposed method with that of previous methods, and the results show that the proposed method is suitable for real-time gesture recognition.
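The per-projection-map Hu-moment feature can be sketched with the first Hu invariant. The toy map is invented; a real map would come from the GPU-based visual hull step, and the full feature would typically use all seven invariants.

```python
# Sketch: first Hu moment invariant, phi_1 = eta_20 + eta_02,
# computed from a 2D projection map. phi_1 is invariant to
# translation, scale, and rotation of the shape.

def raw_moment(img, p, q):
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu1(img):
    m00 = raw_moment(img, 0, 0)
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    mu20 = sum(((x - cx) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    mu02 = sum(((y - cy) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    # normalized central moments: eta_pq = mu_pq / m00^2 for p+q = 2
    return (mu20 + mu02) / (m00 ** 2)

proj_map = [[0, 1, 0],
            [1, 1, 1],
            [0, 1, 0]]
phi1 = hu1(proj_map)

# same shape translated inside a larger map: phi_1 is unchanged
shifted = [[0, 0, 0, 0],
           [0, 0, 1, 0],
           [0, 1, 1, 1],
           [0, 0, 1, 0]]
```

One such value per projection map gives a compact, pose-tolerant descriptor for the subsequent gesture comparison.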

A Hybrid Collaborative Filtering Using a Low-dimensional Linear Model (저차원 선형 모델을 이용한 하이브리드 협력적 여과)

  • Ko, Su-Jeong
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.777-785
    • /
    • 2009
  • Collaborative filtering is a technique for predicting whether a particular user will like a particular item, and user-based and item-based collaborative techniques are used extensively in many commercial recommender systems. In this paper, a hybrid collaborative filtering method that combines the user-based and item-based methods through a low-dimensional linear model is proposed. The proposed method addresses the problems of sparsity and large databases by using NMF among the low-dimensional linear models. In collaborative filtering systems, NMF-based methods are useful for expressing users through semantic relations; however, they are model-based, their computation is complex, and they cannot recommend items dynamically. To remedy these shortcomings, the proposed method clusters users into groups using NMF and selects group features using TF-IDF; mutual information is then used to compute similarities between items. The method clusters users and extracts group features offline, and determines the most suitable group for an active user online using those group features. As a result, the proposed method reduces the time required to classify an active user into a group and outperforms previous methods by combining user-based and item-based collaborative filtering.
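The TF-IDF group-feature selection step can be sketched briefly. The "groups" below are bags of item tags invented for illustration; the NMF clustering that would produce these groups, and the mutual-information item similarities, are omitted.

```python
import math

# Sketch: TF-IDF weights over the terms (here, item tags) of each
# user group, as a compact group profile for the online matching step.

def tf_idf(groups):
    """groups: dict group_id -> list of terms.
    Return dict group_id -> {term: tf-idf weight}."""
    n = len(groups)
    df = {}                                   # document frequency per term
    for terms in groups.values():
        for t in set(terms):
            df[t] = df.get(t, 0) + 1
    weights = {}
    for gid, terms in groups.items():
        total = len(terms)
        weights[gid] = {t: (terms.count(t) / total) * math.log(n / df[t])
                        for t in set(terms)}
    return weights

groups = {"g1": ["jazz", "jazz", "rock"],
          "g2": ["rock", "pop", "pop"]}
w = tf_idf(groups)
```

Terms shared by every group (here "rock") get zero weight, so each group profile keeps only the tags that distinguish it, which is what makes the online group assignment fast.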

Robust Face Alignment using Progressive AAM (점진적 AAM을 이용한 강인한 얼굴 윤곽 검출)

  • Kim, Dae-Hwan;Kim, Jae-Min;Cho, Seong-Won;Jang, Yong-Suk;Kim, Boo-Gyoun;Chung, Sun-Tae
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.11-20
    • /
    • 2007
  • AAM has been applied successfully to face alignment, but its performance is very sensitive to initial values. In this paper, we propose a face alignment method using a progressive AAM. The proposed method consists of two stages: a modeling and relation-derivation stage and a fitting stage. The first stage builds two AAM models, an inner-face model and a whole-face model, and then derives the relation matrix between the inner-face and whole-face model parameter vectors. The fitting stage proceeds progressively in two phases: in the first phase, the method finds the feature parameters for the inner facial feature points of a new face; in the second phase, it localizes all facial feature points of that face using initial values estimated from the inner feature parameters and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment is more robust to pose and face background than conventional basic AAM-based face alignment.

Joint Inversion Analysis Using the Dispersion Characteristics of Love Wave and Rayleigh Wave (I) - Constitution of the Joint Inversion Analysis Technique - (러브파와 레일리파의 분산특성을 이용한 동시역산해석(I) - 동시역산해석기법의 구성 -)

  • Lee Il-Wha;Joh Sung-Ho
    • Journal of the Korean Geotechnical Society
    • /
    • v.21 no.4
    • /
    • pp.145-154
    • /
    • 2005
  • Love waves and Rayleigh waves are the major elastic waves in the surface-wave category, and their dispersion characteristics are used to determine the ground stiffness profile. Because the Love wave is not contaminated by the P-wave, it is superior to the Rayleigh wave and other body waves: the information it carries is more distinct and clearer. Based on theoretical research, a joint inversion analysis that uses the dispersion information of both Love and Rayleigh waves is proposed. The analysis consists of forward modeling using a transfer matrix, a sensitivity matrix for evaluating the ground system, and the damped least-squares solution (DLSS) as the inversion technique. The joint inversion builds the sensitivity matrix from the dispersion characteristics of the Love and Rayleigh waves simultaneously, and this matrix is used repeatedly in the inversion to find an approximate ground stiffness profile. The purpose of the joint inversion analysis is to improve the accuracy and convergence of the inversion results by exploiting the fact that the frequency contribution of each wave is different.
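One damped least-squares update of the standard kind can be sketched as follows. The 2x2 sensitivity matrix, the damping value, and the residual vector are invented for illustration; in the paper, the rows of the sensitivity matrix would come from the Love- and Rayleigh-wave dispersion curves and the forward transfer-matrix model.

```python
# Sketch: one damped least-squares (DLSS) model update,
# dm = (G^T G + lam*I)^{-1} G^T r,
# where G is the sensitivity matrix and r the dispersion residual.

def transpose(m):
    return [list(c) for c in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def inv2(m):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def dlss_step(G, r, lam=0.1):
    Gt = transpose(G)
    A = matmul(Gt, G)
    for i in range(len(A)):
        A[i][i] += lam                  # damping stabilizes the inverse
    rhs = matmul(Gt, [[v] for v in r])
    return [row[0] for row in matmul(inv2(A), rhs)]

G = [[1.0, 0.5],                        # e.g. Love-wave sensitivity row
     [0.2, 1.0]]                        # e.g. Rayleigh-wave sensitivity row
dm = dlss_step(G, [0.3, 0.1])           # damped model correction

# with lam = 0 and a square invertible G, the update solves G dm = r exactly
dm_exact = dlss_step(G, [0.3, 0.1], lam=0.0)
pred = matmul(G, [[v] for v in dm_exact])
```

The inversion iterates such updates, rebuilding G around the current stiffness profile, until the predicted and measured dispersion curves agree.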

On Optimizing Dissimilarity-Based Classifications Using a DTW and Fusion Strategies (DTW와 퓨전기법을 이용한 비유사도 기반 분류법의 최적화)

  • Kim, Sang-Woon;Kim, Seung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.2
    • /
    • pp.21-28
    • /
    • 2010
  • This paper reports experimental results on optimizing dissimilarity-based classification (DBC) by simultaneously using dynamic time warping (DTW) and a multiple fusion strategy (MFS). DBC defines classifiers among classes not from the feature measurements of individual samples but from a suitable dissimilarity measure among the samples. In DTW, the dissimilarity is measured in two steps: first, we align the object samples by finding the best warping path with a correlation-coefficient-based DTW technique; we then compute the dissimilarity distance between the aligned objects with conventional measures. In MFS, fusion strategies are used both in generating dissimilarity matrices and in designing classifiers: we first combine the dissimilarity matrices obtained with the DTW technique into a new matrix, and after training base classifiers on the new matrix, we combine the results of those base classifiers. Experimental results on well-known benchmark databases demonstrate that the proposed mechanism further improves classification accuracy compared with previous approaches, suggesting that the method could also be applied to other high-dimensional tasks such as multimedia information retrieval.
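The DTW alignment underlying the dissimilarity measure can be sketched with the classic recurrence. Note this is the standard absolute-difference DTW, not the correlation-coefficient-based variant the paper proposes; the example sequences are invented.

```python
# Sketch: classic dynamic time warping distance between two sequences,
# the kind of alignment cost used to fill a dissimilarity matrix.

def dtw(a, b):
    """Minimum cumulative cost over all monotone warping paths."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# identical shapes at different speeds align with zero cost
dist = dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

Evaluating such a distance for every pair of training samples yields the dissimilarity matrix on which the base classifiers are trained before fusion.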