• Title/Summary/Keyword: Feature Recognition


A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter (Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구)

  • Lee, Jun-Hwan;Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2238-2245 / 2000
  • In this paper, feature parameters extracted using the Wavelet transform of Korean isolated word speech are used as features for speech detection and recognition. The speech detection results show that the method locates speech boundaries more accurately than the method using energy and zero-crossing rate. Also, when MFCC feature parameters based on the Wavelet transform are applied to recognition, the result is shown to be equal to that of MFCC feature parameters computed with the FFT. Thus, the usefulness of feature parameters based on the Wavelet transform for speech analysis and recognition has been verified.

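As a rough illustration of the approach in the entry above, the sketch below computes wavelet sub-band energies for a speech frame and, for comparison, the classic short-time energy and zero-crossing-rate endpoint features. It is a minimal, assumed example using the PyWavelets and NumPy packages (wavelet choice, frame length, and decomposition level are arbitrary), not the authors' code.

```python
# Hypothetical sketch: wavelet sub-band energies vs. energy/ZCR endpoint features.
# Assumes PyWavelets (pywt) and NumPy; frame size and parameters are illustrative only.
import numpy as np
import pywt

def wavelet_band_energies(frame, wavelet="db4", level=4):
    """Decompose one speech frame and return the log energy of each sub-band."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])

def energy_zcr(frame):
    """Classic endpoint-detection features: short-time energy and zero-crossing rate."""
    energy = np.sum(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(256)          # stand-in for a 256-sample speech frame
    print("wavelet band energies:", wavelet_band_energies(frame))
    print("energy/ZCR:", energy_zcr(frame))
```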

A study on automatic wear debris recognition by using particle feature extraction (입자 유형별 형상추출에 의한 마모입자 자동인식에 관한 연구)

  • ;;;Grigoriev, A.Y.
    • Proceedings of the Korean Society of Tribologists and Lubrication Engineers Conference / 1998.04a / pp.314-320 / 1998
  • Wear debris morphology is closely related to the wear mode and mechanism that occurred. Image recognition of wear debris is, therefore, a powerful tool in wear monitoring. However, it has usually required an expert's experience, and the results could be too subjective. Development of automatic tools for wear debris recognition is needed to solve this problem. In this work, an algorithm for automatic wear debris recognition was suggested and implemented as PC-based software. The presented method defined a characteristic 3-dimensional feature space in which typical types of wear debris were separately located by the knowledge-based system, and the similarity of the wear debris of interest was compared against them. The 3-dimensional feature space was obtained from multiple feature vectors by using a multi-dimensional scaling technique. The results showed that the presented automatic wear debris recognition was satisfactory in many cases of application.

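The 3-D feature space described in the entry above can be illustrated roughly with multi-dimensional scaling: reference particles of known type and an unknown particle are embedded together, and the nearest reference type is reported. The sketch below is only an assumed, minimal example using scikit-learn's MDS; the feature values and debris class names are made up for illustration.

```python
# Hypothetical sketch: embed multi-feature debris descriptions into a 3-D space
# with MDS and classify a new particle by its nearest reference type.
# Feature values and class labels are illustrative, not from the paper.
import numpy as np
from sklearn.manifold import MDS

# Rows: reference debris particles described by several shape features
# (e.g. elongation, roundness, edge sharpness, area ratio).
reference_features = np.array([
    [0.9, 0.2, 0.8, 0.1],   # cutting-type debris (assumed)
    [0.2, 0.9, 0.1, 0.7],   # sphere-type debris (assumed)
    [0.5, 0.4, 0.6, 0.5],   # fatigue-type debris (assumed)
])
reference_labels = ["cutting", "sphere", "fatigue"]

query = np.array([[0.85, 0.25, 0.75, 0.15]])  # unknown particle

# Embed references and the query together so they share one 3-D space.
embedding = MDS(n_components=3, random_state=0).fit_transform(
    np.vstack([reference_features, query])
)
refs_3d, query_3d = embedding[:-1], embedding[-1]

# Nearest reference in the 3-D feature space gives the predicted debris type.
nearest = int(np.argmin(np.linalg.norm(refs_3d - query_3d, axis=1)))
print("predicted debris type:", reference_labels[nearest])
```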

LSG (Local Surface Group): A Generalized Local Feature Structure for Model-Based 3D Object Recognition (LSG: 모델 기반 3차원 물체 인식을 위한 정형화된 국부적인 특징 구조)

  • Lee, Jun-Ho
    • The KIPS Transactions:PartB / v.8B no.5 / pp.573-578 / 2001
  • This research proposes a generalized local feature structure named LSG (Local Surface Group) for model-based 3D object recognition. An LSG consists of a surface and its immediately adjacent surfaces that are simultaneously visible from a given viewpoint. That is, an LSG is not a simple feature but a viewpoint-dependent feature structure that contains several attributes such as surface type, color, area, radius, and simultaneously adjacent surfaces. In addition, we have developed a new method based on Bayesian theory that computes a measure of how distinct an LSG is compared to other LSGs for the purpose of object recognition. We have tested the proposed methods on an object database composed of twenty 3D objects. The experimental results show that the LSG and the Bayesian computing method can be successfully employed to achieve rapid 3D object recognition.

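The Bayesian distinctiveness measure mentioned in the entry above is not spelled out in the abstract. The sketch below shows one generic way such a measure could be computed, treating each LSG as a discrete attribute tuple and scoring how strongly it points to a single model; this is purely an assumed formulation, not the authors' method, and the attribute tuples are invented.

```python
# Hypothetical sketch: score how distinctive a local feature structure (LSG) is
# by the posterior probability it assigns to its most likely model,
# estimated from attribute-tuple counts over a small model database.
from collections import Counter

# (surface_type, color, adjacent_surface_type) tuples observed per model (assumed data).
model_lsgs = {
    "model_A": [("plane", "red", "cylinder"), ("plane", "red", "plane")],
    "model_B": [("plane", "red", "cylinder"), ("sphere", "blue", "plane")],
    "model_C": [("cone", "green", "plane")],
}

def distinctiveness(lsg):
    """P(most likely model | lsg) under a uniform model prior; 1.0 = unique to one model."""
    counts = Counter()
    for model, lsgs in model_lsgs.items():
        counts[model] += lsgs.count(lsg)
    total = sum(counts.values())
    if total == 0:
        return 0.0                       # never observed: uninformative
    return max(counts.values()) / total  # posterior mass of the best model

print(distinctiveness(("cone", "green", "plane")))    # 1.0 -> highly distinctive
print(distinctiveness(("plane", "red", "cylinder")))  # 0.5 -> shared by two models
```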

A Study on Automatic Wear Debris Recognition by Using Particle Feature Extraction (입자 유형별 형상추출에 의한 마모입자 자동인식에 관한 연구)

  • ;;;A. Y. Grigoriev
    • Tribology and Lubricants / v.15 no.2 / pp.206-211 / 1999
  • Wear debris morphology is closely related to the wear mode and mechanism that occurred. Image recognition of wear debris is, therefore, a powerful tool in wear monitoring. However, it has usually required an expert's experience, and the results could be too subjective. Development of automatic tools for wear debris recognition is needed to solve this problem. In this work, an algorithm for automatic wear debris recognition was suggested and implemented as PC-based software. The presented method defined a characteristic 3-dimensional feature space in which typical types of wear debris were separately located by the knowledge-based system, and the similarity of the wear debris of interest was compared against them. The 3-dimensional feature space was obtained from multiple feature vectors by using a multi-dimensional scaling technique. The results showed that the presented automatic wear debris recognition was satisfactory in many cases of application.

Pattern Recognition Using Translation-Invariant Wavelet Transform (위치이동에 무관한 웨이블릿 변환을 이용한 패턴인식)

  • Kim, Kuk-Jin;Cho, Seong-Won;Kim, Jae-Min;Lim, Cheol-Su
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.281-286 / 2003
  • The wavelet transform can effectively represent the local characteristics of a signal in the space-frequency domain. However, the feature vector extracted using the wavelet transform is not translation invariant. This paper describes a new translation-invariant feature extraction method based on the wavelet transform. An iris recognition method built on this feature extraction is robust to noise. Experimentally, we show that the proposed method produces superior performance in iris recognition.
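
One common way to obtain shift-tolerant wavelet features, sketched below, is to use the undecimated (stationary) wavelet transform and summarize each sub-band with statistics that do not depend on where the pattern sits. This is only an assumed illustration with PyWavelets and a synthetic 1-D signal, not the exact method of the paper above; the wavelet, level, and signal are arbitrary.

```python
# Hypothetical sketch: shift-tolerant wavelet features via the stationary
# (undecimated) wavelet transform plus per-band summary statistics.
# Uses PyWavelets; the signal and parameters are illustrative.
import numpy as np
import pywt

def swt_features(signal, wavelet="db2", level=3):
    """Per-band mean absolute value and energy from an undecimated wavelet transform."""
    bands = pywt.swt(signal, wavelet, level=level)   # list of (approx, detail) pairs
    feats = []
    for approx, detail in bands:
        feats.extend([np.mean(np.abs(detail)), np.sum(detail ** 2)])
    return np.array(feats)

if __name__ == "__main__":
    t = np.linspace(0, 1, 256, endpoint=False)
    pattern = np.sin(2 * np.pi * 8 * t) * np.exp(-10 * (t - 0.3) ** 2)
    shifted = np.roll(pattern, 40)                   # same pattern, translated
    # The feature vectors barely change despite the shift (difference near zero).
    print(np.linalg.norm(swt_features(pattern) - swt_features(shifted)))
```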

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding vanishing or exploding gradients as the network deepens and reducing the difficulty of model optimisation. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with a densely connected feature-multiplexing network, the proposed algorithm optimises feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
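
The kind of architecture described in the entry above can be sketched as small-kernel convolutions, a shallow residual block with an identity shortcut, and a fully connected softmax classifier. The PyTorch code below is only an assumed, minimal illustration of that idea; the layer widths, 64x64 RGB input, and 10-class output are made up, not the paper's network.

```python
# Hypothetical sketch (PyTorch): small-kernel convolutions + a shallow residual
# block + a fully connected softmax classifier for gesture images.
# Layer widths, input size (64x64 RGB), and class count (10) are assumed.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Identity shortcut keeps gradients flowing as the network deepens.
        return torch.relu(self.body(x) + x)

class GestureNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.res = ResidualBlock(32)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.head(self.res(self.stem(x)))  # logits; softmax is applied in the loss

if __name__ == "__main__":
    logits = GestureNet()(torch.randn(2, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 10])
```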

HMM-based Speech Recognition using DMS Model and Double Spectral Feature (DMS 모델과 이중 스펙트럼 특징을 이용한 HMM에 의한 음성 인식)

  • Ann Tae-Ock
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.4 / pp.649-655 / 2006
  • This paper proposes an HMM-based recognition method that uses a DMSVQ (Dynamic Multi-Section Vector Quantization) codebook built from the DMS model and a double spectral feature, as a method for speaker-independent speech recognition. LPC cepstrum parameters are used as the instantaneous spectral feature, and the regression coefficients of the LPC cepstrum are used as the dynamic spectral feature. These two spectral features are quantized into separate VQ codebooks. The HMM based on the DMS model is trained with the instantaneous and dynamic spectral features as input. For comparison, recognition experiments with various conventional methods were carried out under equivalent data and conditions. The experimental results show that the proposed method is superior to the conventional recognition methods.

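The two spectral features referred to in the entry above, an instantaneous LPC cepstrum and its regression (delta) coefficients, can be illustrated roughly as below. This is an assumed sketch that derives the cepstrum from the LPC spectral envelope via the FFT (one simple convention) using librosa; the orders, frame sizes, and synthetic signal are arbitrary and not the paper's implementation.

```python
# Hypothetical sketch: instantaneous LPC-cepstrum features and their regression
# (delta) coefficients, i.e. the "double spectral feature" idea, using librosa.
# Orders, frame sizes, and the synthetic signal are illustrative only.
import numpy as np
import librosa

def lpc_cepstrum(frame, order=12, n_ceps=12):
    """LPC-smoothed log spectrum -> real cepstrum (one simple convention)."""
    a = librosa.lpc(frame, order=order)                      # denominator polynomial, a[0] = 1
    spectrum = 1.0 / (np.abs(np.fft.rfft(a, n=512)) + 1e-10)  # LPC spectral envelope |1/A(f)|
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-10))
    return cepstrum[1:n_ceps + 1]

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(0)
    # Stand-in for a short speech utterance: a tone plus a little noise.
    y = librosa.tone(220.0, sr=sr, duration=0.5) + 0.001 * rng.standard_normal(sr // 2)
    frames = librosa.util.frame(y, frame_length=400, hop_length=160)  # (400, n_frames)
    ceps = np.stack([lpc_cepstrum(frames[:, i]) for i in range(frames.shape[1])], axis=1)
    deltas = librosa.feature.delta(ceps)   # dynamic (regression) coefficients per frame
    print(ceps.shape, deltas.shape)        # e.g. (12, n_frames) for both feature streams
```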

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns the features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. In particular, various processing methods are required to improve the face recognition rate. In the training stage, features must be extracted from a facial image, and linear discriminant analysis (LDA) is the method mainly used for this. LDA represents a facial image as a point in a high-dimensional space and extracts facial features that distinguish a person by analyzing the class information and the distribution of points. Since the position of a point is determined by the pixel values of the facial image, if unnecessary areas or frequently changing areas are included in the image, LDA may extract incorrect facial features. In particular, when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, deteriorating the recognition rate. To solve this problem, this paper detected the facial area using a camera, removed unnecessary areas using the facial feature area calculated via a Gabor filter, and normalized the size of the facial area. Facial features were extracted through LDA from the normalized facial image and learned by an artificial neural network for face recognition. As a result, the face recognition rate was improved by approximately 13% compared to the existing face recognition method that includes unnecessary areas.
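
A rough sketch of the pipeline described in the entry above (a Gabor response to locate the feature-rich band, size normalization, then LDA) is given below. It is an assumed illustration with OpenCV and scikit-learn using random stand-in data rather than real face images; all sizes and filter parameters are arbitrary.

```python
# Hypothetical sketch: keep the face band with strong Gabor response, normalize
# its size, then extract LDA features for recognition. Data are random stand-ins
# for real face images; sizes and parameters are arbitrary.
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def normalized_feature_area(face_gray, out_size=(32, 32)):
    """Crop rows with above-average Gabor response (eyes/nose/mouth band) and resize."""
    kernel = cv2.getGaborKernel((15, 15), 3.0, 0.0, 8.0, 0.5)
    response = np.abs(cv2.filter2D(face_gray.astype(np.float32), -1, kernel))
    rows = np.where(response.mean(axis=1) > response.mean())[0]
    cropped = face_gray[rows.min():rows.max() + 1, :] if len(rows) else face_gray
    return cv2.resize(cropped, out_size).flatten()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 20 fake "face images" (64x64) for 4 identities, 5 images each.
    faces = rng.integers(0, 256, size=(20, 64, 64)).astype(np.uint8)
    labels = np.repeat(np.arange(4), 5)
    X = np.stack([normalized_feature_area(f) for f in faces])
    lda = LinearDiscriminantAnalysis(n_components=3).fit(X, labels)
    print(lda.transform(X).shape)   # (20, 3): low-dimensional discriminant features
```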

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we proposed the Bi-Modal Sensor Fusion Algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features without linguistic features and classify the emotional pattern with a neural network. We also select features of the mouth, eyes, and eyebrows from the facial image, and the extracted feature vectors are reduced to a low-dimensional feature vector by Principal Component Analysis (PCA). We then propose a method that fuses the facial-image and speech results into a single emotion recognition result.
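
A minimal illustration of the fusion idea above is sketched below: PCA reduces the facial feature vectors, a small classifier scores each modality, and the two posteriors are combined by a weighted average. It is an assumed example with scikit-learn and random stand-in data; the weights, feature dimensions, and classifiers are not the authors' system.

```python
# Hypothetical sketch: bi-modal emotion recognition by fusing per-modality class
# probabilities (facial features reduced by PCA + acoustic features).
# All data here are random stand-ins; weights and sizes are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happy", "sad", "angry", "surprise"]
rng = np.random.default_rng(0)

# Stand-in training data: 80 samples of facial geometry (mouth/eyes/eyebrows)
# and acoustic feature vectors, each with one of four emotion labels.
y = rng.integers(0, 4, size=80)
face_feats = rng.standard_normal((80, 30))
speech_feats = rng.standard_normal((80, 20))

pca = PCA(n_components=8).fit(face_feats)   # low-dimensional facial features
face_clf = MLPClassifier(max_iter=500, random_state=0).fit(pca.transform(face_feats), y)
speech_clf = MLPClassifier(max_iter=500, random_state=0).fit(speech_feats, y)

def fuse(face_vec, speech_vec, w_face=0.5):
    """Weighted average of the two modality posteriors -> fused emotion decision."""
    p_face = face_clf.predict_proba(pca.transform(face_vec.reshape(1, -1)))[0]
    p_speech = speech_clf.predict_proba(speech_vec.reshape(1, -1))[0]
    fused = w_face * p_face + (1.0 - w_face) * p_speech
    return EMOTIONS[int(np.argmax(fused))]

print(fuse(rng.standard_normal(30), rng.standard_normal(20)))
```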

Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering / v.12 no.4 / pp.263-270 / 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognising an object. In this paper we analyze efficient object detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared to the conventional SIFT algorithm, the object recognition system based on the FAST corner detector yields increased speed with low performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints, while the average time to find corner points with the FAST method at threshold 30 is 0.651 seconds for detecting 1714 keypoints. Thus the FAST method detects corner points faster with better-quality images for object recognition.
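
The comparison above can be reproduced in outline with OpenCV's built-in detectors, as in the assumed sketch below; the image is synthetic, the threshold of 30 mirrors the entry above, and actual keypoint counts and timings will depend on the machine and the image used.

```python
# Hypothetical sketch: time SIFT keypoint detection against the FAST corner
# detector with OpenCV. The test image is synthetic; threshold 30 is assumed.
import time
import numpy as np
import cv2

# Synthetic test image (random texture) as a stand-in for a real photo.
img = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(np.uint8)

sift = cv2.SIFT_create()
fast = cv2.FastFeatureDetector_create(threshold=30)

t0 = time.perf_counter()
sift_kps = sift.detect(img, None)
t1 = time.perf_counter()
fast_kps = fast.detect(img, None)
t2 = time.perf_counter()

print(f"SIFT: {len(sift_kps)} keypoints in {t1 - t0:.3f} s")
print(f"FAST: {len(fast_kps)} corners  in {t2 - t1:.3f} s")
```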