• Title/Summary/Keyword: Feature vectors

814 search results

Neural Network Recognition of Scanning Electron Microscope Image for Plasma Diagnosis (플라즈마 진단을 위한 Scanning Electron Microscope Image의 신경망 인식 모델)

  • Ko, Woo-Ram;Kim, Byung-Whan
    • Proceedings of the KIEE Conference / 2006.04a / pp.132-134 / 2006
  • To improve equipment throughput and device yield, malfunctions in plasma equipment should be accurately diagnosed. A recognition model for plasma diagnosis was constructed by applying a neural network to scanning electron microscope (SEM) images of plasma-etched patterns. The experimental data were collected from plasma etching of tungsten thin films. Faults in the plasma were generated by simulating variations in process parameters. Feature vectors were obtained by applying direct and wavelet techniques to the SEM images. The wavelet technique generated three feature vectors composed of detailed components. The constructed diagnosis models were evaluated in terms of recognition accuracy. The direct technique yielded much lower recognition accuracy than the wavelet technique; the improvement was about 82%. This demonstrates that the wavelet method is more effective in constructing a neural network model of SEM profile information.
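The wavelet feature extraction described above can be sketched with a one-level 2D Haar transform, whose three detail sub-bands yield a small feature vector per image. This is a minimal illustration of the idea, not the authors' exact decomposition:

```python
import numpy as np

def haar_detail_features(image):
    """One-level 2D Haar transform of an image with even dimensions;
    returns the mean absolute value of the three detail sub-bands
    (horizontal, vertical, diagonal) as a 3-D feature vector."""
    img = np.asarray(image, dtype=float)
    a = img[0::2, 0::2]   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.array([np.abs(lh).mean(), np.abs(hl).mean(), np.abs(hh).mean()])
```

A uniform etch profile produces near-zero detail energy while edge-rich SEM patterns do not, which is what makes such vectors usable as fault signatures.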


Used Bank Note Classification System (오손 지페 분류 시스템)

  • 이준재;도경훈
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.9 / pp.73-80 / 1998
  • In this paper, a used bank note classification system for banking facilities is presented. The proposed system first models the process by which a note degrades from new to used and places sensing devices to capture its characteristics. Second, it extracts four feature vectors from the sensing data, applies principal component analysis, and projects the feature vectors onto the eigenvector corresponding to the largest eigenvalue. A note is classified as new or used by a user-set threshold. The experimental results show that the proposed system processes eight notes per second with a classification rate of 96%.
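The projection-and-threshold step can be sketched as follows; `fit_principal_axis` and `classify_note` are hypothetical names, and the 4-D sensor vectors stand in for whatever the sensing devices actually produce:

```python
import numpy as np

def fit_principal_axis(X):
    """Fit PCA on training feature vectors (rows of X) and return the
    sample mean and the eigenvector with the largest eigenvalue."""
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return mean, vecs[:, -1]

def classify_note(x, mean, axis, threshold):
    """Project one feature vector onto the principal axis and compare
    the score against a user-set threshold."""
    score = float((x - mean) @ axis)
    return "new" if score >= threshold else "used"
```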


Speaker Identification Using PCA Fuzzy Mixture Model (PCA 퍼지 혼합 모델을 이용한 화자 식별)

  • Lee, Ki-Yong
    • Speech Sciences / v.10 no.4 / pp.149-157 / 2003
  • In this paper, we propose a principal component analysis (PCA) fuzzy mixture model for speaker identification. The PCA fuzzy mixture model is derived by combining PCA with a fuzzy version of the mixture model with diagonal covariance matrices. In this method, the feature vectors are first transformed by each speaker's PCA transformation matrix to reduce the correlation among their elements. Then, the fuzzy mixture model for each speaker is estimated from these transformed, dimension-reduced feature vectors. The orthogonal Gaussian mixture model (GMM) can be derived as a special case of the PCA fuzzy mixture model. In our experiments, with an equal number of mixtures, the proposed method requires less training time and less storage and achieves a better speaker identification rate than the conventional GMM. It also shows identification performance equal to or better than that of the orthogonal GMM.
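The decorrelation idea can be illustrated in a few lines: transform the vectors by the speaker's own PCA matrix, after which a diagonal-covariance model fits well. This sketches only the PCA step and a diagonal Gaussian score, not the fuzzy mixture estimation itself:

```python
import numpy as np

def pca_transform(X):
    """Decorrelate feature vectors (rows of X) with the speaker's own
    PCA matrix; the transformed sample covariance becomes diagonal."""
    mean = X.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return (X - mean) @ vecs

def diag_gauss_loglik(X, mean, var):
    """Per-frame log-likelihood under a diagonal-covariance Gaussian,
    the component density used by the mixture model."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)
```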


Image Forgery Detection Using Gabor Filter (가보 필터를 이용한 이미지 위조 검출 기법)

  • NININAHAZWE, Sheilha;Rhee, Kyung-Hyune
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.520-522 / 2014
  • Due to the availability of easy-to-use and powerful image editing tools, the authenticity of digital images can no longer be taken for granted; because not all imaging devices embed watermarks, this gives rise to the non-intrusive forgery detection problem. In this paper, an effective framework for passive-blind copy-move image forgery detection is proposed, based on the Gabor filter, which is robust to illumination, rotation, and scale changes. For detection, the suspicious image is selected and Gabor wavelets are applied over the whole scale and direction space. The mean and standard deviation are extracted as texture features and assembled into feature vectors. Finally, a distance between two texture feature vectors is calculated to determine whether a forgery has occurred, and the decision is made based on that result.
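A minimal version of the Gabor texture features might look like the following; the kernel size, frequencies, and orientations are illustrative choices, not the paper's settings:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Real-valued Gabor kernel: a cosine at angle theta and spatial
    frequency freq under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def gabor_features(image, freqs=(0.1, 0.2),
                   thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image with a small Gabor bank and keep the mean and
    standard deviation of each response as the texture feature vector."""
    feats = []
    for f in freqs:
        for t in thetas:
            k = gabor_kernel(f, t)
            win = sliding_window_view(image, k.shape)
            resp = np.einsum('ijkl,kl->ij', win, k)   # valid-mode filtering
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

def forgery_distance(f1, f2):
    """Euclidean distance between two texture feature vectors."""
    return float(np.linalg.norm(f1 - f2))
```

A copied region and its source would yield a near-zero distance, while unrelated regions would not.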

Blur-Invariant Feature Descriptor Using Multidirectional Integral Projection

  • Lee, Man Hee;Park, In Kyu
    • ETRI Journal / v.38 no.3 / pp.502-509 / 2016
  • Feature detection and description are key ingredients of common image processing and computer vision applications. Most existing algorithms focus on robust feature matching under challenging conditions such as in-plane rotations and scale changes. Consequently, they usually fail when the scene is blurred by camera shake or an object's motion. To solve this problem, we propose a new feature description algorithm that is robust to image blur and significantly improves feature matching performance. The proposed algorithm builds a feature descriptor by considering the integral projection along four angular directions (0°, 45°, 90°, and 135°) and by combining the four projection vectors into a single high-dimensional vector. Intensive experiments show that the proposed descriptor outperforms existing descriptors for different types of blur caused by linear motion, nonlinear motion, and defocus. Furthermore, the proposed descriptor is robust to intensity changes and image rotation.
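The four-direction integral projection can be sketched for a square patch as below; the unit-norm step is a simplification of the descriptor-building details:

```python
import numpy as np

def integral_projections(patch):
    """Concatenate integral projections of a square patch along the
    0°, 90°, 45°, and 135° directions into one descriptor."""
    p = np.asarray(patch, dtype=float)
    n = p.shape[0]
    proj_0 = p.sum(axis=1)                                           # along rows
    proj_90 = p.sum(axis=0)                                          # along columns
    proj_45 = np.array([np.fliplr(p).diagonal(k).sum() for k in range(-n + 1, n)])
    proj_135 = np.array([p.diagonal(k).sum() for k in range(-n + 1, n)])
    d = np.concatenate([proj_0, proj_90, proj_45, proj_135])
    return d / np.linalg.norm(d)   # unit norm, for robustness to intensity scale
```

Because blur mostly redistributes intensity along one direction, projections taken along several directions retain discriminative structure, which is the intuition behind the descriptor.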

Feature Extraction by Optimizing the Cepstral Resolution of Frequency Sub-bands (주파수 부대역의 켑스트럼 해상도 최적화에 의한 특징추출)

  • 지상문;조훈영;오영환
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.35-41 / 2003
  • Feature vectors for conventional speech recognition are usually extracted over the full frequency band, so each sub-band contributes equally to the final recognition result. In this paper, feature vectors are extracted independently in each sub-band, and the cepstral resolution of each sub-band feature is controlled for optimal speech recognition. For this purpose, sub-band cepstral vectors of different dimensions are extracted based on the multi-band approach, which extracts a feature vector independently for each sub-band. Speech recognition rate and clustering quality are suggested as criteria for finding the optimal combination of sub-band feature dimensions. In connected-digit recognition experiments on the TIDIGITS database, the proposed method gave 99.125% string accuracy, 99.775% percent correct, and 99.705% percent accuracy, corresponding to relative error rate reductions of 38%, 32%, and 37% over the baseline full-band feature vector, respectively.
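The idea of tuning each sub-band's cepstral resolution independently can be sketched by applying a DCT of a different order to each band of a log-spectrum; the band edges and orders below are arbitrary examples, not the paper's optimized combination:

```python
import numpy as np

def dct_basis(order, length):
    """First `order` rows of a type-II DCT basis (unnormalized)."""
    k = np.arange(order)[:, None]
    n = np.arange(length)[None, :]
    return np.cos(np.pi * k * (2 * n + 1) / (2 * length))

def subband_cepstra(log_spectrum, band_edges, orders):
    """Split a log-spectrum into sub-bands and take orders[i] cepstral
    coefficients from band i, concatenating everything into one vector."""
    feats = []
    for (lo, hi), order in zip(band_edges, orders):
        band = np.asarray(log_spectrum[lo:hi], dtype=float)
        feats.append(dct_basis(order, len(band)) @ band)
    return np.concatenate(feats)
```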

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vectors at each landmark, based on the centre pixels of the motion vector window. The facial emotion features are modelled by combining the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic technique such as a Bayesian classifier. We also extract the optimal emotional features, those showing high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
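The LK step at a single landmark reduces to a small least-squares problem on image gradients; the following sketch (window size and border handling simplified) illustrates it:

```python
import numpy as np

def lk_flow_at(prev, curr, y, x, half=2):
    """Lucas-Kanade optical flow vector at one landmark (y, x): solve the
    least-squares system built from spatial and temporal gradients inside
    a (2*half+1)^2 window centred on the landmark."""
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    Iy, Ix = np.gradient(prev)            # spatial gradients
    It = curr - prev                      # temporal gradient
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                              # (vx, vy)
```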

A Vehicle License Plate Recognition Using the Haar-like Feature and CLNF Algorithm (Haar-like Feature 및 CLNF 알고리즘을 이용한 차량 번호판 인식)

  • Park, SeungHyun;Cho, Seongwon
    • Smart Media Journal / v.5 no.1 / pp.15-23 / 2016
  • This paper proposes an effective algorithm for Korean license plate recognition. By applying Haar-like features and Canny edge detection to a captured vehicle image, a connected rectangular region that is a strong candidate for the license plate can be found. Color information separates plates into white and green types. Then, Otsu binarization and the foreground neighbor pixel propagation algorithm CLNF are applied to each plate to reduce noise other than the numbers and letters. Finally, through labeling, the numbers and letters are extracted from the plate. The letter and number regions separated from the plate pass through a mesh method and a thinning process to extract feature vectors by the X-Y projection method. The extracted feature vectors are classified using neural networks trained by the backpropagation algorithm to perform the final recognition. The experimental results show that the proposed license plate recognition algorithm works effectively.
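The mesh + X-Y projection feature extraction can be sketched as follows; the 4×4 mesh and the normalizations are illustrative, not the paper's exact configuration:

```python
import numpy as np

def xy_projection_features(char_img, mesh=(4, 4)):
    """Feature vector for a binary character image: normalized column
    (X) and row (Y) projections plus per-cell pixel densities of a mesh."""
    img = np.asarray(char_img, dtype=float)
    h, w = img.shape
    x_proj = img.sum(axis=0) / h           # column profile
    y_proj = img.sum(axis=1) / w           # row profile
    gy, gx = mesh
    cells = [img[i * h // gy:(i + 1) * h // gy,
                 j * w // gx:(j + 1) * w // gx].mean()
             for i in range(gy) for j in range(gx)]
    return np.concatenate([x_proj, y_proj, np.array(cells)])
```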

Effective Combination of Temporal Information and Linear Transformation of Feature Vector in Speaker Verification (화자확인에서 특징벡터의 순시 정보와 선형 변환의 효과적인 적용)

  • Seo, Chang-Woo;Zhao, Mei-Hua;Lim, Young-Hwan;Jeon, Sung-Chae
    • Phonetics and Speech Sciences / v.1 no.4 / pp.127-132 / 2009
  • The feature vectors used in conventional speaker recognition (SR) systems are often correlated with their neighbors. To improve SR performance, many researchers have adopted linear transformation methods such as principal component analysis (PCA). In general, the linear transformation of the feature vectors is applied to the concatenation of the static features and their dynamic features. However, a linear transformation based on both static and dynamic features is more complex than one based on the static features alone, owing to the higher feature order. To overcome this problem, we propose an efficient method that applies the linear transformation and the temporal information of the features so as to reduce complexity and improve performance in speaker verification (SV). The proposed method first performs a linear transformation with PCA coefficients; the delta parameters carrying temporal information are then obtained from the transformed features. The proposed method requires a covariance matrix only 1/4 the size of that needed when the static and dynamic features are concatenated for PCA, and the delta parameters are extracted from the linearly transformed features after the dimensionality of the static features has been reduced. Compared with the PCA and conventional methods in terms of equal error rate (EER) in SV, the proposed method shows better performance while requiring less storage space and complexity.
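The ordering proposed above (PCA on the static features first, deltas computed from the reduced features) can be sketched as:

```python
import numpy as np

def pca_then_delta(frames, n_components):
    """Transform static feature frames by PCA, keep the top components,
    then append delta (temporal difference) parameters computed from the
    already-transformed features. The covariance matrix used here has the
    static dimension d, not 2d as with concatenated static+delta features,
    hence the factor-of-4 storage saving mentioned in the abstract."""
    mean = frames.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(frames, rowvar=False))
    W = vecs[:, ::-1][:, :n_components]              # top components first
    static = (frames - mean) @ W
    delta = np.empty_like(static)
    delta[1:-1] = (static[2:] - static[:-2]) / 2.0   # central difference
    delta[0], delta[-1] = delta[1], delta[-2]        # replicate at edges
    return np.concatenate([static, delta], axis=1)
```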


Face Recognition System Based on the Embedded LINUX (임베디드 리눅스 기반의 눈 영역 비교법을 이용한 얼굴인식)

  • Bae, Eun-Dae;Kim, Seok-Min;Nam, Boo-Hee
    • Proceedings of the KIEE Conference / 2006.04a / pp.120-121 / 2006
  • In this paper, we design a face recognition system based on embedded Linux, aiming at more accurate face recognition on an embedded platform. First, the contrast of the face image is adjusted with a lighting compensation method, and the skin and lip colors are found from the YCbCr values of the compensated image. To take advantage of both feature-based and appearance-based methods, these methods are applied to the eyes, the facial region with the highest recognition rate. For eye detection, the most important component of face recognition, we calculate the horizontal gradient of the face image and its maximum value. This part of the face is resized to fit the stored eye image used for comparison. Feature vectors are then extracted from the resized image using the continuous wavelet transform, and a PNN decides whether the two images show the same person. To minimize the error rate, the accuracy is analyzed under rotation and movement of the face. In the last part of this paper, we present many cases to validate the algorithm, covering feature vector extraction and the accuracy of the comparison method.
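The PNN same-person decision can be sketched as a Parzen-window classifier over the wavelet feature vectors; the kernel width `sigma` is a hypothetical tuning parameter:

```python
import numpy as np

def pnn_classify(x, train_vectors, train_labels, sigma=1.0):
    """Probabilistic neural network decision: average a Gaussian kernel
    over each class's training vectors and return the class with the
    largest summed activation."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for label in set(train_labels):
        pts = np.array([v for v, l in zip(train_vectors, train_labels) if l == label])
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[label] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```

For face verification this reduces to a two-class case: "same person" versus "different person".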
