• Title/Summary/Keyword: Gaussian-Like

Search Result 136

Speaker Identification in Small Training Data Environment using MLLR Adaptation Method (MLLR 화자적응 기법을 이용한 적은 학습자료 환경의 화자식별)

  • Kim, Se-hyun;Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2005.11a / pp.159-162 / 2005
  • Speaker identification is the process of automatically determining who is speaking on the basis of information obtained from speech waves. In the training phase, each speaker model is trained on that speaker's speech data. GMMs (Gaussian Mixture Models), which have been successfully applied to speaker modeling in text-independent speaker identification, are not efficient when training data are insufficient. This paper proposes a speaker modeling method based on MLLR (Maximum Likelihood Linear Regression), a technique used for speaker adaptation in speech recognition. Instead of training a speaker-dependent (SD) model directly, we build an SD-like model using MLLR adaptation. The proposed system outperforms GMMs in small training data environments.

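The GMM baseline the abstract compares against can be pictured as a log-likelihood contest between per-speaker models. The diagonal-covariance scorer below is a minimal NumPy sketch of that idea, not the authors' system; the MLLR step (an affine transform of a speaker-independent model's means) is omitted, and all names are illustrative.

```python
import numpy as np

def diag_gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of frames x (N, D) under a diagonal-covariance GMM."""
    x = np.atleast_2d(x)
    # per-frame, per-component squared Mahalanobis terms, shape (N, K, D)
    diff2 = (x[:, None, :] - means[None, :, :]) ** 2 / variances[None, :, :]
    log_norm = -0.5 * np.log(2 * np.pi * variances).sum(axis=1)      # (K,)
    comp = log_norm[None, :] - 0.5 * diff2.sum(axis=2) + np.log(weights)[None, :]
    # log-sum-exp over components, then sum over frames
    m = comp.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).sum())

def identify(frames, speaker_models):
    """Pick the speaker whose GMM gives the highest total log-likelihood."""
    scores = {spk: diag_gmm_loglik(frames, *params)
              for spk, params in speaker_models.items()}
    return max(scores, key=scores.get)
```

With so few parameters per speaker, the insufficient-data problem the paper targets shows up as poorly estimated means and variances; MLLR instead shares statistics through a global transform.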

Adaptive Iterative Depeckling of SAR Imagery

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.455-464 / 2007
  • Lee (2007) suggested Point-Jacobian iteration MAP estimation (PJIMAP) for removing multiplicative speckle noise from images. It finds a MAP estimate of noise-free imagery based on a Bayesian model using a lognormal distribution for image intensity and an MRF for image texture. When the image intensity is logarithmically transformed, the speckle noise becomes approximately additive Gaussian noise, and it tends to a normal distribution much faster than the intensity distribution. The MRF is incorporated into digital image analysis by viewing pixel types as states of molecules in a lattice-like physical system. In this study, the MAP estimate is computed by Point-Jacobian iteration using adaptive parameters: at each iteration, the parameters of the Bayesian model are re-estimated from the updated information. The results of the proposed scheme were compared to those of PJIMAP on SAR simulation data generated by the Monte Carlo method. The experiments demonstrated an improvement in suppressing speckle noise and estimating noise-free intensity when adaptive parameters are used in the Point-Jacobian iteration.
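The observation the method rests on, that a log transform turns multiplicative speckle into approximately additive Gaussian noise, can be illustrated with a toy Jacobi-style relaxation in the log domain. This is not the adaptive PJIMAP of the paper; the 4-neighbour smoothness prior and the `beta` weight are illustrative stand-ins for the full MRF model.

```python
import numpy as np

def log_domain_smooth(intensity, beta=0.5, n_iter=10):
    """Toy Jacobi relaxation in the log domain, where multiplicative
    speckle becomes approximately additive Gaussian noise."""
    z = np.log(intensity)
    for _ in range(n_iter):
        # 4-neighbour mean (edge-padded) as a crude MRF smoothness prior
        p = np.pad(z, 1, mode="edge")
        nbr = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                      p[1:-1, :-2] + p[1:-1, 2:])
        # Jacobi update: pull each log-intensity toward its neighbourhood mean
        z = (1 - beta) * z + beta * nbr
    return np.exp(z)
```

On a speckled constant patch this suppresses the noise variance substantially; the paper's contribution is re-estimating the model parameters adaptively at each such iteration.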

Optical properties of diamond-like carbon films deposited by ECR-PECVD method (ECR-PECVD 방법으로 증착한 Diamond-Like carbon 박막의 광 특성)

  • Kim, Dae-Nyoun;Kim, Ki-Hong;Kim, Hye-Dong
    • Journal of Korean Ophthalmic Optics Society / v.9 no.2 / pp.291-299 / 2004
  • DLC films were deposited by the ECR-PECVD method under fixed deposition conditions (ECR power, methane and hydrogen gas-flow rates, and deposition time) at various substrate bias voltages. We investigated the ion bombardment effect induced by the substrate bias voltage on the films during deposition. The films were characterized by FTIR, Raman, and UV/Vis spectroscopy. FTIR analysis shows that dehydrogenation in the films increased and film thickness decreased as the substrate bias voltage increased. Raman scattering analysis shows that the integrated intensity ratio (ID/IG) of the D and G peaks increased with substrate bias voltage, as did film hardness. The optical transmittance of the DLC films decreased with increasing deposition time and substrate bias voltage. From these results, we conclude that the films deposited in this experiment show enhanced DLC characteristics because of the ion bombardment effect during deposition.

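The integrated intensity ratio ID/IG used in the Raman analysis can be computed, for a synthetic spectrum, by integrating over fixed D- and G-band windows. The band limits below are conventional ballpark values, not the paper's; a real analysis would fit the overlapping peaks before integrating.

```python
import numpy as np

def gaussian(x, amp, center, width):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def integrated_intensity_ratio(shift, spectrum,
                               d_band=(1250, 1450), g_band=(1500, 1650)):
    """Integrated D/G intensity ratio via trapezoidal integration over
    fixed Raman-shift windows (band limits are illustrative)."""
    d_mask = (shift >= d_band[0]) & (shift <= d_band[1])
    g_mask = (shift >= g_band[0]) & (shift <= g_band[1])
    i_d = np.trapz(spectrum[d_mask], shift[d_mask])
    i_g = np.trapz(spectrum[g_mask], shift[g_mask])
    return i_d / i_g
```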

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min;Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.5 s.305 / pp.19-26 / 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring the relative blur between images using wavelet analysis is described. Most previous methods use inverse filtering to determine the measure of defocus. These methods suffer from fundamental problems such as inaccuracies in the frequency domain representation, windowing effects, and border effects. Moreover, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot yield accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that performs both local analysis and windowing with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized image ratio of wavelet power, by Parseval's theorem, is closely related to the blur parameter and depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
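The link between defocus blur and wavelet power can be illustrated with a one-level Haar transform: the more a signal is blurred, the smaller the fraction of its energy in the fine-scale detail coefficients. This is a toy 1-D sketch of the principle, not the paper's variable-window method; the function names are illustrative.

```python
import numpy as np

def haar_detail_energy_ratio(signal):
    """Fraction of signal energy in first-scale Haar detail coefficients.
    Sharper (less defocused) signals keep more energy at fine scales."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    total = (approx ** 2).sum() + (detail ** 2).sum()
    return float((detail ** 2).sum() / total)

def gaussian_blur_1d(signal, sigma):
    """Defocus stand-in: 'same' convolution with a truncated Gaussian."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")
```

Comparing the ratio between two differently focused images of the same texture gives a relative blur measure from which depth can be inferred.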

A Study on Roughness Measurement of Polished Surfaces Using Reflected Laser Beam Image (레이저빔 반사 화상을 이용한 연마면 거칠기 측정법에 관한 연구)

  • Shen, Yun-Feng;Lim, Han-Seok;Kim, Hwa-Young;Ahn, Jung-Hwan
    • Journal of the Korean Society for Precision Engineering / v.16 no.2 s.95 / pp.145-152 / 1999
  • This paper presents the principle and experimental results of non-contact surface roughness measurement by means of the screen-projected pattern of a laser beam reflected from a polished surface. In the reflected pattern from a fine surface such as a ground or polished one, light intensity varies from the center of the image to its boundary following a Gaussian distribution. The standard deviation of the light intensity distribution is proposed as a non-contact estimator of surface roughness, because light reflectivity is known to be well correlated with roughness. Unlike the scattered laser intensity method, this method does not need to discriminate between specularly and diffusely reflected light, nor does it need to compensate for changes in light intensity caused by environmental light or specimen materials. The reflected pattern spreads out narrowly in the direction perpendicular to the tiny scratches left on the polished surface by abrasives. The deeper the scratches, the larger the dispersion, which indicates a rougher surface. The standard deviation of the pattern is nearly proportional to the surface roughness. Measurement errors of this method are below 10 percent compared with those of a common contact method. Inclining the measuring unit from the normal axis causes measurement errors of up to 10 percent at an angle of 4 degrees. The proposed method can therefore be used as an on-the-machine quick roughness estimator within 10 percent measurement error.

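The roughness estimator described above, the standard deviation of the projected intensity pattern, can be sketched as an intensity-weighted second moment of the beam image. This is an illustrative reading of the method with a hypothetical function name, not the authors' implementation.

```python
import numpy as np

def intensity_spread(image):
    """Intensity-weighted standard deviation of a reflected-beam image.
    A rougher surface scatters the beam more, so the pattern spreads
    and this value grows roughly in proportion to the roughness."""
    img = np.asarray(image, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img / img.sum()                       # normalize intensity to weights
    cy, cx = (w * ys).sum(), (w * xs).sum()   # intensity centroid
    var = (w * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()
    return float(np.sqrt(var))
```

A narrow specular spot (smooth surface) yields a small spread; a widened, scattered pattern (rough surface) yields a larger one, without any specular/diffuse separation.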

Iterative Reduction of Blocking Artifact in Block Transform-Coded Images Using Wavelet Transform (웨이브렛 변환을 이용한 블록기반 변환 부호화 영상에서의 반복적 블록화 현상 제거)

  • 장익훈;김남철
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.12B / pp.2369-2381 / 1999
  • In this paper, we propose an iterative algorithm for reducing the blocking artifact in block transform-coded images using the wavelet transform. In the proposed method, an image is treated as a set of one-dimensional horizontal and vertical signals, and a one-dimensional wavelet transform is applied whose mother wavelet is the first derivative of a Gaussian-like function. The blocking artifact is reduced by removing the blocking component, which causes the variance at block boundary positions in the first-scale wavelet domain to be abnormally higher than at other positions, using a minimum mean square error (MMSE) filter in the wavelet domain. This filter minimizes the MSE between the ideal blocking-component-free signal and the restored signal in the neighborhood of block boundaries, and uses local variance in the wavelet domain for pixel-adaptive processing. The filtering and the projection onto a convex set of quantization constraints are performed iteratively in alternating fashion. Experimental results show that the proposed method yields not only a PSNR improvement of about 0.56-1.07 dB, but also subjective quality nearly free of blocking artifacts and edge blur.


Digital Watermarking using the Channel Coding Technique (채널 코딩 기법을 이용한 디지털 워터마킹)

  • Bae, Chang-Seok;Choi, Jae-Hoon;Seo, Dong-Wan;Choe, Yoon-Sik
    • The Transactions of the Korea Information Processing Society / v.7 no.10 / pp.3290-3299 / 2000
  • Digital watermarking shares concepts with channel coding, which transfers data with minimal error in a noisy environment, since a watermark should be robust to various kinds of data manipulation in order to protect the copyright of multimedia data. This paper proposes a digital watermarking technique that is robust to various kinds of data manipulation. Intellectual property rights information is encoded using a convolutional code, and a block-interleaving technique is applied to prevent successive loss of the encoded data. The encoded intellectual property rights information is embedded using a spread spectrum technique, which is robust to data manipulation. To reconstruct the information, the watermark signal is detected via the covariance between the watermarked image and the pseudo-random noise sequence used to embed the watermark. The embedded intellectual property rights information is then obtained by de-interleaving and decoding the detected watermark signal. Experimental results show that, at the same PSNR, the block-interleaved watermarking technique detects embedded intellectual property rights information more reliably under attacks such as Gaussian noise addition, filtering, and JPEG compression than the plain spread spectrum technique.

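The spread-spectrum layer of the scheme (without the convolutional coding and interleaving stages) can be sketched as PN-pattern modulation on embedding and correlation on detection. The function names and the strength parameter `alpha` are illustrative, not from the paper.

```python
import numpy as np

def embed_bits(image, bits, alpha=2.0, seed=42):
    """Spread-spectrum embedding: each bit modulates its own pseudo-random
    +/-1 pattern, which is added to the image scaled by alpha."""
    rng = np.random.default_rng(seed)
    patterns = rng.choice([-1.0, 1.0], size=(len(bits), image.size))
    symbols = 2 * np.asarray(bits) - 1            # map {0,1} -> {-1,+1}
    wm = (symbols[:, None] * patterns).sum(axis=0)
    return image + alpha * wm.reshape(image.shape), patterns

def detect_bits(watermarked, patterns):
    """Detect each bit from the sign of the correlation with its PN pattern."""
    flat = watermarked.ravel() - watermarked.mean()
    corr = patterns @ flat / patterns.shape[1]
    return (corr > 0).astype(int)
```

In the paper's scheme the detected bit stream would then be de-interleaved and Viterbi-decoded, which is what buys the extra robustness over this plain version.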

GLOBAL Hɪ PROPERTIES OF GALAXIES VIA SUPER-PROFILE ANALYSIS

  • Kim, Minsu;Oh, Se-Heon
    • Journal of The Korean Astronomical Society / v.55 no.5 / pp.149-172 / 2022
  • We present a new method for constructing the Hɪ super-profile of a galaxy based on profile decomposition analysis. The decomposed velocity profiles of an Hɪ data cube, modeled with an optimal number of Gaussian components, are co-added after being aligned in velocity with respect to their centroid velocities. This is compared to the previous approach, in which no prior profile decomposition is made for the velocity profiles being stacked. The S/N-improved super-profile is useful for deriving a galaxy's global Hɪ properties, such as velocity dispersion and mass, from observations that do not provide sufficient surface brightness sensitivity for the galaxy. As a practical test, we apply the new method to 64 high-resolution Hɪ data cubes of nearby galaxies in the local Universe taken from THINGS and LITTLE THINGS. In addition, we construct two additional Hɪ super-profiles of the sample galaxies using symmetric and all velocity profiles of the cubes, respectively, with centroid velocities determined from Hermite h3 polynomial fitting. We find that the Hɪ super-profiles constructed with the new method have narrower cores and broader wings than the other two super-profiles. This is mainly due to either the central velocity bias of asymmetric velocity profiles or the removal of asymmetric velocity profiles in the previous methods. We discuss how the shapes (𝜎n/𝜎b, An/Ab, and An/Atot) of the new Hɪ super-profiles, measured from double Gaussian fits, correlate with the star formation rates of the sample galaxies, compared with those of the other two super-profiles.
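The stacking step, aligning velocity profiles on their centroid velocities before co-adding, can be sketched as follows. This toy version uses intensity-weighted centroids and linear resampling; the paper's Gaussian profile decomposition and Hermite h3 fitting are not reproduced, and the names are illustrative.

```python
import numpy as np

def super_profile(velocity, profiles):
    """Co-add velocity profiles after shifting each to its intensity-weighted
    centroid, approximating the stacking step of super-profile analysis."""
    stacked = np.zeros_like(velocity, dtype=float)
    v_rel = velocity - velocity.mean()            # common grid centred on zero
    for p in profiles:
        centroid = (velocity * p).sum() / p.sum() # flux-weighted mean velocity
        # resample the shifted profile onto the centred grid
        stacked += np.interp(v_rel, velocity - centroid, p)
    return stacked
```

The stacked profile's core-to-wing shape (e.g. from a double Gaussian fit) is what the paper then relates to the galaxy's velocity dispersion and star formation rate.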

A Hippocampus Segmentation in Brain MR Images using Level-Set Method (레벨 셋 방법을 이용한 뇌 MR 영상에서 해마영역 분할)

  • Lee, Young-Seung;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.15 no.9 / pp.1075-1085 / 2012
  • In clinical research using medical images, image segmentation is one of the most important processes. In particular, hippocampal atrophy is a specific marker of the progression of Alzheimer's disease and is therefore helpful for clinical diagnosis. To measure hippocampal volume exactly, segmentation of the hippocampus is essential. However, the hippocampus in MR images has features such as relatively low contrast, a low signal-to-noise ratio, and discontinuous boundaries, which make it difficult to segment. To address this, we first selected a region of interest from the experimental image, subtracted the original image from its negative, enhanced the contrast, and applied anisotropic diffusion filtering and Gaussian filtering as preprocessing. Finally, we performed image segmentation using two level-set methods. Through a variety of validation approaches, we confirmed that the proposed method improved the speed and accuracy of segmentation. Consequently, the proposed method is suitable for segmenting regions with features similar to those of the hippocampus, and we believe it has great potential if successfully combined with other research findings.
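The anisotropic diffusion filtering used in the preprocessing can be illustrated with the classic Perona-Malik scheme, a common choice for this step in MRI pipelines; the abstract does not specify the exact variant, and `kappa` and `gamma` below are illustrative values, not the authors' settings.

```python
import numpy as np

def perona_malik(image, n_iter=10, kappa=20.0, gamma=0.1):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving strong edges, unlike a plain Gaussian filter."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbours (edge-padded)
        p = np.pad(img, 1, mode="edge")
        dn = p[:-2, 1:-1] - img
        ds = p[2:, 1:-1] - img
        de = p[1:-1, 2:] - img
        dw = p[1:-1, :-2] - img
        # conduction coefficients: near zero across strong edges
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img
```

This edge-stopping behaviour is what makes the filter attractive before level-set evolution: the low-contrast hippocampal boundary is not blurred away while interior noise is suppressed.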

New Scheme for Smoker Detection (흡연자 검출을 위한 새로운 방법)

  • Lee, Jong-seok;Lee, Hyun-jae;Lee, Dong-kyu;Oh, Seoung-jun
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.9 / pp.1120-1131 / 2016
  • In this paper, we propose a smoker recognition algorithm that detects smokers in a video sequence in order to prevent fire accidents. We use a description-based method within a hierarchical approach to recognize smoking activity; the algorithm consists of background subtraction, object detection, event search, and event judgment. Background subtraction generates slow-motion and fast-motion foreground images from the input using a Gaussian mixture model with two different learning rates. Object locations are then extracted from the slow-motion image using chain-rule-based contour detection. For each object, the face is detected using Haar-like features, and smoke is detected in the fast-motion foreground by exploiting the frequency and direction of smoke. Hand movements are detected by motion estimation. The algorithm examines these features over a certain interval and infers whether the object is a smoker. It can robustly detect a smoker among different objects while achieving real-time performance.
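The dual-learning-rate background subtraction can be sketched with a single running average per pixel maintained at two rates; the paper uses a full Gaussian mixture model per pixel, so this is a deliberate simplification with illustrative names and thresholds.

```python
import numpy as np

class DualRateBackground:
    """Running-average background model kept at two learning rates.
    The slow model reacts sluggishly, so objects stay in its foreground
    longer; the fast model quickly absorbs changes, so only rapidly
    varying content (e.g. drifting smoke) stays in its foreground."""
    def __init__(self, first_frame, slow=0.01, fast=0.2, thresh=25.0):
        self.bg_slow = first_frame.astype(float).copy()
        self.bg_fast = first_frame.astype(float).copy()
        self.slow, self.fast, self.thresh = slow, fast, thresh

    def update(self, frame):
        frame = frame.astype(float)
        # foreground masks before updating each model
        fg_slow = np.abs(frame - self.bg_slow) > self.thresh
        fg_fast = np.abs(frame - self.bg_fast) > self.thresh
        # exponential running-average updates at the two rates
        self.bg_slow += self.slow * (frame - self.bg_slow)
        self.bg_fast += self.fast * (frame - self.bg_fast)
        return fg_slow, fg_fast
```

After a static object has been present for a while, the fast model has absorbed it while the slow model still flags it, which is the separation the algorithm exploits for object versus smoke detection.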