• Title/Summary/Keyword: non Gaussian


Change detection algorithm based on amplitude statistical distribution for high resolution SAR image (통계분포에 기반한 고해상도 SAR 영상의 변화탐지 알고리즘 구현 및 적용)

  • Lee, Kiwoong;Kang, Seoli;Kim, Ahleum;Song, Kyungmin;Lee, Wookyung
    • Korean Journal of Remote Sensing, v.31 no.3, pp.227-244, 2015
  • Synthetic Aperture Radar (SAR) can provide images of wide coverage day and night and in all weather conditions. Recently, as SAR image resolution has improved to the sub-meter level, applications are expanding rapidly. In particular, there is growing interest in using the geographic information contained in high-resolution SAR images, and change detection will be one of the most important techniques for such applications. In this paper, an automatic threshold-tracking change detection algorithm applicable to high-resolution SAR images is proposed. To detect changes within a SAR image pair, a reference image is generated using the log-ratio operator and its amplitude distribution is estimated through a K-S test. Assuming the SAR image has a non-Gaussian amplitude distribution, a generalized thresholding technique is applied using Kittler and Illingworth minimum-error estimation. In addition, the MoLC parametric estimation method is adopted to improve the algorithm's performance on rough ground targets. The implemented algorithm is tested and verified on simulated SAR raw data, then applied to spaceborne high-resolution SAR images taken by COSMO-SkyMed and KOMPSAT-5, and the performances are analyzed and compared.
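
As a minimal sketch of the thresholding idea, the snippet below builds the log-ratio reference image from two co-registered amplitude images and then applies the classical Gaussian-mixture Kittler-Illingworth minimum-error criterion to its histogram. The paper's generalized non-Gaussian version (with K-S distribution testing and MoLC parameter estimation) would replace the Gaussian class densities; the histogram size and the small constant `eps` are illustrative choices, not values from the paper.

```python
import numpy as np

def log_ratio(img_before, img_after, eps=1e-6):
    """Log-ratio reference image between two co-registered SAR amplitude images."""
    return np.log((img_after + eps) / (img_before + eps))

def kittler_illingworth_threshold(x, n_bins=256):
    """Classical (Gaussian) Kittler-Illingworth minimum-error threshold.

    The paper generalizes this criterion to non-Gaussian amplitude models;
    only the standard Gaussian-mixture form is sketched here.
    """
    hist, edges = np.histogram(x.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    best_t, best_cost = centers[0], np.inf
    for t in range(1, n_bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-12 or p2 < 1e-12:
            continue
        m1 = (p[:t] * centers[:t]).sum() / p1
        m2 = (p[t:] * centers[t:]).sum() / p2
        v1 = (p[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (p[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 < 1e-12 or v2 < 1e-12:
            continue
        # Kittler-Illingworth minimum-error criterion J(t)
        cost = (1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2)))
                  - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
        if cost < best_cost:
            best_cost, best_t = cost, centers[t]
    return best_t
```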

Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection

  • Bajwa, Waheed U.;Calderbank, Robert;Jafarpour, Sina
    • Journal of Communications and Networks, v.12 no.4, pp.289-307, 2010
  • The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, which is termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
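
The one-step thresholding procedure analyzed in the paper is essentially "correlate and threshold". The toy sketch below, assuming a Gaussian design matrix with roughly unit-norm columns, recovers a small support by hard-thresholding |X^T y|; the threshold here is picked by hand for the example, whereas the paper ties it to the coherence property and the noise level.

```python
import numpy as np

def one_step_thresholding(X, y, threshold):
    """One-step thresholding (OST): estimate the support of a sparse signal
    by hard-thresholding the correlations between the columns of X and y."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # unit-norm columns
    return np.flatnonzero(np.abs(Xn.T @ y) > threshold)

# Toy example with a Gaussian design matrix (illustrative sizes and threshold).
rng = np.random.default_rng(0)
n, p, k = 512, 1024, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = 2.0 * rng.choice([-1.0, 1.0], size=k)
y = X @ beta + 0.05 * rng.standard_normal(n)

print(sorted(support.tolist()), one_step_thresholding(X, y, threshold=1.0).tolist())
```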

Fast Bayesian Inversion of Geophysical Data (지구물리 자료의 고속 베이지안 역산)

  • Oh, Seok-Hoon;Kwon, Byung-Doo;Nam, Jae-Cheol;Kee, Duk-Kee
    • Journal of the Korean Geophysical Society, v.3 no.3, pp.161-174, 2000
  • Bayesian inversion is a stable approach for inferring subsurface structure from the limited data of geophysical exploration. In the geophysical inverse process, uncertainties are inherent due to the finite and discrete nature of field data and the modeling process, so a probabilistic approach to geophysical inversion is required. The Bayesian framework provides the theoretical basis for confidence and uncertainty analysis of the inference. However, most Bayesian inversions require high-dimensional integration, so massive computations such as Monte Carlo integration are needed to solve them. Although this approach is well suited to geophysical problems, which are highly non-linear, promptness and convenience are also demanded in field practice. In this study, a fast Bayesian inversion scheme is developed by applying a Gaussian approximation to the observed data and the a priori information, and it is applied to a model problem with electric well-logging and dipole-dipole resistivity data. Each covariance matrix is derived by a geostatistical method, and an optimization technique yields the maximum a posteriori solution. In particular, the a priori information is evaluated by a cross-validation technique, and an uncertainty analysis based on simulation of the a posteriori covariance matrix is performed to interpret the resistivity structure.
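
For reference, the Gaussian-approximation step reduces to the standard linear-Gaussian MAP formula. The sketch below assumes a linear (or linearized) forward operator G and given data and prior covariances; the geostatistical construction of those covariances and the cross-validation of the prior described in the paper are not reproduced.

```python
import numpy as np

def gaussian_map_inversion(G, d, m_prior, C_d, C_m):
    """MAP estimate and posterior covariance for a linear(ized) forward model
    d = G m + noise, with Gaussian data covariance C_d and Gaussian prior C_m.

    This is only the generic Gaussian-approximation step; the paper builds the
    covariances geostatistically and selects the prior by cross-validation.
    """
    Cd_inv = np.linalg.inv(C_d)
    Cm_inv = np.linalg.inv(C_m)
    # Posterior covariance (also used for the uncertainty analysis).
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    m_map = C_post @ (G.T @ Cd_inv @ d + Cm_inv @ m_prior)
    return m_map, C_post
```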


Software development for the visualization of brain fiber tract by using 24-bit color coding in diffusion tensor image

  • Oh, Jung-Su;Song, In-Chan;Ik hwan Cho;Kim, Jong-Hyo;Chang, Kee-Hyun;Park, Kwang-Suk
    • Proceedings of the KSMRM Conference, 2002.11a, pp.133-133, 2002
  • Purpose: The purpose of this paper is to implement software for visualizing brain fiber tracts using a 24-bit color coding scheme and to test its feasibility. Materials and Methods: MR imaging was performed on a GE 1.5 T Signa scanner. For diffusion tensor imaging, we used a single-shot spin-echo EPI sequence with 7 non-collinear pulsed-field gradient directions, (x, y, z): (1,1,0), (-1,1,0), (1,0,1), (-1,0,1), (0,1,1), (0,1,-1), and without a diffusion gradient. The b-factor was 500 sec/mm². Acquisition parameters were as follows: TR/TE = 10000 ms/99 ms, FOV = 240 mm, matrix = 128×128, slice thickness/gap = 6 mm/0 mm, total number of slices = 30. Subjects consisted of 10 normal young volunteers (age 21~26 yrs, 5 men, 5 women). All DTI images were smoothed with a Gaussian kernel with an FWHM of 2 pixels. The color coding scheme for visualizing directional information was as follows. The HSV (Hue, Saturation, Value) color system is appropriate for assigning an RGB (Red, Green, Blue) value to every direction because of its volumetric directional expression. H, S, and V are assigned according to (r, θ, Φ) in spherical coordinates, and the resulting HSV values are transformed into the RGB color system by the general HSV-to-RGB conversion formula. Symmetry schemes: it is natural to code antipodal directions with the same color (antipodal symmetry), so even with no other symmetry scheme, antipodal symmetry must be included. With no symmetry scheme we can assign a different color to every orientation (H = Φ, S = 2θ/π, V = λw, where λw is the anisotropy), but this may assign very discontinuous colors even between adjacent voxels. On the other hand, the full-symmetry or absolute-value scheme includes symmetry for 180° rotation about the xy-plane of the color coordinate (rotational symmetry) and for both hemispheres (mirror symmetry). In the absolute-value scheme, the RGB values are R = λw|Vx|, G = λw|Vy|, B = λw|Vz|, where (Vx, Vy, Vz) is the eigenvector corresponding to the largest eigenvalue of the diffusion tensor. By applying the full-symmetry or absolute-value scheme, we obtain more continuous color coding at the expense of coding symmetric directions with the same color. For better visualization of fiber tract directions, gamma and brightness corrections were applied. All of these implementations were done on the IDL 5.4 platform.
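
The absolute-value scheme quoted above maps the principal eigenvector direction directly to RGB. A minimal numpy sketch is given below; it uses fractional anisotropy as the anisotropy weight λw, which is one common choice but an assumption on our part, and it omits the HSV route and the gamma and brightness corrections.

```python
import numpy as np

def dti_absolute_value_rgb(tensors):
    """Absolute-value color coding of the principal diffusion direction.

    tensors: array of shape (..., 3, 3) of diffusion tensors.
    Returns RGB of shape (..., 3) with R = a|Vx|, G = a|Vy|, B = a|Vz|,
    where V is the principal eigenvector and a is fractional anisotropy
    (used here as the anisotropy weight; an illustrative choice).
    """
    evals, evecs = np.linalg.eigh(tensors)      # eigenvalues in ascending order
    v1 = evecs[..., :, -1]                      # principal eigenvector
    l = np.clip(evals, 0, None)
    md = l.mean(axis=-1, keepdims=True)
    fa = np.sqrt(1.5 * ((l - md) ** 2).sum(-1) /
                 np.maximum((l ** 2).sum(-1), 1e-12))
    rgb = np.abs(v1) * fa[..., None]
    return np.clip(rgb, 0.0, 1.0)
```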


Investigation of light stimulated mouse brain activation in high magnetic field fMRI using image segmentation methods

  • Kim, Wook;Woo, Sang-Keun;Kang, Joo Hyun;Lim, Sang Moo
    • Journal of the Korea Society of Computer and Information, v.21 no.12, pp.11-18, 2016
  • Magnetic resonance imaging (MRI) is widely used in brain research and medical imaging. In particular, functional magnetic resonance imaging (fMRI), a non-invasive technique for imaging brain activation, is used in brain studies. In this study, we investigate brain activation evoked by LED light stimulation. To investigate brain activation in a small experimental animal, we used a high-field 9.4 T MRI system. The animal was a Balb/c mouse, and the fMRI method was echo planar imaging (EPI). EPI is faster than other MRI methods, but for the same reason EPI data have low contrast, which makes image pre-processing difficult and inaccurate. In this study we planned a protocol known in the fMRI research field as a block design. The block design had 8 LED light stimulation sessions and 8 rest sessions. Each block consisted of 6 EPI images, and acquiring one EPI image took 16 seconds. During a light session, LED light stimulation was applied for 1 minute 36 seconds; during a rest session, the light remained off for 1 minute 36 seconds. The sessions alternated over the whole EPI scan, so the total EPI scan time was almost 26 minutes. After acquiring the EPI data, we analyzed the images using the statistical parametric mapping (SPM) software and performed pre-processing such as realignment, co-registration, normalization, and smoothing of the EPI data. The pre-processing of fMRI data requires segmentation in this software, which offers 3 different methods: Gaussian nonparametric, warped modulated, and tissue probability map. In this study we applied these 3 methods and compared how they change the fMRI analysis results. The results show that LED light stimulation activated the superior colliculus region of the mouse brain, and the highest activation value was obtained with the tissue probability map segmentation method. This study may help to improve brain activation studies using EPI and SPM analysis.
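
The block design described above is an alternating on/off boxcar. The short sketch below reconstructs that timing from the numbers in the abstract (8 stimulation and 8 rest blocks, 6 EPI images per block, 16 s per image); whether a light block or a rest block comes first is not stated, so the ordering here is an assumption.

```python
import numpy as np

# Boxcar regressor for the block design described above:
# 8 stimulation blocks alternating with 8 rest blocks,
# 6 EPI images per block, 16 s per image (96 s per block, ~26 min in total).
TR_SECONDS = 16
VOLUMES_PER_BLOCK = 6
N_BLOCK_PAIRS = 8

block_on = np.ones(VOLUMES_PER_BLOCK)    # assumed order: light first, then rest
block_off = np.zeros(VOLUMES_PER_BLOCK)
design = np.tile(np.concatenate([block_on, block_off]), N_BLOCK_PAIRS)

n_volumes = design.size                      # 96 images
total_minutes = n_volumes * TR_SECONDS / 60  # 25.6 minutes
print(n_volumes, round(total_minutes, 1))
```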

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.380-386, 2002
  • Conventional post-processing with the confidence measure (CM) proposed by Rahim calculates each phone's CM using the likelihood between the phoneme model and an anti-model, and the word's CM is then obtained by averaging the phone-level CMs [1]. In this conventional method, the CMs of some specific keywords are very low and those keywords are usually rejected. The reason is that the statistics of phone-level CMs are not consistent; in other words, phone-level CMs have a different probability density function (pdf) for each phone, especially each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone into the same pdf under the assumption that the CM pdfs are Gaussian. To evaluate our method we use a common keyword spotting system, in which context-dependent HMMs model keyword utterances and context-independent HMMs model non-keyword utterances. The experimental results show that the proposed NCM reduced the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms/keyword/hour) when the MDR is about 8%, achieving a 25% improvement in FAR.
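
A minimal sketch of the normalization idea: assuming each tri-phone's CM is Gaussian, map it onto a common standard-normal scale with that tri-phone's own mean and standard deviation (estimated on development data) before averaging into a word-level score. The tri-phone labels and statistics below are made up for illustration.

```python
import numpy as np

def normalize_confidences(phone_cms, phone_ids, stats):
    """Per-tri-phone normalization of confidence measures.

    Under the Gaussian assumption, each phone-level CM is z-scored with its
    own tri-phone's mean/std (`stats`: phone_id -> (mean, std)).
    """
    return np.array([(cm - stats[p][0]) / stats[p][1]
                     for cm, p in zip(phone_cms, phone_ids)])

def word_confidence(phone_cms, phone_ids, stats):
    """Word-level confidence: average of the normalized phone-level CMs."""
    return normalize_confidences(phone_cms, phone_ids, stats).mean()

# Hypothetical example with made-up per-tri-phone statistics.
stats = {"k-a+m": (-1.2, 0.5), "a-m+i": (-0.8, 0.4)}
print(word_confidence([-1.0, -0.9], ["k-a+m", "a-m+i"], stats))
```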

Measurement of two-dimensional vibration and calibration using the low-cost machine vision camera (저가의 머신 비전 카메라를 이용한 2차원 진동의 측정 및 교정)

  • Kim, Seo Woo;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea, v.37 no.2, pp.99-109, 2018
  • The precision of vibration sensors, whether contact or non-contact types, is usually satisfactory for practical measurement applications, but such a sensor is confined to the measurement of one point or one direction. Although the precision and frequency span of a low-cost camera are inferior to these sensors, it has merits in cost and in its capability to measure a large vibrating area simultaneously. Furthermore, a camera can measure multiple degrees of freedom of a vibrating object at once. In this study, the calibration method and the dynamic characteristics of a low-cost machine vision camera used as a sensor are studied, with the two-dimensional vibration of a cantilever beam as a demonstrating example. The planar image of the camera shot reveals two rectilinear motions and one rotational motion. The rectilinear vibration of a single point is first measured using the camera, and the camera is experimentally calibrated by calculating the error with respect to an LDV (Laser Doppler Vibrometer) measurement. Then, by measuring the motion of multiple points at once, the rotational vibration and the whole vibration motion of the cantilever beam are measured. The whole vibration motion of the cantilever beam is analyzed in both the time and frequency domains.
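
A minimal sketch of the single-point calibration step: fit a least-squares scale factor between the pixel displacement tracked by the camera and the simultaneous LDV displacement, and report the residual RMS error. Time alignment and identical sampling instants are assumed; the paper's full procedure (multi-point tracking, rotational motion, frequency-domain analysis) is not reproduced.

```python
import numpy as np

def pixel_to_mm_scale(camera_pixels, ldv_mm):
    """Least-squares scale factor (mm per pixel) between the displacement of a
    single point tracked by the camera and the same motion measured by an LDV.

    A minimal calibration sketch assuming both signals are time-aligned and
    sampled at the same instants.
    """
    camera_pixels = np.asarray(camera_pixels, dtype=float)
    ldv_mm = np.asarray(ldv_mm, dtype=float)
    scale = (camera_pixels @ ldv_mm) / (camera_pixels @ camera_pixels)
    residual = ldv_mm - scale * camera_pixels
    rms_error = np.sqrt(np.mean(residual ** 2))
    return scale, rms_error
```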

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers, v.60 no.3, pp.639-647, 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm uses K-means clustering for efficient processing of the data, and the optimization of the model is carried out using PSO. As the connection weights of the RBF neural network, four types of polynomials can be used: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected, and the PSO-based RBF neural network results in a structurally optimized network with a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure applied at each node of the RBF neural network leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is evaluated on nonlinear process data (2-dimensional synthetic data and Mackey-Glass time-series data) and machine learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given nonlinear datasets, as well as the efficient construction and evaluation of the dynamic network model, the data are partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and superior predictive capability compared with other intelligent models presented previously.
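
A minimal sketch of the data-centroid idea, under the assumption that scikit-learn is available: K-means picks the Gaussian centers and ordinary least squares fits the output weights. The paper's polynomial connection weights and the PSO tuning of the spread, the number of input variables, and the input subset are left out.

```python
import numpy as np
from sklearn.cluster import KMeans

class DataCentroidRBF:
    """Minimal RBF network sketch: K-means chooses the Gaussian centers and the
    output weights are fit by least squares. The paper additionally uses
    polynomial connection weights and PSO-based structural optimization,
    which are not reproduced here."""

    def __init__(self, n_centers=5, spread=1.0):
        self.n_centers = n_centers
        self.spread = spread

    def _phi(self, X):
        # Gaussian activations around the K-means centers.
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.spread ** 2))

    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
        Phi = np.c_[self._phi(X), np.ones(len(X))]      # bias column
        self.w_, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return np.c_[self._phi(X), np.ones(len(X))] @ self.w_
```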

Dynamics of the River Plume (하천수 플룸 퍼짐의 동력학적 연구)

  • Yu, Hong-Sun;Lee, Jun;Shin, Jang-Ryong
    • Journal of Korean Society of Coastal and Ocean Engineers, v.6 no.4, pp.413-420, 1994
  • The dynamics of a river plume is a very complicated non-linear problem with a free boundary changing in time and space, and mixing with the ambient water through the boundary makes the problem more complicated. In this paper we reduce the 3-dimensional problem to a 1-dimensional one using the integral analysis method: the basic equations are integrated over the lateral and vertical variations. For these integrations we adopt the well-established assumption that the flow-axis component of the plume velocity and the density difference between the plume and the ambient water have Gaussian distributions in the directions perpendicular to the flow axis of the plume. We also use the result of our previous study on the lateral spreading velocity of the plume, derived under the same assumption, and entrainment is included as a mixing process. The resulting 1-dimensional equations are solved by the Runge-Kutta numerical method. Consequently, a comparatively easy method of numerical analysis is presented for the 3-dimensional river plume. The method can also be used to analyze the thermal plume of cooling water from power plants.
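
For reference, the marching scheme applied to the reduced 1-D equations is the classical fourth-order Runge-Kutta method, sketched generically below; the actual right-hand side (the laterally and vertically integrated plume equations with entrainment) is the paper's and is left as a user-supplied function.

```python
import numpy as np

def rk4_integrate(f, y0, x0, x_end, dx):
    """Classical fourth-order Runge-Kutta marching along the flow axis.

    f(x, y) must return dy/dx as an ndarray for the state vector y
    (e.g. plume width, centerline velocity, density deficit); the actual
    right-hand side is the paper's reduced equation set, not given here.
    """
    x, y = x0, np.asarray(y0, dtype=float)
    xs, ys = [x], [y]
    while x < x_end - 1e-12:
        k1 = f(x, y)
        k2 = f(x + dx / 2, y + dx * k1 / 2)
        k3 = f(x + dx / 2, y + dx * k2 / 2)
        k4 = f(x + dx, y + dx * k3)
        y = y + dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += dx
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```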


A Novel Method for Automated Honeycomb Segmentation in HRCT Using Pathology-specific Morphological Analysis (병리특이적 형태분석 기법을 이용한 HRCT 영상에서의 새로운 봉와양폐 자동 분할 방법)

  • Kim, Young Jae;Kim, Tae Yun;Lee, Seung Hyun;Kim, Kwang Gi;Kim, Jong Hyo
    • KIPS Transactions on Software and Data Engineering, v.1 no.2, pp.109-114, 2012
  • Honeycombs are dense structures in which small cysts, generally about 2~10 mm in diameter, are surrounded by walls of fibrosis. When honeycombing is found in a patient, the incidence of acute exacerbation is generally very high, so the observation and quantitative measurement of honeycombing are considered a significant marker for clinical diagnosis. From this point of view, we propose an automatic segmentation method using morphological image processing and an assessment of the degree of clustering. First, image noise is removed by Gaussian filtering and a morphological dilation method is applied to segment the lung regions. Second, honeycomb cyst candidates are detected through 8-neighborhood pixel exploration, and non-cyst regions are removed using region growing and a wall-pattern test. Finally, the honeycomb regions are segmented by extracting dense regions consisting of two or more cysts using cluster analysis. The proposed method was applied to 80 high-resolution computed tomography (HRCT) images and achieved a sensitivity of 89.4% and a PPV (positive predictive value) of 72.2%.
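
A rough sketch of the early stages of such a pipeline using scipy.ndimage: Gaussian denoising, an air-like HU threshold to pick cyst candidates, morphological cleanup, and size filtering of connected components. The HU threshold and size limits are illustrative assumptions, and the paper's 8-neighborhood exploration, region growing, wall-pattern test, and cluster-density analysis are not reproduced.

```python
import numpy as np
from scipy import ndimage

def candidate_cyst_regions(hrct_slice, air_threshold=-950, min_size=5, max_size=200):
    """Candidate cyst mask from one HRCT slice (values assumed to be in HU).

    Sketch only: Gaussian denoising, a fixed air-like threshold, morphological
    closing, and connected-component filtering by size; the threshold and size
    limits are illustrative, not the paper's values.
    """
    smoothed = ndimage.gaussian_filter(hrct_slice.astype(float), sigma=1.0)
    air_mask = smoothed < air_threshold
    closed = ndimage.binary_closing(air_mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return np.isin(labels, keep)
```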