• Title/Summary/Keyword: Gaussian distribution model

Search Results: 352 (processing time: 0.022 seconds)

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.60 no.3
    • /
    • pp.639-647
    • /
    • 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms are the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm uses K-means clustering for efficient processing of the data, and the model is optimized with PSO. As the connection weights of the RBF neural network, four types of polynomials can be used: simplified, linear, quadratic, and modified quadratic. The center values of the Gaussian activation functions are selected by K-means clustering, and the PSO-based RBF neural network yields a structurally optimized network with a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node of the network, selects preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function). To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is tested on nonlinear process data (two-dimensional synthetic data and Mackey-Glass time-series data) and machine-learning datasets (NOx emission data from a gas-turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of these nonlinear datasets, and for the efficient construction and evaluation of the dynamic network model, the data are partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and better predictive capability than previously presented intelligent models.
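The construction described above, minus the PSO search, can be sketched in a few lines: K-means picks the Gaussian centroids, and a linear-polynomial output layer is fitted by least squares (a stand-in for the paper's PSO tuning; the toy data, widths, and cluster count below are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D nonlinear regression data (illustrative stand-in for the paper's datasets)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: the resulting cluster centers serve as RBF centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian activation for every (sample, centroid) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = kmeans(X, k=8)
Phi = rbf_design(X, centers, width=0.8)

# "Linear" polynomial connection weights: each hidden unit contributes
# g_j(x) * (a_j + b_j * x) instead of a single scalar weight.
A = np.hstack([Phi, Phi * X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Swapping the `Phi * X` columns for quadratic terms gives the quadratic and modified-quadratic variants; in the paper, PSO additionally tunes the input subset and the distribution constant of each node.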

A Feasibility study on the Simplified Two Source Model for Relative Electron Output Factor of Irregular Block Shape (단순화 이선원 모델을 이용한 전자선 선량율 계산 알고리듬에 관한 예비적 연구)

  • 고영은;이병용;조병철;안승도;김종훈;이상욱;최은경
    • Progress in Medical Physics
    • /
    • v.13 no.1
    • /
    • pp.21-26
    • /
    • 2002
  • A practical algorithm that calculates the relative output factor (ROF) for irregularly shaped electron fields has been developed, and its accuracy has been evaluated. The algorithm adopts a two-source model, which assumes that the electron dose can be expressed as the sum of a primary source component and a component scattered from the shielding block. The original two-source model was modified to make the algorithm simpler and to reduce the number of parameters needed in the calculation, while keeping the calculation error within the clinically tolerable range. The primary source is assumed to have a Gaussian distribution, while the scattered component follows the inverse-square law; the depth and angular dependencies of both components are ignored. The ROF can then be calculated with three parameters: the effective source distance, the variance of the primary source, and the scattering power of the block. The coefficients are obtained from measurements with square blocks, and the algorithm is validated against the rectangular and irregular fields used in the clinic. The results showed less than 1.0% difference between calculation and measurement in most cases, and no case differed by more than 2.1%. By improving the algorithm in the aperture region, where the error is largest, the method could be used in routine clinical practice, since the model parameters can be acquired with a minimum of measurements (5-6 per cone) and the results remain within the clinically acceptable range.
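A minimal numerical sketch of the simplified two-source idea, assuming a separable Gaussian primary source transmitted by a square aperture and a block-scatter term that grows with the blocked fraction and falls off with the inverse square of the effective source distance (all parameter values are hypothetical, not the paper's fitted coefficients):

```python
import math

def primary_fraction(half_width, sigma):
    """Fraction of a 2-D Gaussian primary source transmitted by a square
    aperture of the given half-width (separable in x and y)."""
    p = math.erf(half_width / (math.sqrt(2.0) * sigma))
    return p * p

def relative_output_factor(half_width, sigma, ssd_eff, scatter_power,
                           ref_half_width):
    """Simplified two-source ROF: Gaussian primary plus a block-scatter
    term following the inverse-square law, normalized to a reference field."""
    def output(hw):
        primary = primary_fraction(hw, sigma)
        scatter = scatter_power * (1.0 - primary) / ssd_eff ** 2
        return primary + scatter
    return output(half_width) / output(ref_half_width)
```

By construction the ROF of the reference field is exactly 1, and narrowing the aperture reduces the output, which is the qualitative behavior the model is fitted to reproduce.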


Algorithms for Indexing and Integrating MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 인덱싱 및 결합 알고리즘)

  • Song, Chi-Ill;Nang, Jong-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.1-10
    • /
    • 2007
  • This paper proposes a new indexing mechanism for MPEG-7 visual descriptors, especially the Dominant Color and Contour Shape descriptors, that guarantees an efficient similarity search over multimedia databases whose visual metadata are represented in MPEG-7. Since the similarity metric of the Dominant Color descriptor is based on a Gaussian mixture model, the descriptor itself can be transformed into a color histogram in which the distribution of color values follows the Gaussian distribution; the transformed descriptor (i.e., the color histogram) is then indexed by the proposed mechanism. For indexing the Contour Shape descriptor, a two-pass algorithm is used. In the first pass, since the similarity of two shapes can be roughly measured with the global parameters of the descriptor, such as eccentricity and circularity, clearly dissimilar image objects are excluded using these global parameters. Then, the similarities between the query and the remaining image objects are measured with the peak parameters of the Contour Shape descriptor. This two-pass approach reduces the computation needed to measure shape similarity. The paper also proposes two schemes for integrating visual descriptors for efficient retrieval: one uses the weight of a descriptor to determine the number of similar image objects selected with respect to that descriptor, and the other uses the weight as the degree of importance of the descriptor in the global similarity measure. Experimental results show that the proposed indexing and integration schemes yield a remarkable speed-up over exact similarity search, with some loss of accuracy due to the approximate computation in indexing. The proposed schemes can be used to build MPEG-7 multimedia databases that guarantee efficient retrieval.
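The key indexing step for the Dominant Color descriptor, expanding a Gaussian mixture into a fixed-length histogram that can be compared bin-by-bin, can be sketched in one dimension (the actual descriptor lives in a three-dimensional color space; all values below are illustrative):

```python
import numpy as np

def dominant_color_to_histogram(centers, variances, weights, n_bins=32):
    """Expand a Dominant-Color-style Gaussian mixture (1-D sketch) into a
    normalized fixed-length histogram suitable for bin-wise comparison."""
    bins = np.linspace(0, 255, n_bins)
    hist = np.zeros(n_bins)
    for c, v, w in zip(centers, variances, weights):
        hist += w * np.exp(-((bins - c) ** 2) / (2.0 * v))
    return hist / hist.sum()

# Two hypothetical descriptors: (dominant colors, variances, weights)
h1 = dominant_color_to_histogram([40.0, 200.0], [100.0, 400.0], [0.7, 0.3])
h2 = dominant_color_to_histogram([45.0, 190.0], [100.0, 400.0], [0.6, 0.4])

# A simple L1 histogram distance now stands in for the mixture-based metric
dist = np.abs(h1 - h2).sum()
```

Once every descriptor is a fixed-length vector, any standard multidimensional index can be built over the histograms, which is what enables the approximate-but-fast search the paper reports.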

Error Analysis of Waterline-based DEM in Tidal Flats and Probabilistic Flood Vulnerability Assessment using Geostatistical Simulation (지구통계학적 시뮬레이션을 이용한 수륙경계선 기반 간석지 DEM의 오차 분석 및 확률론적 침수 취약성 추정)

  • KIM, Yeseul;PARK, No-Wook;JANG, Dong-Ho;YOO, Hee Young
    • Journal of The Geomorphological Association of Korea
    • /
    • v.20 no.4
    • /
    • pp.85-99
    • /
    • 2013
  • The objective of this paper is to analyze the spatial distribution of errors in a DEM generated from waterlines extracted from multi-temporal remote sensing data, and to assess flood vulnerability. Unlike conventional research, in which only global statistics of the errors are computed, this paper quantitatively analyzes the spatial distribution of errors from a probabilistic viewpoint using geostatistical simulation. The initial DEM of the Baramarae tidal flats was generated from corrected tidal-level values and waterlines extracted from multi-temporal Landsat data acquired in the 2010s. Compared with ground-surveyed height data, the waterline-based DEM underestimated the actual heights overall, and local variations of the errors were observed. By applying sequential Gaussian simulation based on the spatial autocorrelation of the DEM errors, multiple alternative error distributions were generated. After correcting the initial DEM with the simulated error distributions, flood-vulnerability probabilities were estimated under the IPCC SRES sea-level-rise scenarios. The geostatistical error-analysis methodology can model both the uncertainty of the error assessment and error propagation in a probabilistic framework. It is therefore expected to be effective for the probabilistic assessment of errors in various thematic maps as well as for the error assessment of waterline-based DEMs in tidal flats.
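The probabilistic error-correction loop can be sketched as follows. For brevity, this sketch draws spatially correlated error realizations directly from a Cholesky factor of an assumed exponential covariance rather than running a true sequential Gaussian simulation, and the transect, covariance parameters, and sea level are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D transect of a waterline-based DEM (heights in m)
x = np.linspace(0.0, 1000.0, 50)        # along-transect coordinate (m)
dem = 1.0 + 0.5 * np.sin(x / 200.0)     # initial (biased) DEM heights

# Spatially correlated error model: zero-mean Gaussian field with an
# exponential covariance (sill and range are assumed values)
sill, corr_range = 0.04, 150.0
cov = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))

n_real, sea_level = 200, 1.2            # sea level under a rise scenario
flooded = np.zeros(len(x))
for _ in range(n_real):
    error = L @ rng.normal(size=len(x)) # one correlated error realization
    corrected = dem + error             # error-corrected DEM
    flooded += (corrected < sea_level)  # count inundated locations
flood_prob = flooded / n_real           # per-location flood probability
```

The per-location frequency across realizations is what turns a single deterministic inundation map into the probabilistic vulnerability estimate described in the abstract.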

Comparative Evaluation of Behavior Analysis of Rectangular Jet and Two-dimensional Jet (사각형제트와 2차원제트의 거동해석의 비교 평가)

  • Kwon, Seok Jae;Cho, Hong Yeon;Seo, Il Won
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.6B
    • /
    • pp.641-649
    • /
    • 2006
  • The behavior of a three-dimensional pure rectangular water jet with an aspect ratio of 10 was investigated experimentally, based on mean velocity fields obtained by PIV. A saddle-back distribution was observed in the lateral velocity profile along the major axis. The theoretical centerline velocity equation, derived from the point-source concept using the spreading rate of an axisymmetric jet, agreed well with the measured centerline velocity and delineated the potential core region, the two-dimensional region, and the axisymmetric region. The two-dimensional region identified by the theoretical centerline velocity decay for an aspect ratio of 10 was observed to be smaller than the transition region. Applying a two-dimensional model to a rectangular jet with a low aspect ratio, or to wastewater discharged from a multiport diffuser into deep ocean water, may therefore produce significant error in the transition and axisymmetric regions downstream of the two-dimensional region. Within the two-dimensional region, the Gaussian constant tended to be conserved, and the spreading rate decreased slightly at the end of the region. The normalized turbulent intensity along the jet centerline increased abruptly at first and was relatively higher at higher Reynolds numbers.
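The three-region centerline behavior described above can be sketched as a piecewise decay curve: constant velocity in the potential core, a ~x^(-1/2) decay in the two-dimensional region, and a ~x^(-1) decay in the axisymmetric region (the region lengths below are assumed, not the measured values):

```python
import numpy as np

def centerline_decay(x, core_len, two_d_len):
    """Piecewise sketch of jet centerline velocity decay u_c/u_0:
    constant in the potential core, ~x^(-1/2) in the two-dimensional
    region, ~x^(-1) in the axisymmetric region; continuous at both
    region boundaries."""
    u = np.ones_like(x, dtype=float)
    in_2d = (x > core_len) & (x <= two_d_len)
    in_ax = x > two_d_len
    u[in_2d] = (core_len / x[in_2d]) ** 0.5
    u_end = (core_len / two_d_len) ** 0.5   # velocity at end of 2-D region
    u[in_ax] = u_end * (two_d_len / x[in_ax])
    return u

# Hypothetical downstream stations and region lengths (same length unit)
x = np.array([1.0, 5.0, 20.0, 100.0, 400.0])
u = centerline_decay(x, core_len=5.0, two_d_len=100.0)
```

Comparing measured centerline velocities against such a curve is how the division into potential-core, two-dimensional, and axisymmetric regions is read off.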

Common Spectrum Assignment for low power Devices for Wireless Audio Microphone (WPAN용 디지털 음향기기 및 통신기기간 스펙트럼 상호운용을 위한 채널 할당기술에 관한 연구)

  • Kim, Seong-Kweon;Cha, Jae-Sang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.5
    • /
    • pp.724-729
    • /
    • 2008
  • This paper presents the calculation of the required common frequency bandwidth, applying queueing theory to maximize the efficiency of the frequency resource for WPAN (Wireless Personal Area Network)-based digital acoustic and communication devices. It is assumed that an LBT device (ZigBee) and FH devices (DCP, RFID, and Bluetooth) coexist in the common frequency band. Frequency hopping (FH) and listen-before-talk (LBT) have been used for interference avoidance in short-range devices (SRDs). An LBT system transmits data after searching for a usable frequency band in the radio environment, whereas an FH system transmits without searching. Queueing theory is employed to model the FH and LBT systems, and the throughput of each channel is analyzed by statistically processing the usage frequency and the service-time interval of each channel. When the common frequency band is shared by SRDs transmitting at 250 mW, about 35 channels are required for a throughput of 84%, determined under a Gaussian input distribution chosen to ensure reliable communication. The required common frequency bandwidth is then estimated by multiplying the number of channels by the bandwidth per channel. This methodology is useful for the efficient use of frequency bandwidth.
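As an illustration of sizing a channel pool with queueing theory, the sketch below uses the classical Erlang-B blocking recursion; the paper's actual coexistence model for FH and LBT devices is more specific, so this is a generic stand-in with hypothetical traffic values:

```python
def erlang_b(traffic, channels):
    """Erlang-B blocking probability for offered traffic (in erlangs),
    computed with the stable recursion
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

def channels_for_throughput(traffic, target):
    """Smallest channel count whose carried/offered ratio (1 - blocking)
    meets the target throughput."""
    n = 1
    while 1.0 - erlang_b(traffic, n) < target:
        n += 1
    return n

# Hypothetical offered load; the total band is then this channel count
# multiplied by the per-channel bandwidth, as in the abstract.
n_channels = channels_for_throughput(30.0, 0.84)
```

The final multiplication by the per-channel bandwidth is exactly the last step the abstract describes; only the per-channel load and target here are invented.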

New Illumination compensation algorithm improving a multi-view video coding performance by advancing its temporal and inter-view correlation (다시점 비디오의 시공간적 중복도를 높여 부호화 성능을 향상시키는 새로운 조명 불일치 보상 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.15 no.6
    • /
    • pp.768-782
    • /
    • 2010
  • Because of the different shooting positions of multi-view cameras and imperfect camera calibration, illumination mismatches can occur in multi-view video. These mismatches can degrade the performance of multi-view video coding (MVC). A histogram matching algorithm can be applied in a prefiltering step to compensate for these inconsistencies: once all frames of a multi-view sequence are adjusted to a predefined reference through histogram matching, the coding efficiency of MVC improves. However, the histogram distribution can differ not only between neighboring views but also between temporally successive frames, owing to movements of the camera and of objects, especially people. A histogram matching algorithm that references all frames of the chosen view is therefore not appropriate for compensating the illumination differences of such sequences. We thus propose two new algorithms: an image classification algorithm that applies two criteria to improve the correlation between inter-view frames, and a histogram matching algorithm that references and matches a group of pictures (GOP) as a unit to improve the correlation between successive frames. Experimental results show that the compression ratio of the proposed algorithm is improved compared with conventional algorithms.
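Per-frame histogram matching, the building block that the proposed GOP-unit scheme generalizes, can be sketched with a CDF-based lookup table (8-bit, single channel; the test images below are synthetic):

```python
import numpy as np

def match_histogram(src, ref):
    """Remap src pixel values so their histogram follows ref's (8-bit).
    For each source level, the LUT picks the reference level whose
    cumulative distribution value is closest from above."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(float)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

rng = np.random.default_rng(0)
dark = rng.integers(0, 128, size=(64, 64), dtype=np.uint8)     # "dim" view
bright = rng.integers(64, 256, size=(64, 64), dtype=np.uint8)  # reference view
matched = match_histogram(dark, bright)
```

In the paper's scheme, the reference statistics would be accumulated over a whole GOP rather than a single frame, so that temporally successive frames share one mapping.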

Coupled Finite Element Analysis of Partially Saturated Soil Slope Stability (유한요소 연계해석을 이용한 불포화 토사사면 안전성 평가)

  • Kim, Jae-Hong;Lim, Jae-Seong;Park, Seong-Wan
    • Journal of the Korean Geotechnical Society
    • /
    • v.30 no.4
    • /
    • pp.35-45
    • /
    • 2014
  • Limit equilibrium methods of slope stability analysis have been widely adopted, mainly for their simplicity and applicability. However, these conventional methods may not give reliable results for complex geological conditions such as nonhomogeneous and anisotropic soils, and they account for neither the stress history of the slope nor the initial state of stress produced by, for example, excavation or fill placement. In contrast, finite element analysis of deformation and stress distribution can handle complex loading sequences and the growth of the inelastic zone over time. This paper proposes a technique for determining the critical slip surface and calculating the factor of safety for shallow failure of a partially saturated soil slope. Based on the effective stress field from the finite element analysis, stresses are evaluated at every Gaussian point of the elements. The search strategy for a noncircular critical slip surface passing through weak points is well suited to rainfall-induced shallow slope failure. The change of unit weight caused by seepage forces affects the horizontal and vertical displacements of the slope. The Drucker-Prager failure criterion was adopted as the stress-strain relation to compute the coupled hydraulic and mechanical behavior of the partially saturated soil slope.
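The per-Gauss-point yield check implied above can be sketched with the Drucker-Prager yield function F = alpha*I1 + sqrt(J2) - k evaluated on the effective stress tensor; the material constants, stress states, and compression-negative sign convention below are all assumed for illustration:

```python
import numpy as np

def drucker_prager(sigma, alpha, k):
    """Drucker-Prager yield function F = alpha*I1 + sqrt(J2) - k for a
    3x3 effective stress tensor (compression negative); F < 0 means the
    Gauss point is inside the yield surface, F >= 0 means it yields."""
    i1 = np.trace(sigma)                  # first stress invariant
    s = sigma - i1 / 3.0 * np.eye(3)      # deviatoric stress
    j2 = 0.5 * np.tensordot(s, s)         # second deviatoric invariant
    return alpha * i1 + np.sqrt(j2) - k

# Hypothetical material constants (kPa scale) and two Gauss-point states
alpha, k = 0.2, 50.0
elastic = -np.diag([30.0, 30.0, 30.0])     # near-hydrostatic compression
sheared = np.diag([-10.0, -10.0, -300.0])  # strong deviatoric loading

f_elastic = drucker_prager(elastic, alpha, k)
f_sheared = drucker_prager(sheared, alpha, k)
```

Sweeping such a check over every Gauss point is what delimits the inelastic zone through which the noncircular critical slip surface is then searched.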

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.4
    • /
    • pp.303-313
    • /
    • 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters, including the Larmor frequency, the object radius, and the SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these results. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the SNR of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a target tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filter size must be taken into account in the reconstruction process. The simulation results were also verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
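The reported proportionalities can be wrapped into a one-line parametric model; the calibration constant `c` (and taking each dependence as linear, as the wording "proportional to" suggests) is an assumption beyond what the abstract states:

```python
def cnr_model(snr_b1, larmor_freq, conductivity, n_avg_pixels, c=1.0):
    """Conductivity-to-noise ratio as a product of the factors the study
    reports it scales with: B1+ map SNR, Larmor frequency, the
    conductivity value itself, and the number of averaged pixels.
    c is an unknown calibration constant to be fitted from simulation."""
    return c * snr_b1 * larmor_freq * conductivity * n_avg_pixels
```

Such a model makes the trade-off explicit: at a fixed field strength and SNR, the only free lever for a low-conductivity tissue is the averaging (filter) size, at the cost of spatial resolution.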

Functional Brain Mapping Using $H_2^{15}O$ Positron Emission Tomography ( I ): Statistical Parametric Mapping Method ($H_2^{15}O$ 양전자단층촬영술을 이용한 뇌기능 지도 작성(I): 통계적 파라메터 지도작성법)

  • Lee, Dong-Soo;Lee, Jae-Sung;Kim, Kyeong-Min;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.3
    • /
    • pp.225-237
    • /
    • 1998
  • Purpose: We investigated statistical methods for composing a functional brain map of human working memory, together with the principal factors that affect localization. Materials and Methods: Repeated PET scans with four successive tasks, consisting of one control task and three different activation tasks, were performed on six right-handed normal volunteers for 2 minutes after bolus injections of 925 MBq $H_2^{15}O$ at intervals of 30 minutes. Image data were analyzed using SPM96 (Statistical Parametric Mapping) implemented in Matlab (Mathworks Inc., U.S.A.). Images from the same subject were spatially registered and normalized using linear and nonlinear transformations. The significance of the difference between the control state and each activation state was estimated at every voxel based on the general linear model. Differences in global counts were removed by analysis of covariance (ANCOVA) with global activity as the covariate. Using the ANCOVA-adjusted mean and variance for each condition, a t-statistic was computed at every voxel. To make the results easier to interpret, the t-values were transformed to the standard Gaussian distribution (Z-scores). Results: All subjects carried out the activation and control tasks successfully, with an average rate of correct answers of 95%. The numbers of activated blobs were 4 for verbal memory I, 9 for verbal memory II, 9 for visual memory, and 6 for the conjunction of the three tasks. Verbal working memory predominantly activated left-sided structures, while visual memory activated the right hemisphere. Conclusion: rCBF PET imaging and the statistical parametric mapping method were useful for localizing the brain regions involved in verbal and visual working memory.
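The t-to-Z conversion mentioned in the Methods is the standard probability-preserving transform z = Phi^(-1)(F_t(t; df)); a small SciPy sketch (the degrees of freedom and t-values below are illustrative):

```python
import numpy as np
from scipy import stats

def t_to_z(t_map, df):
    """Probability-preserving t-to-Z conversion used to report SPM maps
    on a standard Gaussian scale: each t-value is mapped to the Z-value
    with the same cumulative probability."""
    return stats.norm.ppf(stats.t.cdf(t_map, df))

# Hypothetical voxel t-values with few degrees of freedom
t_map = np.array([0.0, 1.5, 3.0, 5.0])
z_map = t_to_z(t_map, df=5)
```

Because the t distribution has heavier tails than the Gaussian at low degrees of freedom, each Z-score comes out slightly less extreme than the t-value it replaces, while the voxel's p-value is preserved exactly.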
