• Title/Summary/Keyword: kernel-based method


Development of Radiation Dose Assessment Algorithm for Arbitrary Geometry Radiation Source Based on Point-kernel Method (Point-kernel 방법론 기반 임의 형태 방사선원에 대한 외부피폭 방사선량 평가 알고리즘 개발)

  • Ju Young Kim;Min Seong Kim;Ji Woo Kim;Kwang Pyo Kim
    • Journal of Radiation Industry
    • /
    • v.17 no.3
    • /
    • pp.275-282
    • /
    • 2023
  • Workers in nuclear power plants are likely to be exposed to radiation from sources of various geometries. To evaluate the exposure level, the point-kernel method can be utilized. To perform a dose assessment with this method, the radiation source must be divided into point sources, and the number of divisions must be set by the evaluator. However, a general user may have difficulty selecting an appropriate number of divisions and performing the evaluation. Therefore, the purpose of this study is to develop a dose assessment algorithm for arbitrarily shaped sources based on the point-kernel method. For this purpose, the point-kernel method was analyzed and the main factors for dose assessment were selected. Subsequently, based on the analyzed methodology, a dose assessment algorithm for arbitrarily shaped sources was developed. Lastly, the developed algorithm was verified against Microshield. The dose assessment procedure of the developed algorithm consists of 1) a boundary space setting step, 2) a source grid division step, 3) a point-source set generation step, and 4) a dose assessment step. In the boundary space setting step, the boundaries of the space occupied by the sources are set. In the grid division step, the boundary space is divided into several grids. In the point-source set generation step, the coordinates of the point sources are set by considering the proportion of the source occupying each grid. Finally, in the dose assessment step, the dose assessment results for each point source are summed to derive the dose rate. To verify the developed algorithm, an exposure scenario was established based on the standard exposure scenario presented by the American National Standards Institute, and the results of evaluations with the developed algorithm and Microshield were compared. The results of the evaluation with the developed algorithm ranged from 1.99×10⁻¹ to 9.74×10⁻¹ μSv·hr⁻¹ depending on the distance, and the error between the results of the developed algorithm and Microshield was about 0.48~6.93%. The error was attributed to the difference in the number and distribution of point sources between the developed algorithm and Microshield. The results of this study can be utilized for external exposure dose assessments based on the point-kernel method.
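The four-step procedure above can be sketched in code. The following is a minimal illustration only, not the authors' implementation: the function name and arguments are assumptions, the source region is described by a voxel occupancy array, and the dose kernel is reduced to a bare inverse-square law (attenuation, build-up, and flux-to-dose conversion are omitted).

```python
import numpy as np

def point_kernel_dose_rate(source_mask, grid_origin, grid_spacing,
                           total_activity, detector, gamma_constant=1.0):
    """Sketch of the four-step algorithm.  Step 1: the boundary space is
    the array `source_mask` itself.  Step 2: each array cell is one grid
    element.  Step 3: a point source is placed at the centre of every
    occupied cell, weighted by that cell's share of the total source.
    Step 4: the per-point inverse-square contributions are summed."""
    occupied = np.argwhere(source_mask > 0)
    fractions = source_mask[tuple(occupied.T)].astype(float)
    fractions /= fractions.sum()                      # activity weights
    centres = grid_origin + (occupied + 0.5) * grid_spacing
    r2 = np.sum((centres - detector) ** 2, axis=1)    # squared distances
    return gamma_constant * total_activity * np.sum(fractions / r2)
```

Refining `grid_spacing` (i.e. using more point sources) converges toward the exact result, which is exactly the division-number sensitivity the developed algorithm automates away from the user.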

A note on SVM estimators in RKHS for the deconvolution problem

  • Lee, Sungho
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.1
    • /
    • pp.71-83
    • /
    • 2016
  • In this paper we discuss a deconvolution density estimator obtained using support vector machines (SVM) and Tikhonov's regularization method for solving ill-posed problems in a reproducing kernel Hilbert space (RKHS). A remarkable property of the SVM is that it leads to sparse solutions, but the support vector deconvolution density estimator does not preserve sparsity as well as expected. Thus, in Section 3, we propose another support vector deconvolution estimator (method II) that leads to a very sparse solution. The performance of the deconvolution density estimators based on the support vector method is compared with the classical kernel deconvolution density estimator for the important cases of Gaussian and Laplacian measurement error by means of a simulation study. In the case of Gaussian error, the proposed support vector deconvolution estimator shows the same performance as the classical kernel deconvolution density estimator.
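The classical kernel deconvolution estimator that serves as the baseline in the comparison can be sketched for the Laplacian-error case, where the deconvoluting kernel has a closed form. This is a generic illustration, not the paper's code; the function name and bandwidth choice are assumptions. With a Gaussian kernel K and Laplace(0, σ) error, whose characteristic function is 1/(1 + σ²t²), the deconvoluting kernel is K_dec(u) = K(u) − (σ/h)² K″(u), with K″(u) = (u² − 1)K(u).

```python
import numpy as np

def deconv_kde_laplace(x_grid, w, h, sigma):
    """Classical deconvolution KDE for W = X + eps, eps ~ Laplace(0, sigma),
    evaluated at each point of x_grid from observations w with bandwidth h."""
    u = (x_grid[:, None] - w[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)        # Gaussian kernel
    K_dec = K - (sigma / h) ** 2 * (u ** 2 - 1) * K       # error correction
    return K_dec.mean(axis=1) / h
```

The correction term is what distinguishes this from an ordinary KDE on the contaminated data; it is also why the estimate can dip below zero in the tails.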

On Improving Resolution of Time-Frequency Representation of Speech Signals Based on Frequency Modulation Type Kernel (FM변조된 형태의 Kernel을 사용한 음성신호의 시간-주파수 표현 해상도 향상에 관한 연구)

  • Lee, He-Young;Choi, Seung-Ho
    • Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.17-29
    • /
    • 2005
  • Time-frequency representation reveals useful information about the instantaneous frequency, instantaneous bandwidth, and boundary of each AM-FM component of a speech signal. In many cases, the instantaneous frequency of each component is not constant. This variability of the instantaneous frequency causes degradation of resolution in the time-frequency representation. This paper presents a method of adaptively adjusting the transform kernel to prevent the degradation of resolution due to a time-varying instantaneous frequency. The transform kernel takes the form of a frequency-modulated function. The modulation function in the transform kernel is determined by the estimate of the instantaneous frequency, which is approximated by a first-order polynomial at each time instant. Also, the window function is modulated by the estimated instantaneous frequency to mitigate the fringing effect. In the proposed method, not only the transform kernel but also the shape and length of the window function are adaptively adjusted according to the instantaneous frequency of the speech signal.
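The core idea of an FM-modulated transform kernel can be sketched for a single analysis frame. This is a simplified illustration under assumed names: the first-order IF slope `alpha` is supplied directly rather than estimated per frame, the signal is assumed analytic (complex), and the adaptive window shaping described above is replaced by a fixed Hanning window.

```python
import numpy as np

def chirp_stft_frame(frame, fs, alpha):
    """One analysis frame with an FM-modulated kernel: the frame is first
    demodulated by the estimated linear IF trend (slope `alpha` in Hz/s,
    the first-order polynomial term of the instantaneous frequency), so a
    chirped component becomes locally stationary before the ordinary
    windowed Fourier transform is taken."""
    n = frame.size
    t = np.arange(n) / fs
    demod = frame * np.exp(-1j * np.pi * alpha * t ** 2)  # remove linear FM
    return np.fft.fft(np.hanning(n) * demod)
```

When `alpha` matches a component's sweep rate, its energy collapses into a narrow band instead of smearing across the swept range, which is the resolution improvement the paper targets.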


Power Quality Disturbances Identification Method Based on Novel Hybrid Kernel Function

  • Zhao, Liquan;Gai, Meijiao
    • Journal of Information Processing Systems
    • /
    • v.15 no.2
    • /
    • pp.422-432
    • /
    • 2019
  • A hybrid kernel function for the support vector machine is proposed to improve the classification performance on power quality disturbances. The mathematical model of the support vector machine's kernel function directly affects classification performance. Different types of kernel functions have different generalization and learning abilities, and a single kernel function cannot excel at both. To overcome this problem, we propose a hybrid kernel function composed of two single kernel functions to improve both generalization and learning ability. In simulations, we used single and multiple power quality disturbances to test the classification performance of the support vector machine algorithm with the proposed hybrid kernel function. Compared with other support vector machine algorithms, the improved algorithm performs better in classifying power quality signals with both single and multiple disturbances.
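The abstract does not specify which two kernels are combined, so the following sketch shows the general construction with an assumed RBF + polynomial pair: the RBF term supplies local learning ability, the polynomial term global generalization, and any convex combination of valid Mercer kernels is again a valid Mercer kernel.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row-vector sets X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    """Inhomogeneous polynomial kernel matrix."""
    return (X @ Y.T + c) ** degree

def hybrid_kernel(X, Y, lam=0.7):
    """Convex combination of a local (RBF) and a global (polynomial)
    kernel; `lam` trades learning ability against generalization."""
    return lam * rbf_kernel(X, Y) + (1 - lam) * poly_kernel(X, Y)
```

A callable like `hybrid_kernel` can be passed directly to SVM implementations that accept custom or precomputed kernels (e.g. scikit-learn's `SVC(kernel=...)`).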

Elongated Radial Basis Function for Nonlinear Representation of Face Data

  • Kim, Sang-Ki;Yu, Sun-Jin;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.7C
    • /
    • pp.428-434
    • /
    • 2011
  • Recently, subspace analysis has raised its performance to a higher level through the adoption of kernel-based nonlinearity. Especially, the radial basis function, based on its nonparametric nature, has shown promising results in face recognition. However, due to the endemic small sample size problem of face data, the conventional kernel-based feature extraction methods have difficulty in data representation. In this paper, we introduce a novel variant of the RBF kernel to alleviate this problem. By adopting the concept of the nearest feature line classifier, we show both effectiveness and generalizability of the proposed method, particularly regarding the small sample size issue.
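The nearest feature line (NFL) concept the authors adopt against the small sample size problem can be illustrated directly: two prototypes of a class span a "feature line" that interpolates (and extrapolates) unseen variation between them. The function name below is mine, not the paper's.

```python
import numpy as np

def nearest_feature_line_distance(x, x1, x2):
    """Distance from query `x` to the feature line through prototypes
    x1, x2: project x onto the line x1 + mu * (x2 - x1) and measure
    the residual.  The classifier assigns x to the class whose
    prototype pair gives the smallest such distance."""
    d = x2 - x1
    mu = np.dot(x - x1, d) / np.dot(d, d)
    projection = x1 + mu * d
    return np.linalg.norm(x - projection)
```

Note that `mu` is not clipped to [0, 1]: the feature line deliberately extends beyond the segment between the prototypes, which is how NFL generalizes from few samples per class.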

Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method (크리깅 근사모델 기반의 중요도 추출법을 이용한 고장확률 계산 방안)

  • Lee, Seunggyu;Kim, Jae Hoon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.41 no.5
    • /
    • pp.381-389
    • /
    • 2017
  • A kernel density was constructed from sampling points obtained in a Markov chain simulation and used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find the parameter of the kernel density. To assess the adequacy of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was computed.
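The estimator at the heart of this scheme can be sketched as follows. This is a generic illustration under stated assumptions, not the paper's method: the Kriging surrogate is replaced by direct limit-state calls, the Markov-chain samples are stood in for by a given set of `centers`, and the input density f is taken as standard normal.

```python
import numpy as np

def failure_prob_is(g, centers, h, n_samples, rng):
    """Importance-sampling failure probability with a Gaussian kernel
    density proposal q built on `centers` (bandwidth h):
    P_f = E_q[ 1{g(X) <= 0} * f(X)/q(X) ], f the standard-normal density."""
    dim = centers.shape[1]
    idx = rng.integers(0, len(centers), n_samples)
    x = centers[idx] + h * rng.standard_normal((n_samples, dim))   # sample q
    f = np.exp(-0.5 * np.sum(x ** 2, axis=1)) / (2 * np.pi) ** (dim / 2)
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    q = np.exp(-0.5 * d2 / h ** 2).mean(axis=1) / ((2 * np.pi) ** (dim / 2) * h ** dim)
    return np.where(g(x) <= 0.0, f / q, 0.0).mean()                # weighted hits
```

Because the proposal concentrates samples near the limit state, far fewer evaluations are wasted in the safe region than with crude Monte Carlo, which is why the kernel density is built only from samples in the vicinity of the limit state.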

Analysis of Bulk Metal Forming Process by Reproducing Kernel Particle Method (재생커널입자법을 이용한 체적성형공정의 해석)

  • Han, Kyu-Taek
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.8 no.3
    • /
    • pp.21-26
    • /
    • 2009
  • Finite element analysis of metal forming processes often fails because of severe mesh distortion at large deformation. In meshless methods, only nodal point data are used for modeling and solving: the domain of the problem is represented by a set of nodes, and a finite element mesh is unnecessary. This computational approach reduces time-consuming model generation and refinement effort, and it provides a higher rate of convergence than conventional finite element methods. The displacement shape functions are constructed by a reproducing kernel approximation that satisfies the consistency conditions. In this research, a meshless approach based on the reproducing kernel particle method (RKPM) is applied to metal forming analysis. Numerical examples are analyzed to verify the performance of the meshless method for metal forming analysis.
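The reproducing kernel approximation with its consistency conditions can be sketched in 1D. This is a textbook illustration, not the paper's code: a Gaussian window stands in for the cubic spline more commonly used in RKPM, and only linear consistency is enforced.

```python
import numpy as np

def rkpm_shape_functions(x, nodes, a):
    """1D reproducing kernel shape functions with linear consistency:
    the window phi is corrected through the moment matrix so that
    sum_I psi_I(x) = 1 and sum_I psi_I(x) * x_I = x hold exactly,
    which is what lets the approximation reproduce linear fields."""
    u = x - nodes
    phi = np.exp(-(u / a) ** 2)               # kernel (window) values
    H = np.vstack([np.ones_like(u), u])       # linear basis H(u) = [1, u]
    M = (H * phi) @ H.T                       # 2x2 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))
    return (b @ H) * phi                      # corrected shape functions
```

The correction is recomputed at every evaluation point, which is why meshless shape functions cost more per point than finite element ones but need no mesh at all.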


A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.714-720
    • /
    • 2021
  • Information theoretic learning (ITL) based on kernel density estimation, which has been applied successfully to machine learning and signal processing, has the drawback of severe sensitivity to the choice of kernel size. For maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods of adapting the remaining kernel size after removing a particular term have been studied. In this paper, it is shown that the main cause of the sensitivity in choosing the kernel size derives from that term, and that adaptively adjusting the kernel size in the remaining terms makes it approach the absolute value of the error, which prevents the weight adjustment from continuing. Thus, it is proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. In addition, experimental results show that, compared to the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment with channel models, the proposed method improves performance by 4 dB, so it is better suited to more complex or adverse conditions.
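An MCC-based adaptive filter with the fixed kernel size the study recommends can be sketched as follows. This is a standard textbook form, not the paper's exact algorithm; the function name and parameter values are assumptions.

```python
import numpy as np

def mcc_lms(x, d, n_taps, mu, sigma):
    """Adaptive filter under the MCC criterion: gradient ascent on
    E[exp(-e^2 / (2 sigma^2))] gives the LMS-like update
    w += mu * exp(-e^2 / (2 sigma^2)) * e * u, with the kernel size
    sigma held at a FIXED constant.  The Gaussian factor shrinks the
    step for large (e.g. impulsive) errors, the source of MCC's
    robustness relative to plain MSE-driven LMS."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # current tap-input vector
        e = d[n] - w @ u
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
    return w
```

With `sigma` too small the Gaussian factor suppresses nearly every update and learning stalls, which is the kernel-size sensitivity the abstract describes.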

An Automatic Diagnosis System for Hepatitis Diseases Based on Genetic Wavelet Kernel Extreme Learning Machine

  • Avci, Derya
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.4
    • /
    • pp.993-1002
    • /
    • 2016
  • Hepatitis is a major public health problem all around the world. This paper proposes an automatic diagnosis system for hepatitis based on a Genetic Algorithm (GA) Wavelet Kernel (WK) Extreme Learning Machine (ELM). The classifier used in this paper is a single layer neural network (SLNN) trained by the ELM learning method. The hepatitis disease datasets are obtained from the UCI machine learning database. In the Wavelet Kernel Extreme Learning Machine (WK-ELM) structure, there are three adjustable parameters of the wavelet kernel. These parameters and the number of hidden neurons play a major role in the performance of the ELM, so their values should be tuned carefully for the problem at hand. In this study, the optimum values of these parameters and the number of hidden neurons were obtained using a Genetic Algorithm (GA). The performance of the proposed GA-WK-ELM method is evaluated using statistical measures such as classification accuracy, sensitivity and specificity analysis, and ROC curves, and the results are compared with previous hepatitis diagnosis studies using the same database as well as different databases. An investigation of previous studies clearly shows that high classification accuracies were obtained by reducing the feature vector to a low dimension. The proposed GA-WK-ELM method, however, gives satisfactory results without reducing the feature vector; its highest calculated classification accuracy is 96.642%.
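A wavelet-kernel ELM of the kind described can be sketched in its kernel form. This is a generic illustration under assumptions: the GA tuning step is omitted, only the dilation parameter of the wavelet kernel is exposed (the paper tunes three), and the Morlet-type kernel of Zhang et al. is assumed.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Morlet-type wavelet kernel:
    K(x, y) = prod_i cos(1.75 * d_i / a) * exp(-d_i^2 / (2 a^2)), d = x - y."""
    d = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(1.75 * d / a) * np.exp(-d ** 2 / (2 * a ** 2)), axis=2)

def kernel_elm_train(X, T, a=1.0, C=100.0):
    """Kernel-ELM output weights: alpha = (K + I/C)^{-1} T,
    with C the regularization constant."""
    K = wavelet_kernel(X, X, a)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kernel_elm_predict(X_train, alpha, X_new, a=1.0):
    """Predict targets for X_new from the trained output weights."""
    return wavelet_kernel(X_new, X_train, a) @ alpha
```

The kernel and regularization parameters left fixed here are exactly the quantities the paper's GA searches over, since ELM performance depends strongly on them.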

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.810-831
    • /
    • 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in recognition decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, the incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments are conducted to evaluate ICDKCFA on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The experimental results show that the ICDKCFA method is faster than the original kernel cross-modal factor analysis with comparable performance. Moreover, ICDKCFA outperforms other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion methods.
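The incomplete Cholesky decomposition that gives the method its speed-up is a generic low-rank building block and can be sketched on its own; the coupling to cross-modal factor analysis is not reproduced here, and the function name is mine.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Incomplete Cholesky decomposition of a PSD Gram matrix: greedily
    pick the pivot with the largest residual diagonal until the residual
    trace falls below `tol`, yielding K ~= L @ L.T with L of shape
    (n, k), k << n.  This is what lets kernel methods avoid forming or
    inverting the full n x n Gram matrix on large datasets."""
    n = K.shape[0]
    max_rank = max_rank or n
    L = np.zeros((n, max_rank))
    d = np.diag(K).astype(float).copy()       # residual diagonal
    for k in range(max_rank):
        i = np.argmax(d)
        if d[i] < tol:
            return L[:, :k]                   # converged: rank-k factor
        L[:, k] = (K[:, i] - L @ L[i]) / np.sqrt(d[i])
        d -= L[:, k] ** 2
    return L
```

Downstream algebra is then carried out on the thin factor L instead of K, which is the source of the reported efficiency gain over the original kernel cross-modal factor analysis.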