• Title/Abstract/Keyword: bayesian test

Search results: 243 items

Comprehensive analysis of deep learning-based target classifiers in small and imbalanced active sonar datasets

  • 김근환;황용상;신성진;김주호;황수복;추영민
    • 한국음향학회지 / Vol. 42, No. 4 / pp.329-344 / 2023
  • In this paper, we comprehensively analyze the generalization performance of various deep-learning-based target classifiers applied to small and imbalanced active sonar datasets. Two active sonar datasets were generated from experimental data collected at different times and in different sea areas. Each sample in the datasets is a time-frequency-domain image extracted from a detected audio signal after detection processing. Twenty-two Convolutional Neural Network (CNN) models with various architectures were used as the classifier networks. In the experiments, the two datasets were used alternately as the training/validation set and the test set, and training/validation/testing was repeated ten times to quantify the variability of the classifier outputs, after which the target classification performance was analyzed. The training hyperparameters were tuned with Bayesian optimization. The results show that the shallow CNN models designed in this paper achieve more robust and better generalization performance than most of the deep CNN models. This paper can serve as a useful guide when setting the direction of future research on deep-learning-based active sonar target classification.
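
The hyperparameter tuning step mentioned above can be reproduced in outline with a general-purpose Bayesian optimizer. The sketch below is a minimal, hypothetical analogue using scikit-optimize's `gp_minimize`, with a small scikit-learn classifier standing in for the paper's CNN; the search space, objective, and synthetic imbalanced data are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: Bayesian optimization of classifier hyperparameters,
# standing in for the paper's CNN tuning (illustrative, not the authors' code).
from skopt import gp_minimize
from skopt.space import Real, Integer
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic, imbalanced stand-in for a small sonar dataset.
X, y = make_classification(n_samples=300, n_features=40,
                           weights=[0.8, 0.2], random_state=0)

space = [Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
         Integer(8, 128, name="hidden")]

def objective(params):
    lr, hidden = params
    clf = MLPClassifier(hidden_layer_sizes=(int(hidden),),
                        learning_rate_init=lr, max_iter=300, random_state=0)
    # Negative mean F1 so that gp_minimize effectively maximizes F1.
    return -cross_val_score(clf, X, y, cv=3, scoring="f1").mean()

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best F1:", -result.fun, "best params:", result.x)
```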

Skin Pigment Recognition using Projective Hemoglobin- Melanin Coordinate Measurements

  • Yang, Liu;Lee, Suk-Hwan;Kwon, Seong-Geun;Song, Ha-Joo;Kwon, Ki-Ryong
    • Journal of Electrical Engineering and Technology / Vol. 11, No. 6 / pp.1825-1838 / 2016
  • The detection of skin pigment is crucial in the diagnosis of skin diseases and in the evaluation of medical cosmetics and hairdressing. Accurate detection is a basis for the prompt treatment of skin diseases. This study presents a method to recognize and measure human skin pigment using Hemoglobin-Melanin (HM) coordinates. The proposed method extracts the skin area through a Gaussian skin-color model estimated from statistical analysis and decomposes the skin area into the two pigments, hemoglobin and melanin, using an Independent Component Analysis (ICA) algorithm. We then divide the two-dimensional (2D) HM coordinate plane into rectangular bins and compute the location histograms of hemoglobin and melanin for all bins. Each bin is labeled as hemoglobin, melanin, or normal skin with a Bayesian classifier. These bin-based HM projective histograms quantify the skin pigment and yield the standard deviation of the total pigment quantity relative to the surrounding normal skin. We tested our scheme on images taken under different illumination conditions, and several cosmetic coverings were used to test the performance of the proposed method. The experimental results show that the proposed method detects skin pigments more accurately and evaluates cosmetic covering effects more effectively than conventional methods.
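
As a rough illustration of the decomposition-plus-classification pipeline, the sketch below applies scikit-learn's FastICA to separate two synthetic mixed components and a Gaussian naive Bayes classifier to label points in the resulting 2D coordinate. All data, thresholds, and labels here are synthetic assumptions standing in for the paper's hemoglobin/melanin pipeline.

```python
# Illustrative sketch: ICA decomposition into two components followed by
# Bayesian (Gaussian naive Bayes) labeling in the 2D coordinate space.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two synthetic independent sources standing in for hemoglobin and melanin.
sources = rng.laplace(size=(500, 2))
mixing = np.array([[0.7, 0.3], [0.4, 0.6]])
observed = sources @ mixing.T            # mixed skin-color observations

hm = FastICA(n_components=2, random_state=0).fit_transform(observed)

# Synthetic labels: 0 = normal skin, 1 = hemoglobin-dominant, 2 = melanin-dominant.
labels = np.where(hm[:, 0] > 0.5, 1, np.where(hm[:, 1] > 0.5, 2, 0))
clf = GaussianNB().fit(hm, labels)
print("per-class counts:", np.bincount(clf.predict(hm)))
```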

Creation of Approximate Rules based on Posterior Probability

  • 박인규;최규석
    • 한국인터넷방송통신학회논문지 / Vol. 15, No. 5 / pp.69-74 / 2015
  • This paper studies the generation of control rules that guarantee fast retrieval by reducing the attributes that make up an information system in a database. In general, an information system contains many unnecessary attributes, and when its objects are inconsistent, accurate responses can hardly be expected. This paper therefore focuses on simplifying the information system by removing unnecessary attributes using the concept of rough entropy and the Bayesian posterior probability. In the proposed algorithm, while generating an optimal reduct based on rough set theory, the posterior probability is used to compare the implication of condition attributes for the decision attribute on the scale of rough entropy, so attributes with weak influence are removed and the control rules can be expressed concisely. The effectiveness of the knowledge reduction is demonstrated by applying the proposed algorithm to the hiring of new employees.
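
The core ranking step, scoring each condition attribute by how sharply it determines the decision attribute, can be sketched with plain pandas. The conditional entropy below is a generic stand-in for the paper's rough-entropy measure, and the toy hiring table is invented for illustration.

```python
# Sketch: rank condition attributes by the conditional entropy of the decision
# attribute given each one (a stand-in for the paper's rough-entropy measure).
import numpy as np
import pandas as pd

df = pd.DataFrame({                      # invented toy hiring table
    "degree":     ["BS", "MS", "BS", "PhD", "MS", "BS"],
    "experience": ["low", "high", "low", "high", "low", "high"],
    "hired":      ["no", "yes", "no", "yes", "no", "yes"],
})

def conditional_entropy(cond, dec):
    """H(dec | cond) computed from empirical posterior probabilities."""
    total = 0.0
    for _, group in df.groupby(cond):
        p_group = len(group) / len(df)
        posterior = group[dec].value_counts(normalize=True)
        total += p_group * -(posterior * np.log2(posterior)).sum()
    return total

scores = {a: conditional_entropy(a, "hired") for a in ["degree", "experience"]}
# Attributes with high conditional entropy determine little and can be dropped.
print(sorted(scores.items(), key=lambda kv: kv[1]))
```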

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods / Vol. 24, No. 4 / pp.325-337 / 2017
  • In this paper, we propose a new four-parameter long-term lifetime distribution for a competing-risks scenario with decreasing, increasing, and unimodal hazard rate functions, namely the Weibull-Poisson long-term distribution. This distribution arises from a scenario of competing latent risks in which the lifetime associated with each particular risk is not observable and only the minimum lifetime among all risks is recorded, in a long-term (cure-fraction) context. It can, however, also be used in any other situation in which it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases of the new distribution. We discuss its properties, including the probability density, survival, and hazard functions, and give explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, simulation studies were performed to study the mean squared error of the maximum likelihood estimates and to compare the proposed model with its particular cases. The Akaike information criterion, the Bayesian information criterion, and the likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases, demonstrating its potential and competitiveness.
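
To make the censored-likelihood setup concrete, the sketch below fits the Weibull long-term (cure-fraction) model, one of the particular cases named above, by maximum likelihood with scipy on simulated right-censored data. The population survival S_pop(t) = p + (1 - p)·S(t) follows the standard long-term formulation; the simulated data and starting values are assumptions.

```python
# Sketch: maximum-likelihood fit of a Weibull long-term (cure-fraction) model
# on simulated right-censored data: S_pop(t) = p + (1 - p) * S_weibull(t).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
n, true_p = 400, 0.3                     # 30% long-term survivors (cured)
cured = rng.random(n) < true_p
t_event = weibull_min.rvs(1.5, scale=2.0, size=n, random_state=1)
t_event[cured] = np.inf                  # cured units never fail
censor = rng.uniform(0.5, 6.0, size=n)
time = np.minimum(t_event, censor)
event = (t_event <= censor).astype(float)

def neg_loglik(params):
    shape, scale, p = params
    f = weibull_min.pdf(time, shape, scale=scale)
    s = weibull_min.sf(time, shape, scale=scale)
    dens = (1 - p) * f                   # density for observed failures
    surv = p + (1 - p) * s               # population survival for censored
    return -np.sum(event * np.log(dens) + (1 - event) * np.log(surv))

fit = minimize(neg_loglik, x0=[1.0, 1.0, 0.5],
               bounds=[(0.05, 10), (0.05, 10), (0.01, 0.99)])
print("shape, scale, cure fraction:", fit.x)
```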

Model selection algorithm in Gaussian process regression for computer experiments

  • Lee, Youngsaeng;Park, Jeong-Soo
    • Communications for Statistical Applications and Methods / Vol. 24, No. 4 / pp.383-396 / 2017
  • The model in our approach assumes that computer responses are a realization of a Gaussian process superimposed on a regression model, called a Gaussian process regression model (GPRM). Selecting a subset of variables, or building a good reduced model, is an important step in classical regression for identifying variables influential to responses and for further analysis such as prediction or classification. One reason to select variables for prediction is to prevent over-fitting or under-fitting the data. The same reasoning and approach apply to the GPRM, yet only a few works on variable selection in GPRMs exist. In this paper, we propose a new algorithm to build a good prediction model among candidate GPRMs. It is a post-processing step for the algorithm that includes the Welch method suggested by previous researchers. The proposed algorithm selects the non-zero regression coefficients (β's) using forward and backward methods along with a Lasso-guided approach. During this process, the covariance parameters (θ's) are held fixed at values pre-selected by the Welch algorithm. We illustrate the superiority of our proposed models over the Welch method and non-selection models using four test functions and one real data example. Future extensions are also discussed.
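
A loose analogue of the Lasso-guided forward selection can be sketched with scikit-learn: rank variables by the order in which the Lasso path activates them, then grow a Gaussian process regression model over that ranking and keep the subset with the best cross-validated score. This is a simplified stand-in, not the authors' algorithm, and the test function is assumed.

```python
# Sketch: Lasso-guided forward variable selection for Gaussian process
# regression (simplified analogue of the approach described above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import lasso_path
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(size=(80, 5))
y = np.sin(3 * X[:, 0]) + 2 * X[:, 1] + 0.05 * rng.standard_normal(80)

# Rank variables by when they first become active along the Lasso path.
alphas, coefs, _ = lasso_path(X, y)
first_active = [np.argmax(np.abs(coefs[j]) > 0) if np.any(coefs[j] != 0)
                else len(alphas) for j in range(X.shape[1])]
order = np.argsort(first_active)

best_score, best_subset = -np.inf, None
for k in range(1, X.shape[1] + 1):       # forward growth over the ranking
    subset = order[:k]
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    score = cross_val_score(gpr, X[:, subset], y, cv=3, scoring="r2").mean()
    if score > best_score:
        best_score, best_subset = score, subset
print("selected variables:", best_subset, "cv r2:", round(best_score, 3))
```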

Information Recommendation in a Mobile Environment Using Multi-Criteria Decision Making

  • 박한샘;박문희;조성배
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 14, No. 3 / pp.306-310 / 2008
  • Since preferences for an information recommendation service can change with the situation, the user's context must first be known to provide the service. This paper proposes a recommendation system that considers the preferences of multiple users in a mobile environment and applies it to restaurant recommendation. A Bayesian network was used to model each individual user's preferences in the mobile environment, and because restaurant recommendation usually has to consider the preferences of a group of users rather than a single user, the Analytic Hierarchy Process (AHP), a multi-criteria decision-making method, was used to derive group preferences from the individual preferences. For the experiments, recommendations were made in ten different situations, and a final SUS usability evaluation confirmed that the proposed system was rated highly for usability.
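
The AHP step can be illustrated directly: given a pairwise comparison matrix of criteria (here an invented 3-criterion example for restaurant choice), the priority weights are the normalized principal eigenvector, and a consistency ratio checks the judgments. This is a generic AHP sketch, not the paper's exact matrices.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix
# (invented 3-criterion example: taste, price, distance).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],           # taste vs. price vs. distance
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized principal eigenvector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```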

Self-adaptive sampling for sequential surrogate modeling of time-consuming finite element analysis

  • Jin, Seung-Seop;Jung, Hyung-Jo
    • Smart Structures and Systems / Vol. 17, No. 4 / pp.611-629 / 2016
  • This study presents a new approach to surrogate modeling for time-consuming finite element analysis. A surrogate model is widely used to reduce the computational cost of iterative computational analyses. Although a variety of methods have been investigated, two practical difficulties remain: (1) how to derive an optimal design of experiments (i.e., the number of training samples and their locations); and (2) how to diagnose the surrogate model. To overcome these difficulties, we propose sequential surrogate modeling based on a Gaussian process model (GPM) with self-adaptive sampling. The proposed approach not only enables further sampling to make the GPM more accurate, but also evaluates model adequacy within a sequential framework. Its applicability is first demonstrated on mathematical test functions. It is then applied, as a substitute for iterative finite element analysis, to Monte Carlo simulation for response uncertainty analysis under correlated input uncertainties. In all numerical studies, the GPM was built automatically with minimal user intervention. The proposed approach can be customized to various response surfaces and helps less experienced users save effort.
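
In its simplest form, the self-adaptive sampling loop is: fit a GP, query the candidate with the largest predictive standard deviation, evaluate the expensive function there, and repeat. The sketch below runs this loop on a cheap 1D function standing in for the finite element analysis; the stopping rule and kernel are illustrative assumptions.

```python
# Sketch: sequential GP surrogate with uncertainty-driven (self-adaptive)
# sampling; a cheap 1D function stands in for the finite element analysis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_fea(x):                    # stand-in for the FE analysis
    return np.sin(6 * x) + 0.3 * x

candidates = np.linspace(0, 2, 200).reshape(-1, 1)
X = np.array([[0.1], [1.0], [1.9]])      # small initial design
y = expensive_fea(X).ravel()

for _ in range(10):                      # sequential enrichment loop
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mean, std = gpr.predict(candidates, return_std=True)
    if std.max() < 0.01:                 # illustrative adequacy criterion
        break
    x_new = candidates[np.argmax(std)]   # sample where the GP is least sure
    X = np.vstack([X, [x_new]])
    y = np.append(y, expensive_fea(x_new))
print("final design size:", len(X), "max predictive std:", round(std.max(), 4))
```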

Method of Processing the Outliers and Missing Values of Field Data to Improve RAM Analysis Accuracy

  • 김인석;정원
    • 한국신뢰성학회지:신뢰성응용연구 / Vol. 17, No. 3 / pp.264-271 / 2017
  • Purpose: Field operation data contain missing values and outliers due to various causes in the data collection process, so caution is required when using RAM analysis results based on field operation data. The purpose of this study is to present a method that minimizes RAM analysis error in field data to improve accuracy. Methods: Statistical methods are presented for processing the outliers and missing values of field operating data, and after the RAM analysis, the differences before and after applying the techniques are discussed. Results: Availability is estimated to be 6.8 to 23.5 % lower than before processing, so the treatment of missing values and outliers greatly affects the RAM analysis result. Conclusion: A RAM analysis of the OO weapon system was performed, and suggestions for improving RAM analysis were presented through a comparison of the new and current methods. Analyzing data without appropriate treatment of erroneous values may produce incorrect conclusions, leading to inappropriate decisions and actions.
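
A generic version of this preprocessing step, flagging IQR outliers and imputing missing values before computing availability, can be sketched with pandas. The TBF/TTR columns, fence multiplier, and median imputation below are assumptions, not the paper's exact procedure.

```python
# Sketch: IQR-based outlier screening and median imputation of field data
# before estimating availability (generic stand-in for the paper's method).
import numpy as np
import pandas as pd

df = pd.DataFrame({                      # invented field records (hours)
    "tbf": [120, 135, np.nan, 150, 2900, 140, 128, np.nan, 133],
    "ttr": [4.0, 5.5, 3.8, np.nan, 4.2, 95.0, 5.1, 4.6, 3.9],
})

def clean(col):
    q1, q3 = col.quantile([0.25, 0.75])
    iqr = q3 - q1
    # Treat values outside the 1.5*IQR fences as outliers, then impute by median.
    masked = col.mask((col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr))
    return masked.fillna(masked.median())

tbf, ttr = clean(df["tbf"]), clean(df["ttr"])
mtbf, mttr = tbf.mean(), ttr.mean()
availability = mtbf / (mtbf + mttr)      # inherent availability estimate
print(f"MTBF={mtbf:.1f} h, MTTR={mttr:.2f} h, A={availability:.4f}")
```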

Testing Gravity with Cosmic Shear Data from the Deep Lens Survey

  • Sabiu, Cristiano G.;Yoon, Mijin;Jee, Myungkook James
    • 천문학회보 / Vol. 43, No. 2 / pp.40.4-41 / 2018
  • The current 'standard model' of cosmology provides a minimal theoretical framework that can explain everything from the Gaussian, nearly scale-invariant density perturbations observed in the CMB to the late-time clustering of galaxies. However, accepting this framework requires including in our cosmic inventory a vacuum energy ~122 orders of magnitude lower than quantum mechanical predictions, or alternatively a new scalar field (dark energy) with negative pressure. An alternative to adding extra components to the Universe is to modify the equations of gravity. Although GR is supported by many current observations, alternative models can still be considered. Recently, many works have attempted to test for modified gravity using the large-scale clustering of galaxies, the ISW effect, cluster abundance, RSD, 21 cm observations, and weak lensing. In this work, we compare various modified gravity models using cosmic shear data from the Deep Lens Survey together with data from the CMB, SNe Ia, and BAO. We use the Bayesian evidence to quantify the comparison robustly, which naturally penalizes complex models with weak data support. In this talk we present our methodology and preliminary results, which show that f(R) gravity is mildly disfavoured by the data.
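
The model-comparison machinery can be illustrated at toy scale: under a crude Laplace/BIC approximation, ln Z ≈ -BIC/2, so the BIC difference between two fitted models approximates twice the log Bayes factor. The sketch below compares a constant against a linear model on synthetic data; it is a pedagogical stand-in, not the full evidence computation used with the cosmological likelihoods.

```python
# Sketch: approximate Bayesian evidence comparison via BIC (ln Z ~ -BIC/2),
# a toy stand-in for the full evidence calculation described above.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 0.2 * x + rng.normal(0, 0.1, size=50)    # weakly sloped synthetic data
sigma = 0.1                                  # known noise level (assumed)

def bic(residuals, k):
    n = len(residuals)
    loglike = -0.5 * np.sum((residuals / sigma) ** 2) \
              - n * np.log(sigma * np.sqrt(2 * np.pi))
    return k * np.log(n) - 2 * loglike

bic_const = bic(y - y.mean(), k=1)           # "no slope" model
slope, intercept = np.polyfit(x, y, 1)
bic_linear = bic(y - (slope * x + intercept), k=2)

# Positive ln(Bayes factor) favours the linear model here.
ln_bayes_factor = 0.5 * (bic_const - bic_linear)
print("ln Bayes factor (linear vs const):", round(ln_bayes_factor, 2))
```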


Analysis of causality of the Baltic Dry Index (BDI) and maritime trade volume

  • 배성훈;박근식
    • 무역학회지 / Vol. 44, No. 2 / pp.127-141 / 2019
  • In this study, the relationship between the Baltic Dry Index (BDI) and maritime trade volume in the dry cargo market was examined using a vector autoregressive (VAR) model. Data from 1992 to 2018 on the BDI and the maritime trade volumes of iron ore, steam coal, coking coal, grain, and minor bulks were analyzed. Granger causality analysis showed that the BDI affects the trade volumes of coking coal and minor bulks, whereas the trade volumes of iron ore, steam coal, and grain show no causal relationship with the BDI. Impulse response analysis showed that a shock to the BDI had its greatest impact on coking coal at a lag of two years and became negligible at a lag of ten years; its impact on minor bulks was strongest at a lag of three years and likewise negligible at a lag of ten years. This study examined the relationship between maritime trade volume and the BDI in the dry bulk shipping market, where uncertainty is high. The results have an economic sustainability aspect in that they can aid the risk management of shipping companies. The study is also significant from an academic point of view in that the long-term relationship between the two time series was analyzed through causality tests between the variables. However, a forecasting model that helps decision-makers in maritime markets should be developed using more sophisticated methods, such as a Bayesian VAR model.
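
The VAR/Granger workflow described above maps directly onto statsmodels. The sketch below fits a small VAR to synthetic series standing in for the BDI and one trade-volume series, runs the Granger causality test, and computes impulse responses; the variable names, lag choices, and simulated dynamics are illustrative assumptions.

```python
# Sketch: VAR estimation, Granger causality test, and impulse responses
# with statsmodels, on synthetic stand-ins for BDI and trade volume.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 200
bdi = np.zeros(n)
volume = np.zeros(n)
for t in range(1, n):                    # BDI built to lead volume by one lag
    bdi[t] = 0.6 * bdi[t - 1] + rng.normal()
    volume[t] = 0.5 * volume[t - 1] + 0.4 * bdi[t - 1] + rng.normal()

df = pd.DataFrame({"bdi": bdi, "volume": volume})
res = VAR(df).fit(maxlags=4, ic="aic")   # lag order chosen by AIC

gc = res.test_causality(caused="volume", causing="bdi", kind="f")
print(gc.summary())                      # H0: bdi does not Granger-cause volume

irf = res.irf(10)                        # impulse responses up to 10 periods
print(irf.irfs[:3, 1, 0])                # response of volume to a bdi shock
```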