• Title/Abstract/Keyword: binary cross entropy


Discriminant Analysis of Binary Data with Multinomial Distribution by Using the Iterative Cross Entropy Minimization Estimation

  • Lee Jung Jin
    • Communications for Statistical Applications and Methods / Vol. 12, No. 1 / pp. 125-137 / 2005
  • Many discriminant analysis models for binary data have been used in real applications, but none of the classification models dominates in all circumstances (Asparoukhov & Krzanowski (2001)). Lee and Hwang (2003) proposed a new classification model using the multinomial distribution with the maximum entropy estimation method. The model showed promising results for a small number of variables, but its performance was not satisfactory for a large number of variables. This paper explores the use of the iterative cross entropy minimization estimation method in place of maximum entropy estimation. Simulation experiments show that this method can compete with other well-known existing classification models.
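
As background for the estimation objective named above, the following is a minimal NumPy sketch of the cross entropy between two discrete (multinomial) distributions; it only illustrates the quantity being minimized, not the authors' iterative estimation procedure, and the toy cell probabilities are invented for illustration.

```python
import numpy as np

def cross_entropy(p, q):
    """Cross entropy H(p, q) = -sum_k p_k * log(q_k) between two discrete distributions."""
    q = np.clip(q, 1e-12, None)   # avoid log(0)
    return -np.sum(p * np.log(q))

# Toy empirical cell probabilities of binary-variable patterns (3 multinomial cells)
p = np.array([0.5, 0.3, 0.2])
for q in (np.array([0.4, 0.4, 0.2]), p.copy()):
    print(q, round(cross_entropy(p, q), 4))
# For fixed p, H(p, q) is smallest when q equals p, which is the quantity an
# iterative cross entropy minimization routine drives the model probabilities toward.
```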

Comparison of Different CNN Models in Tuberculosis Detecting

  • Liu, Jian;Huang, Yidi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp. 3519-3533 / 2020
  • Tuberculosis is a chronic and delayed infection which is easily contracted by young people. According to the statistics of the World Health Organization (WHO), nearly ten million people fell ill with tuberculosis and a total of 1.5 million people died from tuberculosis in 2018 (including 251,000 people with HIV). Tuberculosis is the single infectious agent responsible for the most deaths. In order to help doctors with tuberculosis diagnosis, we compare the tuberculosis classification abilities of six popular convolutional neural network (CNN) models on the same data set to find the best model. Before training, we optimize three parts of the CNN to achieve better results. We employ the sigmoid function in place of the step function as the activation function. Furthermore, we use the binary cross entropy function as the cost function in place of the traditional quadratic cost function. Finally, we choose stochastic gradient descent (SGD) as the gradient descent algorithm. From the results of our experiments, we find that Densenet121 is most suitable for tuberculosis diagnosis, achieving the highest accuracy of 0.835. Further optimization and extension depend on enlarging the data set and improving Densenet121.
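
To make the three training choices above concrete, here is a minimal PyTorch sketch, assuming a DenseNet-121 with a single sigmoid output for the tuberculosis/normal decision; the learning rate, batch size and input size are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical training step: DenseNet-121 with one output unit for the
# tuberculosis / normal decision.
model = models.densenet121(weights=None, num_classes=1)
criterion = nn.BCEWithLogitsLoss()      # sigmoid activation fused with binary cross entropy
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(4, 3, 224, 224)                 # dummy batch of chest X-ray crops
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])  # 1 = tuberculosis, 0 = normal

optimizer.zero_grad()
logits = model(images)                 # shape (4, 1)
loss = criterion(logits, labels)       # binary cross entropy on sigmoid probabilities
loss.backward()
optimizer.step()
print(loss.item())
```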

Fixed size LS-SVM for multiclassification problems of large data sets

  • Hwang, Hyung-Tae
    • Journal of the Korean Data and Information Science Society / Vol. 21, No. 3 / pp. 561-567 / 2010
  • Multiclassification is typically performed using voting-scheme methods based on combining a set of binary classifications. In this paper we use a multiclassification method with the hat matrix of the least squares support vector machine (LS-SVM), which can be regarded as a revised one-against-all method. To tackle multiclass problems for large data sets, we use the Nyström approximation and the quadratic Rényi entropy with estimation in the primal space, as used in fixed size LS-SVM. For the selection of hyperparameters, generalized cross validation techniques are employed. Experimental results are then presented to indicate the performance of the proposed procedure.
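
For context on the Nyström step, here is a minimal NumPy sketch of the kernel approximation, assuming an RBF kernel and a random landmark subset; fixed size LS-SVM instead selects the landmarks by maximizing the quadratic Rényi entropy, which is omitted here, and all sizes are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # full data set (n = 1000 points)
m = 100                                          # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)  # random landmark selection (illustrative)

C = rbf_kernel(X, X[idx])                        # n x m cross-kernel block
W = rbf_kernel(X[idx], X[idx])                   # m x m landmark kernel block
K_nystrom = C @ np.linalg.pinv(W) @ C.T          # rank-m approximation of the full n x n kernel

K_full = rbf_kernel(X, X)
print(np.linalg.norm(K_full - K_nystrom) / np.linalg.norm(K_full))  # relative error
```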

라벨 스무딩을 활용한 치은염 이진 분류기 캘리브레이션 (Calibration for Gingivitis Binary Classifier via Epoch-wise Decaying Label-Smoothing)

  • 이상현
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021년도 추계학술대회 / pp. 594-596 / 2021
  • Future healthcare systems will rely heavily on ill-labeled data due to a scarcity of experts trained well enough to label the data. Considering the contamination of the dataset, it is not desirable for the neural network to become overconfident on the dataset; rather, it is preferable to leave it some margin in its predictions. In this paper, we propose a novel epoch-wise decaying label-smoothing function to alleviate model overconfidence, and it outperforms the neural network trained with conventional cross entropy by 6.0%.
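
A minimal PyTorch sketch of label-smoothed binary cross entropy with an epoch-wise decaying smoothing factor follows; the linear decay schedule, the initial value of 0.2 and the dummy batch are assumptions for illustration, not the schedule used in the paper.

```python
import torch
import torch.nn.functional as F

def smoothed_bce(logits, targets, epsilon):
    """Binary cross entropy against label-smoothed targets."""
    smoothed = targets * (1.0 - epsilon) + 0.5 * epsilon   # pull hard 0/1 labels toward 0.5
    return F.binary_cross_entropy_with_logits(logits, smoothed)

epsilon_0, num_epochs = 0.2, 30                # initial smoothing and schedule length (assumed)
for epoch in range(num_epochs):
    epsilon = epsilon_0 * (1.0 - epoch / num_epochs)   # smoothing decays toward 0 each epoch
    logits = torch.randn(8, 1)                          # stand-in for model(x) on a batch
    targets = torch.randint(0, 2, (8, 1)).float()       # stand-in for gingivitis labels
    loss = smoothed_bce(logits, targets, epsilon)
print(loss.item())
```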


Decision Tree를 이용한 효과적인 유방암 진단 (Effective Diagnostic Method Of Breast Cancer Data Using Decision Tree)

  • 정용규;이승호;성호중
    • 한국인터넷방송통신학회논문지 / Vol. 10, No. 5 / pp. 57-62 / 2010
  • In the medical field, research on decision tree techniques that allow large-scale data to be searched and extracted quickly has recently been underway. Several methods such as CART, C4.5, and CHAID have been developed; among these classification techniques, some decision tree algorithms split the data in a binary fashion, which raises the concern that information in the remaining data may be lost. Among them, C4.5 builds the tree according to whether the measured entropy value is high or low, while the CART algorithm uses an entropy metric and can be applied to both categorical and continuous data. In this paper, we apply two classification techniques, C4.5 and CART, to breast cancer patient data and evaluate their performance by analyzing the results. In the experiments, the accuracy of the results was measured through cross validation.
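
Since both C4.5 and CART choose splits from entropy-style impurity measures, the following is a minimal sketch of Shannon entropy and the information gain of a binary split on toy labels; the labels and split are invented and not tied to the breast cancer data used in the paper.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a binary label vector (0 = benign, 1 = malignant)."""
    p = np.bincount(labels, minlength=2) / len(labels)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels, left_mask):
    """Entropy reduction achieved by a binary split of the labels."""
    left, right = labels[left_mask], labels[~left_mask]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])                             # toy class labels
split = np.array([True, True, True, True, False, False, False, False])  # candidate split
print(information_gain(labels, split))   # about 0.19 bits for this toy split
```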

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems / Vol. 29, No. 1 / pp. 221-235 / 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgements of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed to first detect the existence of cracks using a deep convolutional neural network (CNN) and then segment the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to reduce the false positives and false negatives effectively. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function that consists of the binary cross entropy loss and the Dice loss is introduced to enhance the segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from different overlapping crack patches and to decide whether a pixel belongs to a crack or not. Comprehensive experiments have demonstrated that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline reference implemented with the original U-Net.
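
A minimal PyTorch sketch of a combined binary cross entropy plus soft Dice segmentation loss of the kind described above follows; the equal 1:1 weighting of the two terms and the smoothing constant are assumptions, since the paper's exact weighting is not given here.

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """Binary cross entropy plus soft Dice loss for binary segmentation masks."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.smooth = smooth

    def forward(self, logits, targets):
        bce = self.bce(logits, targets)
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (probs.sum() + targets.sum() + self.smooth)
        return bce + (1.0 - dice)        # equal weighting of the two terms (assumed)

loss_fn = BCEDiceLoss()
logits = torch.randn(2, 1, 448, 448)                    # dummy U-Net outputs on 448 x 448 patches
masks = torch.randint(0, 2, (2, 1, 448, 448)).float()   # dummy binary crack masks
print(loss_fn(logits, masks).item())
```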

Precise segmentation of fetal head in ultrasound images using improved U-Net model

  • Vimala Nagabotu;Anupama Namburu
    • ETRI Journal / Vol. 46, No. 3 / pp. 526-537 / 2024
  • Monitoring fetal growth in utero is crucial to anomaly diagnosis. However, current computer-vision models struggle to accurately assess the key metrics (i.e., head circumference and occipitofrontal and biparietal diameters) from ultrasound images, largely owing to a lack of training data. Mitigation usually entails image augmentation (e.g., flipping, rotating, scaling, and translating). Nevertheless, accuracy on this task remains insufficient. Hence, we offer a U-Net fetal head measurement tool that leverages a hybrid Dice and binary cross-entropy loss to compute the similarity between actual and predicted segmented regions. Ellipse-fitted two-dimensional ultrasound images acquired from the HC18 dataset are input, and their lower feature layers are reused for efficiency. During regression, a novel region-of-interest pooling layer extracts elliptical feature maps, and during segmentation, feature pyramids fuse field-layer data with a new scale attention method to reduce noise. Performance is measured by Dice similarity, mean pixel accuracy, and mean intersection-over-union, giving scores of 97.90%, 99.18%, and 97.81%, respectively, which match or outperform the best U-Net models.
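
Since the loss term here is the same hybrid Dice plus binary cross entropy combination sketched for the previous entry, the example below instead illustrates the three reported evaluation metrics on toy binary masks; the mask shapes are invented, and the paper's figures are averaged over classes and images rather than computed on a single mask pair.

```python
import numpy as np

def segmentation_metrics(pred, true):
    """Dice similarity, pixel accuracy and IoU for a single pair of binary masks."""
    pred, true = pred.astype(bool), true.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = 2.0 * inter / (pred.sum() + true.sum())
    accuracy = (pred == true).mean()
    iou = inter / union
    return dice, accuracy, iou

pred = np.zeros((128, 128)); pred[30:90, 40:100] = 1     # dummy predicted fetal-head mask
true = np.zeros((128, 128)); true[32:92, 38:98] = 1      # dummy ground-truth mask
print(segmentation_metrics(pred, true))
```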

손실함수의 특성에 따른 UNet++ 모델에 의한 변화탐지 결과 분석 (Analysis of Change Detection Results by UNet++ Models According to the Characteristics of Loss Function)

  • 정미라;최호성;최재완
    • 대한원격탐사학회지 / Vol. 36, No. 5-2 / pp. 929-937 / 2020
  • In this paper, we detect changed regions in multi-temporal satellite imagery using the UNet++ model, one of the deep learning techniques for semantic segmentation. To analyze the training behavior under various loss functions, we evaluated the change detection results of UNet++ models trained with binary cross entropy and with the Jaccard coefficient. In addition, the results of the deep learning models were compared against those of an existing pixel-based change detection technique using WorldView-3 satellite imagery. The experimental results confirmed that the performance of the deep learning model can vary depending on the characteristics of the loss function, but also that it outperforms the existing techniques.
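
A minimal PyTorch sketch of a differentiable (soft) Jaccard loss for binary change maps follows; this is a common formulation chosen for illustration and may differ in detail from the variant trained in the paper, and the tensor shapes are dummies.

```python
import torch

def soft_jaccard_loss(logits, targets, smooth=1.0):
    """Differentiable Jaccard (IoU) loss for binary change maps."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum() - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

logits = torch.randn(2, 1, 256, 256)                     # dummy UNet++ change-map logits
targets = torch.randint(0, 2, (2, 1, 256, 256)).float()  # dummy binary change labels
print(soft_jaccard_loss(logits, targets).item())
```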

Adaptive Attention Annotation Model: Optimizing the Prediction Path through Dependency Fusion

  • Wang, Fangxin;Liu, Jie;Zhang, Shuwu;Zhang, Guixuan;Zheng, Yang;Li, Xiaoqian;Liang, Wei;Li, Yuejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 9 / pp. 4665-4683 / 2019
  • Previous methods build image annotation models by leveraging three basic dependencies: relations between image and label (image/label), between images (image/image), and between labels (label/label). Even though plenty of research shows that multiple dependencies can work jointly to improve annotation performance, different dependencies do not actually "work jointly" in those diagrams, whose performance largely depends on the result predicted by the image/label section. To address this problem, we propose the adaptive attention annotation model (AAAM) to associate these dependencies with the prediction path, which is composed of a series of labels (tags) in the order they are detected. In particular, we optimize the prediction path by detecting the relevant labels from the easy-to-detect to the hard-to-detect, which are found using the Binary Cross-Entropy (BCE) and Triplet Margin (TM) losses, respectively. Besides, in order to capture the information of each label, instead of explicitly extracting regional features, we propose a self-attention mechanism to implicitly enhance the relevant regions and suppress the irrelevant ones. To validate the effectiveness of the model, we conduct experiments on three well-known public datasets, COCO 2014, IAPR TC-12 and NUSWIDE, and achieve better performance than the state-of-the-art methods.
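
To illustrate the two losses named above, here is a minimal PyTorch sketch computing a BCE term on multi-label tag logits and a triplet margin term on embedding triplets; the tensor sizes, the margin of 1.0, and the simple sum of the two terms are assumptions, not the AAAM's actual architecture or weighting.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()                 # supervises the easy-to-detect labels
triplet = nn.TripletMarginLoss(margin=1.0)   # pulls hard-to-detect labels via embedding triplets

logits = torch.randn(8, 80)                         # dummy multi-label tag logits (80 tags)
targets = torch.randint(0, 2, (8, 80)).float()      # dummy ground-truth tag indicators
anchor, positive, negative = (torch.randn(8, 256) for _ in range(3))  # dummy embeddings

loss = bce(logits, targets) + triplet(anchor, positive, negative)     # simple sum (assumed)
print(loss.item())
```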

One-step deep learning-based method for pixel-level detection of fine cracks in steel girder images

  • Li, Zhihang;Huang, Mengqi;Ji, Pengxuan;Zhu, Huamei;Zhang, Qianbing
    • Smart Structures and Systems / Vol. 29, No. 1 / pp. 153-166 / 2022
  • Identifying fine cracks in steel bridge facilities is a challenging task in structural health monitoring (SHM). This study proposes an end-to-end crack image segmentation framework based on a one-step convolutional neural network (CNN) for pixel-level object recognition with high accuracy. To particularly address the challenges arising from small-object detection against a complex background, efforts were made in loss function selection to address sample imbalance and in module modification to improve the generalization ability on complicated images. Specifically, loss functions were compared among alternatives including the Binary Cross Entropy (BCE), Focal, Tversky and Dice losses, with the last three specialized for imbalanced sample distributions. Structural modifications with dilated convolution, Spatial Pyramid Pooling (SPP) and Feature Pyramid Network (FPN) were also performed to form a new backbone termed CrackDet. Models with various loss functions and feature extraction modules were trained on crack images and tested on full-scale images collected from steel box girders. The CNN model that incorporated the classic U-Net as its backbone and Dice loss as its loss function achieved the highest mean Intersection-over-Union (mIoU) of 0.7571 on full-scale pictures. In contrast, the best performance on cropped crack images was achieved by integrating CrackDet with Dice loss, at an mIoU of 0.7670.
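
For reference, here is a minimal PyTorch sketch of two of the imbalance-oriented losses compared above, binary Focal loss and Tversky loss; the alpha/gamma values are common defaults rather than the paper's settings, and the Tversky weighting convention (which coefficient multiplies false negatives versus false positives) varies in the literature.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy pixels so the rare crack pixels dominate."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                   # probability of the true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

def tversky_loss(logits, targets, alpha=0.7, beta=0.3, smooth=1.0):
    """Tversky loss: Dice generalized with separate weights on false negatives and false positives."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fn = ((1.0 - probs) * targets).sum()
    fp = (probs * (1.0 - targets)).sum()
    return 1.0 - (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)

logits = torch.randn(2, 1, 256, 256)                     # dummy crack-segmentation logits
masks = torch.randint(0, 2, (2, 1, 256, 256)).float()
print(focal_loss(logits, masks).item(), tversky_loss(logits, masks).item())
```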