• Title/Summary/Keyword: Ground-Truth

Search Result 299

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • v.24 no.4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT examinations of the abdomen that were obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance in the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of the liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998). 
Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed using various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
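The Dice similarity coefficient (DSC) used as the segmentation metric above can be computed from two binary masks; a minimal NumPy sketch (the toy masks are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks (illustrative only): 4 foreground pixels vs. 3, all overlapping
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1] = 1; b[1, 2] = 1
print(round(dice_coefficient(a, b), 3))  # 2*3 / (4+3) ≈ 0.857
```

A DSC of 1.0 means perfect overlap, which is why identical masks score 1.0 and disjoint masks score 0.0.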

Contextual Classifier with the Context Probability as a Weighting Function (Context Probability를 Weighting Function으로 사용한 Contextual Classifier)

  • 노준경;박규호;김명환
    • Korean Journal of Remote Sensing
    • /
    • v.2 no.1
    • /
    • pp.3-11
    • /
    • 1986
  • The current methods of estimating the context distribution function in a contextual classifier are "classify and count", the ground-truth-guided method (GTGM), and the unbiased estimator. In this paper we propose a new contextual classifier in which the context distribution is replaced by a context probability that is estimated from the transition probability. The classification accuracy increases considerably compared with the classical one.
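As a rough illustration of weighting per-pixel class scores by a context probability derived from a transition probability, here is a hypothetical two-class sketch (all numbers and the exact weighting rule are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical two-class example: per-pixel spectral likelihoods are
# weighted by a context probability taken from the class-transition
# matrix of a neighboring pixel.
transition = np.array([[0.8, 0.2],   # row i: P(pixel class | neighbor class i)
                       [0.3, 0.7]])
likelihood = np.array([0.4, 0.6])    # spectral likelihood of each class
neighbor_class = 0

context_prob = transition[neighbor_class]  # context probability as weighting function
score = likelihood * context_prob          # weighted discriminant
print(int(np.argmax(score)))               # class 0 wins: 0.32 vs 0.12
```

With a neutral neighbor the spectral likelihood alone would pick class 1; the context weighting flips the decision to class 0.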

Using SG Arrays for Hydrology in Comparison with GRACE Satellite Data, with Extension to Seismic and Volcanic Hazards

  • Crossley David;Hinderer Jacques
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.1
    • /
    • pp.31-49
    • /
    • 2005
  • We first review some history of the Global Geodynamics Project (GGP), particularly in the progress of ground-satellite gravity comparisons. The GGP Satellite Project has involved the measurement of ground-based superconducting gravimeters (SGs) in Europe for several years and we make quantitative comparisons with the latest satellite GRACE data and hydrological models. The primary goal is to recover information about seasonal hydrology cycles, and we find a good correlation at the microgal level between the data and modeling. One interesting feature of the data is low soil moisture resulting from the European heat wave in 2003. An issue with the ground-based stations is the possibility of mass variations in the soil above a station, and particularly for underground stations these have to be modeled precisely. Based on this work with a regional array, we estimate the effectiveness of future SG arrays to measure co-seismic deformation and silent-slip events. Finally we consider gravity surveys in volcanic areas, and predict the accuracy in modeling subsurface density variations over time periods from months to years.

Learning-based Inertial-wheel Odometry for a Mobile Robot (모바일 로봇을 위한 학습 기반 관성-바퀴 오도메트리)

  • Myeongsoo Kim;Keunwoo Jang;Jaeheung Park
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.427-435
    • /
    • 2023
  • This paper proposes a method of estimating the pose of a mobile robot by using a learning model. When estimating the pose of a mobile robot, wheel encoder and inertial measurement unit (IMU) data are generally utilized. However, depending on the condition of the ground surface, slip occurs due to the interaction between the wheel and the floor. In this case, it is hard to predict the pose accurately by using only the encoder and IMU. Thus, in order to reduce the pose error even in such conditions, this paper introduces a pose estimation method based on a learning model that uses data from the wheel encoder and IMU. A long short-term memory (LSTM) network is adopted as the learning model. The inputs to the LSTM are velocity and acceleration data from the wheel encoder and IMU, and its outputs are corrected linear and angular velocities. The estimated pose is calculated by numerically integrating the output velocities. The dataset used as the ground truth of the learning model was collected under various ground conditions. Experimental results demonstrate that the proposed learning model achieves higher pose estimation accuracy than an extended Kalman filter (EKF) and other learning models using the same data under various ground conditions.
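The final step described above, numerically integrating the corrected linear and angular velocities into a planar pose, can be sketched as a simple Euler dead-reckoning loop; the variable names, step size, and velocities are illustrative:

```python
import math

def integrate_pose(x, y, theta, v, w, dt):
    """One Euler step of planar dead reckoning from linear velocity v
    and angular velocity w over time step dt."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Drive straight at 1 m/s for 1 s in 10 steps (toy values)
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_pose(*pose, v=1.0, w=0.0, dt=0.1)
print(pose)  # ≈ (1.0, 0.0, 0.0)
```

In the paper's pipeline the (v, w) fed to such an integrator would be the corrected velocities emitted by the LSTM rather than the raw encoder/IMU values.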

Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks

  • Thanathornwong, Bhornsawan;Suebnukarn, Siriwan
    • Imaging Science in Dentistry
    • /
    • v.50 no.2
    • /
    • pp.169-174
    • /
    • 2020
  • Purpose: Periodontal disease causes tooth loss and is associated with cardiovascular diseases, diabetes, and rheumatoid arthritis. The present study proposes using a deep learning-based object detection method to identify periodontally compromised teeth on digital panoramic radiographs. A faster regional convolutional neural network (faster R-CNN), which is a state-of-the-art deep detection network, was adapted from the natural image domain using a small annotated clinical dataset. Materials and Methods: In total, 100 digital panoramic radiographs of periodontally compromised patients were retrospectively collected from our hospital's information system and augmented. The periodontally compromised teeth found in each image were annotated by experts in periodontology to obtain the ground truth. The Keras library, which is written in Python, was used to train and test the model on a single NVidia 1080Ti GPU. The faster R-CNN model used a pretrained ResNet architecture. Results: The average precision rate of 0.81 demonstrated that there was a significant region of overlap between the predicted regions and the ground truth. The average recall rate of 0.80 showed that the regions of periodontally compromised teeth generated by the detection method largely excluded healthy teeth areas. In addition, the model achieved a sensitivity of 0.84, a specificity of 0.88 and an F-measure of 0.81. Conclusion: The faster R-CNN trained on a limited amount of labeled imaging data performed satisfactorily in detecting periodontally compromised teeth. The application of a faster R-CNN to assist in the detection of periodontally compromised teeth may reduce diagnostic effort by saving assessment time and allowing automated screening documentation.
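The reported precision, recall, sensitivity, specificity, and F-measure all derive from confusion-matrix counts; a minimal sketch with illustrative counts (not the study's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall (= sensitivity), specificity and F-measure
    from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f_measure

# Illustrative counts only, not the study's data
p, r, s, f = detection_metrics(tp=84, fp=16, fn=16, tn=120)
print(p, r)  # precision and recall are both 0.84 here
```

When precision and recall are equal, the F-measure (their harmonic mean) equals both, which is why balanced examples like this one are easy to sanity-check.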

Application of UAV-based RGB Images for the Growth Estimation of Vegetable Crops

  • Kim, Dong-Wook;Jung, Sang-Jin;Kwon, Young-Seok;Kim, Hak-Jin
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.45-45
    • /
    • 2017
  • On-site monitoring of vegetable growth parameters, such as leaf length, leaf area, and fresh weight, in an agricultural field can provide useful information for farmers to establish farm management strategies suitable for optimum production of vegetables. Unmanned Aerial Vehicles (UAVs) are currently gaining growing interest for agricultural applications. This study reports on validation testing of previously developed vegetable growth estimation models based on UAV-acquired RGB images of white radish and Chinese cabbage. The specific objective was to investigate the potential of the UAV-based RGB camera system for effectively quantifying temporal and spatial variability in the growth status of white radish and Chinese cabbage in a field. RGB images were acquired during an automated flight mission with a multi-rotor UAV equipped with a low-cost RGB camera while automatically tracking a predefined path. The acquired images were initially geo-located based on the flight log data saved by the UAV, and then mosaicked using a commercial image processing software. Otsu threshold-based crop coverage and DSM-based crop height were used as the two predictor variables of the previously developed multiple linear regression models to estimate the growth parameters of the vegetables. The predictive capabilities of the UAV sensing system for estimating the growth parameters of the two vegetables were evaluated quantitatively by comparison with ground truth data. There were highly linear relationships between the actual and estimated leaf lengths, widths, and fresh weights, with coefficients of determination up to 0.7. However, the slopes of the regressions between the ground truth and the estimated values were lower than 0.5, thereby requiring the use of a site-specific normalization method.
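The growth-estimation step uses a multiple linear regression on two predictors, Otsu-based crop coverage and DSM-based crop height; a minimal least-squares sketch with made-up sample values (the coefficients and data are illustrative, not the study's models):

```python
import numpy as np

# Made-up samples: (crop coverage fraction, crop height in m) -> fresh weight (g)
X = np.array([[0.20, 0.10], [0.45, 0.18], [0.60, 0.25], [0.80, 0.33]])
y = np.array([200.0, 316.0, 390.0, 486.0])

# Fit fresh_weight = b0 + b1*coverage + b2*height by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate(coverage, height):
    """Predict a growth parameter from the two UAV-derived predictors."""
    return coef[0] + coef[1] * coverage + coef[2] * height

# The toy data are exactly linear, so the fit recovers b = (100, 400, 200)
print(round(estimate(0.5, 0.2), 1))
```

The same structure extends to the other growth parameters (leaf length, width) by fitting one regression per parameter.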


Land Cover Classification over East Asian Region Using Recent MODIS NDVI Data (2006-2008) (최근 MODIS 식생지수 자료(2006-2008)를 이용한 동아시아 지역 지면피복 분류)

  • Kang, Jeon-Ho;Suh, Myoung-Seok;Kwak, Chong-Heum
    • Atmosphere
    • /
    • v.20 no.4
    • /
    • pp.415-426
    • /
    • 2010
  • A land cover map over the East Asian region (Kongju National University Land Cover map: KLC) was produced using a support vector machine (SVM) and evaluated with ground truth data. The basic input data are the most recent three years (2006-2008) of MODIS (MODerate resolution Imaging Spectroradiometer) NDVI (normalized difference vegetation index) data. The spatial resolution and temporal frequency of MODIS NDVI are 1 km and 16 days, respectively. To minimize the number of cloud-contaminated pixels in the MODIS NDVI data, a maximum value composite is applied to the 16-day data. A correction of cloud-contaminated pixels based on a spatiotemporal continuity assumption is then applied to the monthly NDVI data. To reduce the dataset and improve the classification quality, nine phenological variables, such as the NDVI maximum, amplitude, and average, were derived from the corrected monthly NDVI data. Three existing land cover maps (International Geosphere Biosphere Programme: IGBP, University of Maryland: UMd, and MODIS) were used to build a "quasi" ground truth data set, composed of pixels that all three land cover maps classified as the same land cover type. The classification results show that the fractions of broadleaf trees and grasslands are greater, but those of croplands and needleleaf trees are smaller, compared to those of IGBP or UMd. The validation results using an in-situ observation database show that the percentages of pixels in agreement with the observations are 80%, 77%, 63%, and 57% for the MODIS, KLC, IGBP, and UMd land cover data, respectively. The significant differences in land cover types among MODIS, IGBP, UMd, and KLC occur mainly in southern China and Manchuria, where most pixels are contaminated by cloud in summer and snow in winter, respectively. This shows that the quality of the raw data is one of the most important factors in land cover classification.
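The maximum value composite applied to the 16-day NDVI data takes the per-pixel maximum across composites so that cloud-lowered values are suppressed; a toy NumPy sketch (grid sizes and values are illustrative):

```python
import numpy as np

# Two 16-day NDVI composites of the same month (toy 2x2 grids); cloudy
# pixels show artificially low NDVI values.
ndvi_a = np.array([[0.62, 0.10],
                   [0.55, 0.71]])
ndvi_b = np.array([[0.18, 0.58],
                   [0.60, 0.40]])

# Maximum value composite: per-pixel maximum suppresses cloud-lowered values
monthly = np.maximum(ndvi_a, ndvi_b)
print(monthly)  # [[0.62 0.58], [0.60 0.71]]
```

This works because clouds almost always lower NDVI, so the highest value in a window is the most likely cloud-free observation.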

Triangulation Based Skeletonization and Trajectory Recovery for Handwritten Character Patterns

  • Phan, Dung;Na, In-Seop;Kim, Soo-Hyung;Lee, Guee-Sang;Yang, Hyung-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.1
    • /
    • pp.358-377
    • /
    • 2015
  • In this paper, we propose a novel approach for trajectory recovery. Our system uses a triangulation procedure for skeletonization and graph theory to extract the trajectory. Skeletonization extracts the polyline skeleton according to the polygonal contours of the handwritten characters; as a result, the junctions become clear and characters that touch each other are separated. The approach for trajectory recovery is based on graph theory, finding the optimal path in the graph that best represents the trajectory. An undirected graph model consisting of one or more strokes is constructed from the polyline skeleton. By using the polyline skeleton, our approach accelerates the search for an optimal path. In order to evaluate the performance, we built our own dataset, which includes test images and ground truth. The dataset consists of thousands of handwritten character and word images, which are extracted from five handwritten documents. To show the relative advantage of our skeletonization method, we first compare the results against those from Zhang-Suen, a state-of-the-art skeletonization method. For the trajectory recovery, we conduct a comparison using the Root Mean Square Error (RMSE) and Dynamic Time Warping (DTW) in order to measure the error between the ground truth and the actual output. The comparison reveals that our approach performs better in both the skeletonization stage and the trajectory recovery stage. Moreover, the processing time comparison proves that our system is faster than the existing systems.
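Dynamic Time Warping, used above to score trajectory recovery against the ground truth, can be sketched with the standard dynamic-programming recurrence (a generic 1-D implementation, not the authors' code):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    using the standard O(n*m) dynamic-programming table."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the warp absorbs the repeat
```

Unlike RMSE, DTW tolerates local timing differences between two trajectories, which is why the paper reports both.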

Estimation of Drone Velocity with Sum of Absolute Difference between Multiple Frames (다중 프레임의 SAD를 이용한 드론 속도 측정)

  • Nam, Donho;Yeom, Seokwon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.20 no.3
    • /
    • pp.171-176
    • /
    • 2019
  • Drones are highly utilized because they can efficiently acquire long-distance videos. In drone operation, the speed, which is the magnitude of the velocity, can be set, but the moving direction cannot, so accurate information about the drone's movement must be estimated. In this paper, we estimate the velocity of a drone moving at a constant speed and direction. In order to estimate the drone's velocity, the displacement of the target frame that minimizes the sum of absolute differences (SAD) between the reference frame and the target frame is obtained. The ground truth of the drone's velocity is calculated using the position of a certain matching point over all frames. In the experiments, a video was obtained from a drone moving at a constant speed at a height of 150 meters. The root mean squared errors (RMSE) of the estimated velocities in the x and y directions and the RMSE of the speed were obtained, showing the reliability of the proposed method.
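The displacement search described above, minimizing the SAD between a reference frame and a shifted target frame, can be sketched as an exhaustive integer-shift search (the frame contents and shift range are illustrative; real frames would be much larger):

```python
import numpy as np

def sad_displacement(ref, tgt, max_shift=2):
    """Return the integer shift (dy, dx) that minimizes the mean absolute
    difference (SAD per overlapping pixel) between ref and the shifted tgt."""
    h, w = ref.shape
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0, y1 = max(0, -dy), h - max(0, dy)
            x0, x1 = max(0, -dx), w - max(0, dx)
            r = ref[y0:y1, x0:x1]
            t = tgt[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            score = np.abs(r - t).mean()  # mean keeps overlap sizes comparable
            if best is None or score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Toy "scene": two overlapping windows simulate consecutive drone frames
scene = np.arange(144, dtype=float).reshape(12, 12) ** 1.5
ref = scene[2:10, 2:10]
tgt = scene[1:9, 0:8]  # same scene viewed after a (1, 2) pixel shift
print(sad_displacement(ref, tgt))  # (1, 2)
```

Dividing the recovered pixel displacement by the frame interval and the ground sampling distance would then give the velocity estimate.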

Expanded Object Localization Learning Data Generation Using CAM and Selective Search and Its Retraining to Improve WSOL Performance (CAM과 Selective Search를 이용한 확장된 객체 지역화 학습데이터 생성 및 이의 재학습을 통한 WSOL 성능 개선)

  • Go, Sooyeon;Choi, Yeongwoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.9
    • /
    • pp.349-358
    • /
    • 2021
  • Recently, methods of finding the attention area or localization area of an object in an image using CAM (Class Activation Map)[1] have been widely studied in WSOL (Weakly Supervised Object Localization). Extracting the attention area from the object heat map using CAM has the disadvantage that, by focusing mainly on the part of the object where the features are most concentrated, it cannot find the entire area of the object. To improve this, we use CAM and Selective Search[6] together: we first expand the attention area in the heat map, then apply Gaussian smoothing to the extended area to generate retraining data, and finally retrain the network on this data to expand the attention areas of the objects. The proposed method requires retraining only once, and the search time to find a localization area is greatly reduced since the selective search is not needed at this stage. In the experiments, the attention areas were expanded from the existing CAM heat maps, and the IoU (Intersection over Union) with the ground truth bounding boxes of the expanded attention areas improved by about 58% compared to the existing CAM.
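The IoU comparison against the ground-truth bounding boxes follows the standard definition for axis-aligned boxes; a minimal sketch (the box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

An IoU of 1.0 means the predicted and ground-truth boxes coincide exactly; disjoint boxes score 0.0.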