• Title/Summary/Keyword: ResNeXt101

Search Results: 3

Oriented object detection in satellite images using convolutional neural network based on ResNeXt

  • Asep Haryono;Grafika Jati;Wisnu Jatmiko
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.307-322
    • /
    • 2024
  • Most object detection methods use a horizontal bounding box, which causes misaligned detections when adjacent objects are oriented in arbitrary directions. Hence, the horizontal anchor should be replaced by a rotating anchor to determine oriented bounding boxes. A two-stage process that delineates a horizontal bounding box and then converts it into an oriented bounding box is inefficient. To improve detection, a box-boundary-aware vector can instead be estimated with a convolutional neural network. Specifically, we propose a ResNeXt101 encoder to overcome the weaknesses of the conventional ResNet, which becomes less effective as network depth and complexity increase. Owing to its cardinality, realized through a homogeneous, multi-branch design with few additional hyperparameters, ResNeXt captures richer information than ResNet. Experimental results demonstrate that our proposal detects oriented objects more accurately and faster than the baseline, achieving a mean average precision of 89.41% and an inference rate of 23.67 fps.
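
The abstract describes swapping a ResNeXt101 encoder into a box-boundary-aware-vector detector. The sketch below is not the authors' implementation: it assumes PyTorch/torchvision, feeds the stride-32 feature map directly into the prediction heads (the upsampling decoder a real detector would add is omitted), and the head layout (class heatmap, eight box-boundary-aware offsets, orientation score) and class count are illustrative.

```python
# Minimal sketch: ResNeXt101 backbone + box-boundary-aware-vector heads (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import resnext101_32x8d


class OrientedDetectorSketch(nn.Module):
    def __init__(self, num_classes: int = 15):  # 15 classes is an assumption (e.g. DOTA-like data)
        super().__init__()
        backbone = resnext101_32x8d(weights=None)          # grouped-convolution ("cardinality") encoder
        # keep everything up to the final 2048-channel feature map (stride 32)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.reduce = nn.Conv2d(2048, 256, kernel_size=1)
        # per-pixel outputs: class heatmap, 4 box-boundary-aware vectors (8 offsets), orientation score
        self.heatmap = nn.Conv2d(256, num_classes, kernel_size=3, padding=1)
        self.bbavectors = nn.Conv2d(256, 8, kernel_size=3, padding=1)
        self.orientation = nn.Conv2d(256, 1, kernel_size=3, padding=1)

    def forward(self, x):
        f = torch.relu(self.reduce(self.encoder(x)))
        return {
            "heatmap": torch.sigmoid(self.heatmap(f)),
            "vectors": self.bbavectors(f),
            "orientation": torch.sigmoid(self.orientation(f)),
        }


if __name__ == "__main__":
    model = OrientedDetectorSketch()
    out = model(torch.randn(1, 3, 512, 512))
    print({k: v.shape for k, v in out.items()})  # feature maps at 1/32 of the input resolution
```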

Application of Deep Learning-Based Nuclear Medicine Lung Study Classification Model (딥러닝 기반의 핵의학 폐검사 분류 모델 적용)

  • Jeong, Eui-Hwan;Oh, Joo-Young;Lee, Ju-Young;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.45 no.1
    • /
    • pp.41-47
    • /
    • 2022
  • The purpose of this study is to apply a deep learning model that can distinguish lung perfusion and lung ventilation images in nuclear medicine and to evaluate its image classification ability. Image data pre-processing was performed in the following order: image matrix size adjustment, min-max normalization, image center position adjustment, train/validation/test dataset split, and data augmentation. The convolutional neural network (CNN) structures VGG-16, ResNet-18, Inception-ResNet-v2, and SE-ResNeXt-101 were used. For model evaluation, classification performance metrics, class activation maps (CAM), and a statistical image evaluation method were applied. On the classification performance metrics, SE-ResNeXt-101 and Inception-ResNet-v2 tied for the highest performance. In the CAM results, the cardiac and right lung regions were highly activated for lung perfusion, whereas the upper lung and neck regions were highly activated for lung ventilation. The statistical image evaluation showed a meaningful difference between SE-ResNeXt-101 and Inception-ResNet-v2. The study confirms the applicability of CNN models to lung scintigraphy classification. The results are expected to serve as baseline data for research on new artificial intelligence models and to support stable image management in clinical practice.
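
As a rough illustration of the pre-processing steps listed in this abstract (min-max normalization and the train/validation/test split), the NumPy sketch below uses assumed image sizes and split fractions; matrix resizing, centering, and augmentation would typically be delegated to an image library and are omitted. None of the names here come from the paper.

```python
# Minimal pre-processing sketch: min-max normalization and a train/validation/test split.
import numpy as np


def min_max_normalize(img: np.ndarray) -> np.ndarray:
    """Scale pixel values to [0, 1]; the paper's exact target range is assumed."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)


def split_dataset(x: np.ndarray, y: np.ndarray, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle and split into train/validation/test subsets; the fractions are illustrative."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val, n_test = int(len(x) * val_frac), int(len(x) * test_frac)
    test, val, train = idx[:n_test], idx[n_test:n_test + n_val], idx[n_test + n_val:]
    return (x[train], y[train]), (x[val], y[val]), (x[test], y[test])


# toy usage with random stand-ins for scintigraphy matrices
images = np.stack([min_max_normalize(np.random.rand(256, 256)) for _ in range(100)])
labels = np.random.randint(0, 2, size=100)            # 0 = perfusion, 1 = ventilation (assumed labels)
train, val, test = split_dataset(images, labels)
print(train[0].shape, val[0].shape, test[0].shape)    # (80, 256, 256) (10, 256, 256) (10, 256, 256)
```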

Assessing Stream Vegetation Dynamics and Revetment Impact Using Time-Series RGB UAV Images and ResNeXt101 CNNs

  • Seung-Hwan Go;Kyeong-Soo Jeong;Jong-Hwa Park
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • Small streams, despite their rich ecosystems, face challenges in vegetation assessment due to the limitations of traditional, time-consuming methods. This study presents an approach combining unmanned aerial vehicles (UAVs), convolutional neural networks (CNNs), and the vegetation differential vegetation index (VDVI) to improve both the assessment and the management of stream vegetation. Focusing on Idong Stream in South Korea (2.7 km long, 2.34 km² basin area) with eight different revetment methods, we leveraged high-resolution RGB images captured by UAVs on five dates (July to December). These images trained a ResNeXt101 CNN model, achieving 89% accuracy in classifying cover types (soil, water, and vegetation), which enabled detailed spatial and temporal analysis of vegetation distribution. VDVI calculations on the classified vegetation areas then allowed assessment of vegetation vitality. Key findings: (a) The CNN model generated highly accurate cover maps, facilitating precise monitoring of vegetation changes over time and space. (b) August displayed the highest average VDVI (0.24), indicating peak vegetation growth, which is crucial for stabilizing streambanks and resisting flow. (c) Different revetment methods affected vegetation vitality differently. Fieldstone sections exhibited high initial vitality followed by a decline due to leaf browning; block-type sections and the control group showed a gradual decline after peak growth; interestingly, the "H environment block" exhibited minimal change, suggesting potential benefits for specific ecological functions. (d) Despite initial differences, all sections converged toward similar vegetation distribution trends after 15 years owing to the influence of surrounding vegetation. This study demonstrates the potential of UAV-based remote sensing and CNNs for small-stream vegetation assessment and management. By providing high-resolution, temporally detailed data, this approach offers distinct advantages over traditional methods, ultimately benefiting both the environment and surrounding communities through informed decision-making for improved stream health and ecological conservation.
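
The vitality step in this workflow applies VDVI only to pixels the ResNeXt101 model classified as vegetation. The sketch below assumes the commonly used visible-band formula VDVI = (2G - R - B) / (2G + R + B) and a hypothetical binary vegetation mask; it is not the authors' code.

```python
# Minimal sketch: per-pixel VDVI over a vegetation mask from a cover-classification map.
import numpy as np


def vdvi(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 float array in [0, 1]; returns per-pixel VDVI in roughly [-1, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) / (2 * g + r + b + 1e-8)   # assumed visible-band VDVI formula


# toy usage: mean VDVI over a (hypothetical) vegetation mask standing in for the CNN cover map
rgb = np.random.rand(512, 512, 3).astype(np.float32)
vegetation_mask = np.random.rand(512, 512) > 0.5
index = vdvi(rgb)
print(float(index[vegetation_mask].mean()))
```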