• Title/Summary/Keyword: U-Net++


Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Geon-Ung, PARK;Bong-Geun, SONG;Kyung-Hun, PARK;Hung-Kyu, LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.63-80
    • /
    • 2022
  • As technologies that analyze and predict issues by reconstructing real space in virtual space have developed, acquiring precise spatial information for complex cities has become increasingly important. In this study, images of an urban area with a complex landscape were acquired using an unmanned aerial vehicle, and land cover classification was performed with object-based image analysis and semantic segmentation, image classification techniques suited to high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of each artificial intelligence (AI) model's land cover classification was examined for areas the models had not learned. When trained on the training site, the models achieved land cover classification accuracies of 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed best in both land cover classification accuracy and the replicability evaluation. As precise spatial information grows in importance, the results of this study are expected to contribute to urban environment research as a method of generating baseline data.
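The accuracy figures above are standard overall-accuracy values; as a minimal sketch (not the paper's code), the per-site accuracy and the replicability drop between sites can be computed from a classification confusion matrix like so (the 3-class matrix below is hypothetical):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy = trace / total for a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def replicability_drop(acc_train, acc_test):
    """Drop in accuracy when a model trained on one site is applied
    to an unseen assessment site (e.g., 0.953 - 0.930 = 0.023 for U-Net)."""
    return acc_train - acc_test

# Hypothetical 3-class confusion matrix for illustration only
cm_train = [[90, 5, 5],
            [4, 92, 4],
            [3, 2, 95]]
acc = overall_accuracy(cm_train)
```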

Private Pension Dependency of Korean and U.S. Households (한국과 미국 가계의 사적연금자산 의존도)

  • Yuh, Yoonkyung
    • 한국노년학
    • /
    • v.36 no.3
    • /
    • pp.809-826
    • /
    • 2016
  • This study analyzed the private pension dependency of Korean and U.S. households using the most recent datasets from the two countries: the 2013 Korean Retirement and Income Study (KReIS) of the National Pension Research Institute in Korea and the 2013 Survey of Consumer Finances (SCF) of the FRB in the U.S. Private pension dependency was defined as the proportion of private pension wealth in each household's total financial wealth, and a tobit model was used to investigate its determinants in the two countries. After controlling for other factors, household income and net worth, and the householder's age, educational attainment, and health status were crucial determinants of private pension dependency in both countries. The householder's age, educational attainment, and health tended to increase private pension dependency in both Korea and the U.S. However, household income and net worth affected private pension dependency in opposite directions: dependency increased with income and net worth in Korea, while it decreased with income and net worth in the U.S. The results provide useful implications for future pension systems and policy in Korea.
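A tobit model handles the fact that the dependency share piles up at zero for households with no private pension. As an illustrative sketch only (the paper's exact specification and variables are not reproduced in the abstract), a left-censored tobit can be estimated by maximizing the standard censored-normal log-likelihood on synthetic data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def tobit_negll(params, y, X):
    """Negative log-likelihood of a tobit model left-censored at 0.
    params = [beta..., log_sigma]; X includes an intercept column."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = y <= 0
    ll = np.zeros_like(y, dtype=float)
    # Censored observations contribute P(y* <= 0) = Phi(-x'b / sigma)
    ll[cens] = norm.logcdf(-xb[cens] / sigma)
    # Uncensored observations contribute the normal density of the residual
    ll[~cens] = norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)
    return -ll.sum()

# Synthetic data: true intercept 0.5, true slope 1.0, sigma 1.0
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(size=n)
y = np.maximum(y_star, 0.0)          # left-censoring at zero
res = minimize(tobit_negll, x0=np.zeros(3), args=(y, X), method="BFGS")
beta_hat = res.x[:2]
```

A share bounded in [0, 1], as here, would in practice call for a two-limit tobit; the one-sided version above is kept for brevity.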

A three-stage deep-learning-based method for crack detection of high-resolution steel box girder image

  • Meng, Shiqiao;Gao, Zhiyuan;Zhou, Ying;He, Bin;Kong, Qingzhao
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.29-39
    • /
    • 2022
  • Crack detection plays an important role in the maintenance and protection of the steel box girders of bridges. However, since cracks occupy only an extremely small region of the high-resolution images captured in actual conditions, existing methods cannot deal with this kind of image effectively. To solve this problem, this paper proposes a novel three-stage method based on deep learning and morphology operations. The training and test sets consist of 360 images (4928 × 3264 pixels) of steel box girders. The first stage converts the high-resolution images into sub-images with a patch-based method and locates the crack regions with a CBAM ResNet-50 model; recall reaches 0.95 on the test set. The second stage uses an Attention U-Net model to obtain the accurate geometric edges of cracks from the first-stage results; the IoU of the segmentation model in this stage attains 0.48. The third stage removes wrongly predicted isolated points from the results through a dilation operation and an outlier elimination algorithm; the test-set IoU ascends to 0.70 after this stage. Ablation experiments were conducted to optimize the parameters and further improve the accuracy of the proposed method. The results show that: (1) the best patch size for sub-images is 1024 × 1024; (2) CBAM ResNet-50 and Attention U-Net achieved the best results in the first and second stages, respectively; (3) pre-training the models of the first two stages improves the IoU by 2.9%. In general, the method is of great significance for crack detection.
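The first-stage patching step can be sketched as follows (an assumption-laden illustration, not the authors' code): pad the image so both sides are multiples of the patch size, then tile it into 1024 × 1024 sub-images.

```python
import numpy as np

def to_patches(img, patch=1024):
    """Pad an H×W(×C) image so both sides are multiples of `patch`,
    then split it into non-overlapping patch×patch tiles."""
    h, w = img.shape[:2]
    ph = (patch - h % patch) % patch
    pw = (patch - w % patch) % patch
    pad = [(0, ph), (0, pw)] + [(0, 0)] * (img.ndim - 2)
    img = np.pad(img, pad, mode="reflect")
    tiles = []
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            tiles.append(img[y:y + patch, x:x + patch])
    return tiles

# A 4928 × 3264 image yields ceil(3264/1024) * ceil(4928/1024) = 4 * 5 = 20 tiles
tiles = to_patches(np.zeros((3264, 4928, 3), dtype=np.uint8))
```

Each tile would then be fed to the stage-one classifier, and only crack-positive tiles passed to the segmentation stage.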

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.221-235
    • /
    • 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgement of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework first detects the existence of cracks using a deep convolutional neural network (CNN) and then segments the cracks using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to reduce false positives and false negatives effectively. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch, using only the crack patches, for segmentation. A customized loss function consisting of the binary cross-entropy loss and the Dice loss is introduced to enhance segmentation performance. Additionally, a Naïve Bayes fusion strategy integrates the crack score maps from the different overlapping patches to decide whether each pixel belongs to a crack. Comprehensive experiments demonstrate that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, 7.29% higher than a baseline implemented with the original U-Net.
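The customized loss combines binary cross-entropy with the Dice loss. A minimal sketch of such a combined loss on probability maps (the weights `w_bce`/`w_dice` are assumptions; the paper's exact weighting is not given in the abstract):

```python
import numpy as np

def bce_dice_loss(pred, target, w_bce=0.5, w_dice=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss.
    `pred` holds sigmoid outputs in (0, 1); `target` is a binary mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Pixel-wise binary cross-entropy
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|)
    inter = (pred * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    return w_bce * bce + w_dice * dice
```

The Dice term counteracts the extreme class imbalance of thin cracks, which plain cross-entropy handles poorly.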

Liver Segmentation using Multi-dilated U-Net (다중 확장된 컨볼루션 U-Net 을 사용한 간 영역 분할)

  • Sinha, Shrutika;Oh, Kanghan;Boud, Fatima;Jeong, Hwan-Jeong;Oh, Il-Seok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.11a
    • /
    • pp.1036-1038
    • /
    • 2020
  • This paper proposes a novel automated liver segmentation method using multi-dilated U-Nets. The proposed multi-dilation segmentation model has the advantage of considering both local and global shapes of the liver image. The CT images are processed subject-wise: each subject's 2D slices are stacked into a 3D volume to compute the IoU and Dice scores. The experimental results on the Jeonbuk National University Hospital dataset show better performance than the conventional U-Net.
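The subject-wise evaluation described above can be sketched as follows (an illustrative reconstruction, not the authors' code): stack the per-slice 2D masks into a 3D volume and score IoU and Dice over the whole volume.

```python
import numpy as np

def volume_scores(slices_pred, slices_true):
    """Stack per-slice 2D masks into a 3D volume (subject-wise) and
    compute IoU and Dice over the whole volume."""
    p = np.stack(slices_pred).astype(bool)
    t = np.stack(slices_true).astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p.sum() + t.sum()) if (p.sum() + t.sum()) else 1.0
    return iou, dice
```

Scoring the stacked volume, rather than averaging per-slice scores, keeps slices with little or no liver from dominating the metric.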

Deep Learning for Weeds' Growth Point Detection based on U-Net

  • Arsa, Dewa Made Sri;Lee, Jonghoon;Won, Okjae;Kim, Hyongsuk
    • Smart Media Journal
    • /
    • v.11 no.7
    • /
    • pp.94-103
    • /
    • 2022
  • Weeds are detrimental because they can damage crops, so a clean treatment with less pollution and contamination should be developed. Artificial intelligence gives new hope to agriculture for achieving smart farming. This study presents automated detection of weeds' growth points using deep learning. It proposes a combination of semantic graphics for generating data annotations and a U-Net with a pre-trained deep network as its backbone for locating the growth points of weeds in a given field scene. The dataset was collected from an actual field. Intersection over union (IoU), F1-score, precision, and recall were measured to evaluate the method. MobileNet V2 was chosen as the backbone and compared with ResNet 34. The results showed that the proposed method is accurate enough to detect growth points and to handle brightness variation. The best performance was achieved with MobileNet V2 as the backbone: IoU 96.81%, precision 97.77%, recall 98.97%, and F1-score 97.30%.
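The abstract does not describe how point locations are read off the network output; as one illustrative post-processing sketch (not necessarily the authors' method), a predicted heatmap can be reduced to candidate growth points by taking the intensity-weighted centroid of each connected above-threshold region:

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def growth_points(heatmap, thresh=0.5):
    """Extract candidate growth points as intensity-weighted centroids
    of connected above-threshold regions in an output heatmap."""
    mask = heatmap >= thresh
    lbl, n = label(mask)          # label connected components
    return [center_of_mass(heatmap, lbl, i) for i in range(1, n + 1)]
```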

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the growing use of computer vision for satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we built a U-Net cloud detection model using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) Cloud Dataset with image data augmentation and carried out 10-fold cross-validation for an objective assessment of the model. In a blind test on 1,800 datasets of 512 × 512 pixels, the model showed relatively high performance: accuracy of 0.821, precision of 0.847, recall of 0.821, F1-score of 0.831, and IoU (intersection over union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land, and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, the state-of-the-art DeepLab V3+ model and the NAS (neural architecture search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
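The 10-fold cross-validation used above can be sketched with a plain index splitter (a generic illustration, not the study's pipeline): shuffle the sample indices, cut them into 10 folds, and let each fold serve once as the validation set.

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train, val) index arrays for k-fold cross-validation:
    each of the k folds is the validation set exactly once."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```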

Block Shear Rupture and Shear Lag of Single angle in Tension Joint -Single angle with three or four bolt connection- (단일 ㄱ형강의 블록전단 파단 및 전단지체 현상 -고력볼트 3개 또는 4개로 접합된 단일 ㄱ형강-)

  • Lee, Hyang Ha;Shim, Hyun Ju;Lee, Eun Taik
    • Journal of Korean Society of Steel Construction
    • /
    • v.16 no.5 s.72
    • /
    • pp.565-574
    • /
    • 2004
  • The purpose of this paper was to investigate block shear and fracture in the net section, according to the AISC specifications, by analyzing the shear lag effect in the block shear rupture of single angles with three- or four-bolt connections. Specimens with three- or four-bolt connections showed that failure generally progressed from block shear with some net section failures to classic net section failures. From the test results, the effects of connection length, angle thickness, and the reduction factor on block shear rupture were investigated. The test results suggest that the net section rupture capacity should be calculated using the reduction factor U suggested by Kulak.
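As a hedged sketch of the two limit states discussed above (the paper's own equations are not reproduced in the abstract, and it references the AISC provisions of its time), the familiar forms are the Munse-Chesson/Kulak shear-lag reduction U = 1 - x̄/L for net section rupture and the AISC 360-style block shear check:

```python
def net_section_capacity(Fu, An, x_bar, L):
    """Net-section rupture capacity with the shear-lag reduction factor
    U = 1 - x_bar / L, where x_bar is the connection eccentricity and
    L the connection length (Munse-Chesson / Kulak form)."""
    U = 1.0 - x_bar / L
    return Fu * U * An

def block_shear_capacity(Fy, Fu, Agv, Anv, Ant, Ubs=1.0):
    """Block shear strength, AISC 360 Eq. J4-5 style (sketch):
    shear rupture + tension rupture, capped by shear yielding +
    tension rupture on the gross shear area."""
    return min(0.6 * Fu * Anv + Ubs * Fu * Ant,
               0.6 * Fy * Agv + Ubs * Fu * Ant)
```

Units are consistent force/area units (e.g., MPa and mm² giving N); resistance factors are omitted.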

An Automatic Breast Mass Segmentation based on Deep Learning on Mammogram (유방 영상에서 딥러닝 기반의 유방 종괴 자동 분할 연구)

  • Kwon, So Yoon;Kim, Young Jae;Kim, Gwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.12
    • /
    • pp.1363-1369
    • /
    • 2018
  • Breast cancer is one of the most common cancers in women worldwide. In Korea, breast cancer is the most common cancer in women, followed by thyroid cancer. The purpose of this study is to evaluate the possibility of using deep learning models for segmentation of breast masses and to identify the best deep learning model for breast mass segmentation. In this study, data from patients with breast masses were collected at Asan Medical Center: 596 mammography images and 596 gold-standard images. Each region of interest in the medical image was cropped as a rectangle with a margin of about 10%, then converted into an 8-bit image by adjusting the window width and level, and resampled to 150 × 150 pixels. With Deconvolution Net, the average accuracy was 91.78%; with U-Net, it was 90.09%. Deconvolution Net showed slightly better performance than U-Net in this study, so it is expected to be better suited for breast mass segmentation. However, because of the small number of cases, a few images were not accurately segmented; further research with more varied training data is therefore needed.
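The preprocessing steps above (ROI crop with ~10% margin, window/level conversion to 8-bit, resampling to 150 × 150) can be sketched as follows; the window level/width defaults and the nearest-neighbour resampling are assumptions for illustration, not the study's exact settings:

```python
import numpy as np

def preprocess_roi(img, box, margin=0.10, wl=2048, ww=4096, out=150):
    """Crop a mammogram ROI with ~10% margin, map it to 8-bit with a
    window level/width transform, and nearest-neighbour resample to
    out×out pixels. `box` is (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    my = int((y1 - y0) * margin)
    mx = int((x1 - x0) * margin)
    y0, y1 = max(0, y0 - my), min(img.shape[0], y1 + my)
    x0, x1 = max(0, x0 - mx), min(img.shape[1], x1 + mx)
    roi = img[y0:y1, x0:x1].astype(float)
    lo, hi = wl - ww / 2, wl + ww / 2          # window level/width bounds
    roi8 = np.clip((roi - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    ys = np.linspace(0, roi8.shape[0] - 1, out).astype(int)
    xs = np.linspace(0, roi8.shape[1] - 1, out).astype(int)
    return roi8[np.ix_(ys, xs)]                # nearest-neighbour resample
```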

Enhanced CNN Model for Brain Tumor Classification

  • Kasukurthi, Aravinda;Paleti, Lakshmikanth;Brahmaiah, Madamanchi;Sree, Ch.Sudha
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.143-148
    • /
    • 2022
  • Brain tumor classification is an important process that allows doctors to plan treatment for patients based on the stages of the tumor. To improve classification performance, various CNN-based architectures are used for brain tumor classification. Existing methods for brain tumor segmentation suffer from overfitting and poor efficiency when dealing with large datasets. The enhanced CNN architecture proposed in this study is based on U-Net for brain tumor segmentation, RefineNet for pattern analysis, and SegNet architecture for brain tumor classification. The brain tumor benchmark dataset was used to evaluate the enhanced CNN model's efficiency. Based on the local and context information of the MRI image, the U-Net provides good segmentation. SegNet selects the most important features for classification while also reducing the trainable parameters. In the classification of brain tumors, the enhanced CNN method outperforms the existing methods. The enhanced CNN model has an accuracy of 96.85 percent, while the existing CNN with transfer learning has an accuracy of 94.82 percent.