• Title/Summary/Keyword: Pixel Classification (픽셀분류)


Semantic Image Segmentation Combining Image-level and Pixel-level Classification (영상수준과 픽셀수준 분류를 결합한 영상 의미분할)

  • Kim, Seon Kuk; Lee, Chil Woo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1425-1430 / 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. To improve segmentation accuracy, we combine pixel-level object classification with image-level object classification: the image-level classification captures the overall characteristics of the image, while the pixel-level classification indicates which object region each pixel belongs to. The proposed network consists of three parts: a part that extracts image features, a part that produces the final output at the resolution of the original image, and a part that performs the image-level object classification. Separate loss functions are defined for the image-level and pixel-level tasks: image-level classification uses KL-divergence and pixel-level classification uses cross-entropy. In addition, feature-extraction layers are combined with the decoding layers of matching resolution so that the positional information and object-boundary detail lost through the pooling operations are recovered.
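
The following is a minimal sketch, assuming PyTorch and hypothetical tensor names, of how a pixel-level cross-entropy loss and an image-level KL-divergence loss can be combined into a single training objective as the abstract describes; the weighting factor alpha is an illustrative assumption, not a value from the paper.

```python
# Sketch only: combined image-level (KL-divergence) and pixel-level (cross-entropy) loss.
import torch.nn.functional as F

def combined_loss(pixel_logits, image_logits, pixel_labels, image_label_dist, alpha=0.5):
    """pixel_logits: (B, C, H, W); image_logits: (B, C);
    pixel_labels: (B, H, W) class indices; image_label_dist: (B, C) target distribution."""
    # Pixel-level term: standard cross-entropy over every pixel.
    pixel_loss = F.cross_entropy(pixel_logits, pixel_labels)
    # Image-level term: KL divergence between predicted log-probabilities and the target distribution.
    image_loss = F.kl_div(F.log_softmax(image_logits, dim=1),
                          image_label_dist, reduction="batchmean")
    return pixel_loss + alpha * image_loss
```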

Cataract Extraction and Analysis of Pet Image by Using Enhanced FCM (개선된 FCM 기법을 이용한 애견 영상에서의 백내장 추출 및 분석)

  • Lee, Jae-min; Kim, Min-Seok; Yu, Seung-Won; Lee, Hae-Ill; Kim, Kwang Beak
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.524-526 / 2016
  • In this paper, we propose a method for extracting cataract regions that dynamically determines the number of clusters by varying the cluster count and selecting the count with the smallest accumulated rate of change. The proposed cataract extraction method first sets a region of interest (ROI) in the dog eye image. A fuzzy stretching technique is applied to the ROI to adjust the upper and lower bounds of the pixel values. The FCM algorithm is then applied to the stretched ROI, the cluster count with the smallest rate of change in the center values of the cluster membership matrix is chosen as the optimal number of clusters, and the ROI is quantized with it. Erosion and dilation are applied to the quantized ROI, and objects whose area is less than 1/5 of the ROI area are regarded as noise and removed. From the denoised ROI, the eye object whose area is at least 3/5 of the ROI area is extracted as the cataract region. Experiments on dog eye images show that the proposed cluster-count-selection FCM quantization takes less processing time than quantization with conventional FCM and classifies the objects accurately.
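
Below is a minimal sketch, assuming the scikit-fuzzy package and a 1-D array of grayscale ROI pixel values, of choosing an FCM cluster count by how little the clustering keeps changing; the change-in-objective criterion used here is only a stand-in for the paper's center-value change-rate criterion.

```python
# Sketch only: pick the FCM cluster count with the smallest accumulated change.
import numpy as np
import skfuzzy as fuzz

def select_cluster_count(pixels, candidate_counts=range(2, 8)):
    """pixels: 1-D array of grayscale ROI pixel values."""
    data = pixels.reshape(1, -1).astype(float)   # skfuzzy expects (features, samples)
    best_c, best_change = None, np.inf
    for c in candidate_counts:
        cntr, u, u0, d, jm, p, fpc = fuzz.cluster.cmeans(
            data, c=c, m=2.0, error=1e-4, maxiter=100, seed=0)
        # Accumulated change of the objective function across iterations,
        # used here as a proxy for how much the cluster centers keep moving.
        change = np.abs(np.diff(jm)).sum()
        if change < best_change:
            best_c, best_change = c, change
    return best_c
```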


Problems of Implant Procedure and Medical Disputes (임플란트 시술의 문제점과 의료분쟁)

  • Lee, Tae-Hui; Song, Young-Ji
    • The Korean Society of Law and Medicine / v.17 no.1 / pp.281-297 / 2016
  • In order to make a treatment plan and predict outcomes, it is important to evaluate the osseous tissues of the implant site accurately and objectively. The evaluation of osseous tissues is the most objective basis for deciding when to place the upper structure on the alveolar bone. However, such evaluation is contradictory because it is made from the subjective opinions of dental surgeons, and many dentists point out this problem of subjective bone evaluation. It is therefore necessary to create accurate and objective standards. Previously, the evaluation of bone density depended on the dentist's subjective tactile sensation during the drilling step of the implant procedure. Now, however, the HU (Hounsfield unit) values of CT (computed tomography) scans allow objective and precise categorization of bone density. Misch and Kircos divided bone density into levels D1 through D5, but their method likewise relied on subjective separation by sensation rather than on objective, quantified data. We therefore need to evaluate the implant site through comparative analysis of more objective, quantified data. Implant treatment accounts for the highest frequency of medical disputes in dental clinics. If objective examination and reasonable treatment methods are introduced into implant treatment, more reasonable results can be obtained and the failure rate of implant treatment can decrease. The ultimate objective of this study is to minimize disputes between dental patients and dentists by creating new legal standards based on objective, quantified data.
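
As an illustration only, the sketch below maps a CT Hounsfield-unit value to a bone-density class; the D1-D5 thresholds are commonly cited approximations of the Misch classification and are not taken from this paper.

```python
# Sketch only: illustrative HU-to-bone-density mapping (approximate Misch ranges).
def bone_density_class(hu: float) -> str:
    if hu > 1250:
        return "D1"   # dense cortical bone
    elif hu > 850:
        return "D2"
    elif hu > 350:
        return "D3"
    elif hu > 150:
        return "D4"
    return "D5"       # very low-density bone

print(bone_density_class(900))  # -> "D2"
```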


The Performance Improvement of U-Net Model for Landcover Semantic Segmentation through Data Augmentation (데이터 확장을 통한 토지피복분류 U-Net 모델의 성능 개선)

  • Baek, Won-Kyung; Lee, Moung-Jin; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1663-1676 / 2022
  • Recently, a number of deep-learning based land cover segmentation studies have been introduced. Some studies noted that land cover segmentation performance deteriorated because of insufficient training data. In this study, we verified the improvement of land cover segmentation performance through data augmentation. U-Net was implemented as the segmentation model, and a 2020 satellite-derived land cover dataset was used as the study data. The pixel accuracies were 0.905 and 0.923 for U-Net trained on the original and the augmented data, respectively, and the mean F1 scores of those models were 0.720 and 0.775, indicating the better performance of data augmentation. In addition, the F1 scores for the building, road, paddy field, upland field, forest, and unclassified area classes were 0.770, 0.568, 0.433, 0.455, 0.964, and 0.830 for the U-Net trained on the original data. Data augmentation proved effective in that the F1 scores of every class improved, to 0.838, 0.660, 0.791, 0.530, 0.969, and 0.860, respectively. Although we applied data augmentation without considering class balance, the comparison between the two models shows that data augmentation can mitigate the biased segmentation performance caused by data imbalance. This study is expected to help demonstrate the importance and effectiveness of data augmentation in various image processing fields.
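
A minimal sketch of the kind of geometric augmentation such a study might use, assuming NumPy arrays for the image and its label mask; the key point is that the same transform must be applied to both so pixel-level labels stay aligned. It is not the authors' pipeline.

```python
# Sketch only: paired geometric augmentation for semantic segmentation.
import numpy as np

def augment_pair(image, mask, rng=None):
    """image: (H, W, C) array; mask: (H, W) class-index array."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))                  # random 90-degree rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()
```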

A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho; Park, Sung-Hwan; Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.6_1 / pp.931-946 / 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate the feasibility of applying them to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June of 2015, 2016, and 2017, covering the rural areas of Jeollabuk-do, where 46% of the area is agricultural. The soil moisture condition at each date in the study area can be effectively characterized by the SPI3 (3-month Standardized Precipitation Index) drought index; the three images correspond to near normal, moderately wet, and moderately dry conditions. The Temperature Vegetation Dryness Index (TVDI) was calculated to observe soil moisture status from the Landsat-8 OLI/TIRS images under these different conditions and to compare it against the soil moisture conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS images. The maximum and minimum LST values as functions of NDVI are extracted from the distribution of pixels in the LST-NDVI feature space, and the dry and wet edges of LST versus NDVI are determined by linear regression. The TVDI value is obtained as the ratio of the LST value between the two edges. We classified the relative soil moisture conditions from the TVDI values into five stages, very wet, wet, normal, dry, and very dry, and compared them to the soil moisture conditions obtained from SPI3. Because May to June is the rice-planting season, 62% of each image was classified as wet or very wet due to paddy fields, which occupy the largest proportion of the image, and the pixels classified as normal were attributed to the influence of upland field areas. The TVDI classification for the whole image roughly corresponded to the SPI3 soil moisture condition, but did not correspond in the subdivided classes of very dry, wet, and very wet. In addition, after extracting and separating the agricultural areas into paddy fields and upland fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the upland field areas did not correspond in the normal class. This is considered to result from problems in dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy fields, water, clouds, and mountain topography effects (shadow). However, for agricultural areas, especially upland fields, from May to June it was possible to observe subdivided soil moisture conditions effectively. It is expected that this method can be applied by monitoring the temporal and spatial changes of soil moisture status in agricultural areas with high-spatial-resolution optical satellites and by forecasting agricultural production.
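
A minimal sketch, assuming co-registered 2-D LST and NDVI arrays, of the TVDI computation described above: the dry and wet edges are fitted by linear regression of the maximum and minimum LST within NDVI bins, and TVDI is the position of each pixel's LST between the two edges. The bin count and the minimum-pixel threshold are illustrative assumptions.

```python
# Sketch only: TVDI from LST-NDVI feature space with regressed dry/wet edges.
import numpy as np

def compute_tvdi(lst, ndvi, n_bins=50):
    valid = np.isfinite(lst) & np.isfinite(ndvi)
    bins = np.linspace(ndvi[valid].min(), ndvi[valid].max(), n_bins + 1)
    centers, lst_max, lst_min = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = valid & (ndvi >= lo) & (ndvi < hi)
        if m.sum() > 10:                        # skip nearly empty bins
            centers.append((lo + hi) / 2)
            lst_max.append(lst[m].max())        # dry-edge candidates
            lst_min.append(lst[m].min())        # wet-edge candidates
    centers = np.asarray(centers)
    a_d, b_d = np.polyfit(centers, lst_max, 1)  # dry edge: LST = a_d*NDVI + b_d
    a_w, b_w = np.polyfit(centers, lst_min, 1)  # wet edge: LST = a_w*NDVI + b_w
    lst_dry = a_d * ndvi + b_d
    lst_wet = a_w * ndvi + b_w
    return np.clip((lst - lst_wet) / (lst_dry - lst_wet), 0, 1)
```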

Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery (2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석)

  • Chung, Minkyung; Kim, Yongil
    • Korean Journal of Remote Sensing / v.36 no.2_1 / pp.179-197 / 2020
  • Topographic normalization reduces terrain effects on reflectance by adjusting the brightness values of image pixels so that they are equal when the pixels cover the same land cover. Topographic effects are induced by the imaging conditions and tend to be large in highly mountainous regions. Therefore, image analysis over mountainous terrain, such as wildfire damage assessment, requires appropriate topographic normalization techniques to yield accurate results. However, most previous studies focused on evaluating topographic normalization for satellite images with moderate to low spatial resolution, so the alleviation of topographic effects in multi-temporal high-resolution images has not been addressed sufficiently. In this study, topographic normalization was evaluated for each band to select the optimal combination of techniques for rapid and accurate wildfire damage assessment using PlanetScope images. PlanetScope has considerable potential in the disaster management field, as it enables rapid image acquisition by providing daily 3 m resolution imagery with global coverage. For the comparison of topographic normalization techniques, seven widely used methods were applied to both pre-fire and post-fire images. The analysis of the bi-temporal images suggests an optimal combination of techniques that can be applied to images with different land-cover composition. The vegetation index was then calculated from the images after topographic normalization with the proposed combination. The wildfire damage detection results, obtained by thresholding the index, showed improvements in detection accuracy for both object-based and pixel-based image analysis. In addition, a burn severity map was constructed to verify the effect of topographic correction on a continuous distribution of brightness values.
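
As one example of the family of techniques compared in this work, the sketch below shows the widely used C-correction applied to a single band, assuming NumPy arrays for the band and for the cosine of the local solar incidence angle; it is only an illustration, not the combination the authors selected.

```python
# Sketch only: C-correction topographic normalization of one band.
import numpy as np

def c_correction(band, cos_i, solar_zenith_deg):
    """band: reflectance array; cos_i: cosine of the local solar incidence angle
    (from slope, aspect, and solar geometry); solar_zenith_deg: scalar."""
    cos_sz = np.cos(np.deg2rad(solar_zenith_deg))
    # Fit band = m * cos_i + b over valid pixels; the C parameter is b / m.
    valid = np.isfinite(band) & np.isfinite(cos_i)
    m, b = np.polyfit(cos_i[valid].ravel(), band[valid].ravel(), 1)
    c = b / m
    return band * (cos_sz + c) / (cos_i + c)
```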

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han; Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.141-148 / 2021
  • This paper presents a method to detect zebra-crossings using deep learning that combines SegNet and ResNet. For the blind, a safe crossing system that knows exactly where zebra-crossings are is important. Zebra-crossing detection by deep learning can be a good solution to this problem, and robotic vision-based assistive technologies have sprung up over the past few years, focusing on specific scene objects using monocular detectors. These traditional methods have achieved significant results, with relatively long processing times, and have enhanced zebra-crossing perception to a large extent. However, running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model is built on a combination of SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional neural network of ResNet is modified to serve as the new encoder. Second, through SegNet's original up-sampling network, the abstract features are restored to the original image size. Finally, the method classifies all pixels and computes per-pixel accuracy. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
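
A minimal sketch, assuming PyTorch and torchvision, of the general idea described above: a ResNet backbone replaces the SegNet encoder and an up-sampling decoder restores the input resolution. The channel widths and number of decoder stages are illustrative assumptions, not the authors' architecture.

```python
# Sketch only: ResNet encoder with a SegNet-style up-sampling decoder.
import torch.nn as nn
from torchvision.models import resnet18

class ResNetSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Encoder: ResNet layers up to the last residual block (output stride 32, 512 channels).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Decoder: five up-sampling stages back to the input resolution.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, num_classes, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```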

Detection of Wildfire Smoke Plumes Using GEMS Images and Machine Learning (GEMS 영상과 기계학습을 이용한 산불 연기 탐지)

  • Jeong, Yemin; Kim, Seoyeon; Kim, Seung-Yeon; Yu, Jeong-Ah; Lee, Dong-Won; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.967-977 / 2022
  • The occurrence and intensity of wildfires are increasing with climate change. Emissions from forest fire smoke are recognized as one of the major factors affecting air quality and the greenhouse effect. The use of satellite products and machine learning is essential for detecting forest fire smoke. Until now, research on forest fire smoke detection has been hampered by difficulties in cloud identification and by vague standards for smoke boundaries. The purpose of this study is to detect forest fire smoke using Level 1 and Level 2 data of the Geostationary Environment Monitoring Spectrometer (GEMS), a Korean environmental satellite sensor, and machine learning. The forest fire in Gangwon-do in March 2022 was selected as a case study. Smoke-pixel classification models were built by producing wildfire smoke label images and feeding GEMS Level 1 and Level 2 data into a random forest model. In the trained model, the input variables in order of importance were Aerosol Optical Depth (AOD), the difference between the 380 nm and 340 nm radiances, Ultra-Violet Aerosol Index (UVAI), Visible Aerosol Index (VisAI), Single Scattering Albedo (SSA), formaldehyde (HCHO), nitrogen dioxide (NO2), 380 nm radiance, and 340 nm radiance. In addition, in the estimation of the forest fire smoke probability (0 ≤ p ≤ 1) for 2,704 pixels, the Mean Bias Error (MBE) was -0.002, the Mean Absolute Error (MAE) was 0.026, the Root Mean Square Error (RMSE) was 0.087, and the Correlation Coefficient (CC) was 0.981.
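
A minimal sketch, assuming scikit-learn and hypothetical pre-built feature and label arrays, of training a random forest to classify smoke pixels from the nine input variables named above and reading off their importances; the file names and column order are assumptions for illustration only.

```python
# Sketch only: random-forest smoke-pixel classification and variable importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

features = ["AOD", "rad380_minus_rad340", "UVAI", "VisAI",
            "SSA", "HCHO", "NO2", "rad380", "rad340"]      # assumed column order
X = np.load("gems_pixel_features.npy")   # (n_pixels, 9), hypothetical file
y = np.load("smoke_labels.npy")          # (n_pixels,) 0 = no smoke, 1 = smoke

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X, y)

# Per-pixel smoke probability and variable-importance ranking.
smoke_prob = model.predict_proba(X)[:, 1]
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```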

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho; Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data to improve defect detection. Defects on railroad surfaces are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image processing methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles to acquire images of the track surface at regular intervals in order to build a database of diverse railway surface images. In contrast, in this study, to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies on Generative Adversarial Networks (GAN). We thus aimed to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface while respecting the ground truth of the railroad defects. The generated railroad surface images were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, only the original texture images were used as the training set for the defect detection model. In the second experiment, we trained on images generated by combining the original images with a few railroad textures from the other subsets. Each defect detection model was evaluated against ground truth in terms of intersection over union (IoU) and the F1-score. As a result, the scores increased by about 10-15% when the generated images were used, compared to the case where only the original images were used. This shows that it is possible to detect defects using the existing data and a few additional texture images, even for railroad surface images for which no dedicated training database has been constructed.
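
A minimal sketch (not the authors' evaluation code) of the IoU and F1-score measures used to compare predicted binary defect masks against ground truth.

```python
# Sketch only: IoU and F1-score for binary defect masks.
import numpy as np

def iou_and_f1(pred, truth):
    """pred, truth: arrays of the same shape, True/1 = defect pixel."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return iou, f1
```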

Motion Recognitions Based on Local Basis Images Using Independent Component Analysis (독립성분분석을 이용한 국부기저영상 기반 동작인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.617-623 / 2008
  • This paper presents a human motion recognition method using both centroid shift and local basis images. A centroid shift based on the first-moment balance technique is applied to obtain motion images that are robust against changes in position or size, and the extraction of local basis images based on independent component analysis (ICA) is applied to find a set of statistically independent motion features contained in each motion. In particular, a fixed-point (FP) ICA algorithm based on Newton's method is used to extract the local basis images of the motions quickly. The proposed method has been applied to the problem of recognizing 160 (1 person * 10 animals * 16 motions) sign language motion images of 240*215 pixels. Three distances, city-block, Euclidean, and negative angle, are used as measures when matching the probe images to the nearest gallery images. The experimental results show that the proposed method has superior recognition performance (speed and rate) compared with the method using local eigen-images and the method using local basis images without centroid shift, respectively.
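
A minimal sketch (not the paper's code) of nearest-gallery matching with the three distance measures named above, assuming 1-D feature vectors extracted from the local basis images.

```python
# Sketch only: probe-to-gallery matching with city-block, Euclidean, and negative-angle distances.
import numpy as np

def city_block(a, b):
    return np.abs(a - b).sum()

def euclidean(a, b):
    return np.linalg.norm(a - b)

def negative_angle(a, b):
    # More negative when the vectors point in the same direction.
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match(probe, gallery, metric=euclidean):
    """probe: (D,) feature vector; gallery: (N, D) array. Returns the index of the nearest gallery vector."""
    distances = [metric(probe, g) for g in gallery]
    return int(np.argmin(distances))
```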