• Title/Summary/Keyword: pixel based classification

Deep Learning Models for Fabric Image Defect Detection: Experiments with Transformer-based Image Segmentation Models (직물 이미지 결함 탐지를 위한 딥러닝 기술 연구: 트랜스포머 기반 이미지 세그멘테이션 모델 실험)

  • Lee, Hyun Sang;Ha, Sung Ho;Oh, Se Hwan
    • The Journal of Information Systems / v.32 no.4 / pp.149-162 / 2023
  • Purpose: In the textile industry, fabric defects significantly impact product quality and consumer satisfaction. This research seeks to enhance defect detection by developing a transformer-based deep learning image segmentation model that learns high-dimensional image features, overcoming the limitations of traditional image classification methods. Design/methodology/approach: This study utilizes the ZJU-Leaper dataset, which includes defects such as presses, stains, warps, and scratches across various fabric patterns, to develop a model for detecting defects in fabrics. The dataset was built from the ZJU-Leaper defect labels and image files, and experiments were conducted with deep learning image segmentation models including DeepLabv3, SegFormer-B0, SegFormer-B1, and DINOv2. Findings: The experimental results indicate that the SegFormer-B1 model achieved the highest performance, with an mIoU of 83.61% and a pixel F1 score of 81.84%. The SegFormer-B1 model excelled in sensitivity for detecting fabric defect areas compared with the other models, and a detailed analysis of its inferences showed accurate predictions of diverse defects, such as stains and fine scratches, within intricate fabric designs.
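
The two reported metrics, mIoU and pixel F1, can be reproduced from predicted and ground-truth masks; below is a minimal NumPy sketch (the function name and the assumption that label 1 marks defect pixels are illustrative, not from the paper).

```python
import numpy as np

def miou_and_pixel_f1(pred, gt, num_classes=2):
    """Mean IoU over classes and pixel-level F1 for the positive class.

    pred, gt: integer label masks of equal shape; label 1 is assumed
    to mark defect pixels (an assumption for this sketch).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt != 1).sum()
    fn = np.logical_and(pred != 1, gt == 1).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return float(np.mean(ious)), float(f1)
```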

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference / 2002.10a / pp.65-65 / 2002
  • In remote sensing, images are acquired over the same area by sensors with different spectral ranges (from the visible to the microwave) and/or with a different number, position, and width of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data as a pixel-level data fusion scheme is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence can make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. To model and eliminate the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be far less reliable than the others and could even degrade the classification results; such unreliable data sets should be excluded from the analysis. To account for this, the self information variation is used to measure degrees of reliability. A team of positively dependent bands can jointly gather more information than a team of independent ones, but when bands are negatively dependent, their combined analysis may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a decision-level data fusion scheme is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates being considered for merging. In the first level, the image is partitioned into regions, each a set of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
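
The abstract does not give closed forms for the self and conditional information variation measures, so the sketch below substitutes standard stand-ins: Shannon entropy for per-band reliability and mutual information for inter-band dependence. Thresholds and function names are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def quantize(band, bins=64):
    """Quantize a band to integer levels so discrete measures apply."""
    edges = np.histogram_bin_edges(band, bins=bins)
    return np.digitize(band, edges[1:-1])

def self_entropy(q):
    """Shannon entropy (bits) of a quantized band; a stand-in for the
    paper's self information variation (reliability)."""
    p = np.bincount(q.ravel()) / q.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def dependence(q1, q2):
    """Mutual information between two bands; a stand-in for the paper's
    conditional information variation (dependence)."""
    return mutual_info_score(q1.ravel(), q2.ravel())

def group_bands(bands, min_entropy=1.0, min_mi=0.1):
    """Drop unreliable bands, then group mutually dependent ones so each
    subset can be classified separately and fused at decision level."""
    qs = [quantize(b) for b in bands]
    keep = [i for i, q in enumerate(qs) if self_entropy(q) >= min_entropy]
    groups = []
    for i in keep:
        for g in groups:
            if any(dependence(qs[i], qs[j]) >= min_mi for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```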

Deep Learning-based Hyperspectral Image Classification with Application to Environmental Geographic Information Systems (딥러닝 기반의 초분광영상 분류를 사용한 환경공간정보시스템 활용)

  • Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1061-1073 / 2017
  • In this study, images were classified using a convolutional neural network (CNN), a deep learning technique, to investigate the feasibility of information production through a combination of artificial intelligence and spatial data. A CNN determines kernel attributes based on a classification criterion and extracts information from feature maps to classify each pixel. In this study, a CNN was constructed to classify materials with similar spectral characteristics and attribute information, which is difficult to achieve with conventional image processing techniques. A Compact Airborne Spectrographic Imager (CASI) and an Airborne Imaging Spectrometer for Application (AISA) were used at three study sites to test this method. Sites 1 and 2 were agricultural lands covered in various crops, such as potato, onion, and rice. Site 3 included different buildings, such as single and joint residential facilities. Results indicated that the classification of crop species at Sites 1 and 2 using this method yielded accuracies of 96% and 99%, respectively. At Site 3, the designation of buildings according to their purpose yielded an accuracy of 96%. Using a combination of existing land cover maps and spatial data, we propose a thematic environmental map that provides seasonal crop types and facilitates the creation of a land cover map.
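
A patch-based CNN of the kind described, producing one label per pixel from a small spectral patch around it, might look like the PyTorch sketch below; the layer sizes, patch size, and band count are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class PixelPatchCNN(nn.Module):
    """Classifies the center pixel of a small hyperspectral patch."""
    def __init__(self, n_bands, n_classes, patch=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * patch * patch, n_classes)

    def forward(self, x):                  # x: (N, n_bands, patch, patch)
        return self.head(self.features(x).flatten(1))

# One logit vector per pixel-centered patch (band count is hypothetical)
model = PixelPatchCNN(n_bands=127, n_classes=5)
logits = model(torch.randn(8, 127, 5, 5))  # -> shape (8, 5)
```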

Vegetation Cover Type Mapping Over The Korean Peninsula Using Multitemporal AVHRR Data (시계열(時系列) AVHRR 위성자료(衛星資料)를 이용한 한반도 식생분포(植生分布) 구분(區分))

  • Lee, Kyu-Sung
    • Journal of Korean Society of Forest Science / v.83 no.4 / pp.441-449 / 1994
  • The two reflective channels (red and near-infrared spectrum) of advanced very high resolution radiometer (AVHRR) data were used to classify primary vegetation cover types on the Korean Peninsula. From the NOAA-11 satellite data archive of 1991, 27 daytime scenes with relatively minimal cloud coverage were obtained. After initial radiometric calibration, the normalized difference vegetation index (NDVI) was calculated for each of the 27 data sets. Four or five daily NDVI data sets were then overlaid for each of six months between February and November, and the maximum NDVI value was retained at every pixel location to make a monthly composite. The six bands of monthly NDVI composites were nearly cloud free and were used for computer classification of vegetation cover. Based on the temporal signatures of the different vegetation cover types, which were generated by an unsupervised block clustering algorithm, every pixel was classified into one of six cover type categories. The classification result was evaluated both by qualitative interpretation and by quantitative comparison with existing forest statistics. Considering the frequent data acquisition, low data cost and volume, and large area coverage, AVHRR data are believed to be effective for vegetation cover type mapping at the regional scale.
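
The NDVI calculation and per-pixel maximum value compositing are standard operations; a short NumPy sketch (array names are illustrative):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, guarded against /0."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def monthly_max_composite(daily_red, daily_nir):
    """Per-pixel maximum NDVI over one month's daily scenes.

    daily_red, daily_nir: arrays of shape (n_days, rows, cols).
    Taking the maximum suppresses cloud-contaminated (low-NDVI) pixels,
    which is why the monthly composites are nearly cloud free.
    """
    return ndvi(daily_red, daily_nir).max(axis=0)
```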

Estimation of Classification Accuracy of JERS-1 Satellite Imagery according to the Acquisition Method and Size of Training Reference Data (훈련지역의 취득방법 및 규모에 따른 JERS-1위성영상의 토지피복분류 정확도 평가)

  • Ha, Sung-Ryong;Kyoung, Chon-Ku;Park, Sang-Young;Park, Dae-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.1 / pp.27-37 / 2002
  • The classification accuracy of land cover has been considered one of the major issues in estimating pollution loads generated from diffuse land use patterns in a watershed. This research assessed the effects of the acquisition method and sample size of training reference data on the classification accuracy of land cover using imagery acquired by the optical sensor (OPS) on JERS-1. Two data acquisition methods were considered for preparing the training data: the first assigned a land cover type to a specific pixel based on the researcher's subjective ability to discriminate current land use, and the second used an aerial photograph combined with digital maps in a GIS. Three sample sizes, 0.3%, 0.5%, and 1.0% of all pixels, were applied to examine the consistency of the classified land cover with the training data of the corresponding pixels. A maximum likelihood scheme was applied to classify the land use patterns in the JERS-1 imagery. The classification run using the aerial photograph achieved 18% higher consistency with the training data than the run using the researcher's subjective discrimination. Regarding sample size, it is proposed that the training area cover at least 1% of all pixels in the study area in order to reach 95% accuracy for JERS-1 satellite imagery over a typical small-to-medium-size urbanized area.
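
The maximum likelihood scheme assigns each pixel to the class whose fitted Gaussian gives the highest likelihood; a minimal SciPy sketch (training-data handling is simplified, and the dictionary layout is an assumption):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_ml_classifier(samples_by_class):
    """Fit one Gaussian per class from {label: (n_i, n_bands) pixels}."""
    return {c: multivariate_normal(s.mean(axis=0),
                                   np.cov(s, rowvar=False),
                                   allow_singular=True)
            for c, s in samples_by_class.items()}

def classify(models, pixels):
    """pixels: (n, n_bands); returns the most likely label per pixel."""
    labels = list(models)
    loglik = np.stack([models[c].logpdf(pixels) for c in labels], axis=1)
    return np.array(labels)[loglik.argmax(axis=1)]
```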

Classification of Forest Vertical Structure Using Machine Learning Analysis (머신러닝 기법을 이용한 산림의 층위구조 분류)

  • Kwon, Soo-Kyung;Lee, Yong-Suk;Kim, Dae-Seong;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.229-239 / 2019
  • All vegetation communities have a layered structure, known as the 'forest vertical structure.' It is now considered an important indicator for estimating a forest's vitality, diversity, and environmental effects, so the forest vertical structure needs to be surveyed. However, the vertical structure is an internal structure, so it is generally investigated through field surveys, the traditional forest inventory method, which costs a great deal of time and budget. In this study, we therefore propose a useful method for classifying the vertical structure of forests using remote sensing (aerial photographs) and machine learning, which is capable of mining large amounts of data, in order to reduce the time and budget needed for forest vertical structure investigation. We classified the vertical structure with a support vector machine (SVM) using RGB airborne photos and LiDAR (Light Detection and Ranging) DSM (Digital Surface Model) and DTM (Digital Terrain Model) data. The pixel-based accuracy was 66.22% compared with field survey results. The classification accuracy was relatively high for single-layer stands, but classification proved difficult for multi-layer stands. The results of this study are expected to further develop machine learning research on vegetation structure as various vegetation and image data are collected in the future.
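
The pipeline the abstract describes, RGB bands plus LiDAR-derived height fed to an SVM, might be sketched with scikit-learn as below; using the DSM-minus-DTM canopy height as the LiDAR feature is an assumption based on the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def stack_features(rgb, dsm, dtm):
    """Per-pixel features: R, G, B plus canopy height (DSM - DTM)."""
    chm = dsm - dtm                             # canopy height model
    feats = np.dstack([rgb, chm[..., None]])    # (H, W, 4)
    return feats.reshape(-1, feats.shape[-1])   # (H*W, 4)

def train_svm(X, y):
    """X: labeled pixels' features; y: vertical-structure class labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, y)
```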

New Hybrid Approach of CNN and RNN based on Encoder and Decoder (인코더와 디코더에 기반한 합성곱 신경망과 순환 신경망의 새로운 하이브리드 접근법)

  • Jongwoo Woo;Gunwoo Kim;Keunho Choi
    • Information Systems Review / v.25 no.1 / pp.129-143 / 2023
  • In the era of big data, the field of artificial intelligence is showing remarkable growth, and image classification by deep learning in particular has become an important area. Various studies have been conducted to further improve the performance of CNNs, which are widely used in image classification; a representative approach is the Convolutional Recurrent Neural Network (CRNN) algorithm, which combines a CNN for image classification with an RNN for recognizing sequential elements. However, because the inputs to the RNN part of a CRNN are the flattened values extracted by applying convolution and pooling to the image, pixel values at the same position in the image can appear in a different order, which makes it difficult for the RNN to properly learn the spatial arrangement intended in the image. Therefore, this study aims to improve image classification performance by proposing a novel hybrid of CNN and RNN that applies the concepts of encoder and decoder. The effectiveness of the new hybrid method was verified through various experiments. This study has academic implications in that it broadens the applicability of the encoder and decoder concepts, and the proposed method has advantages in model training time and infrastructure cost because it does not significantly increase complexity compared to conventional hybrid methods. It also has practical implications in that it presents the possibility of improving the quality of services in various fields that require accurate image classification.
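
The abstract does not specify the architecture, but the motivation, keeping spatial order when handing CNN features to the RNN, can be illustrated with a PyTorch sketch in which a GRU scans the encoder's feature map in a fixed row-major order (all sizes are assumptions):

```python
import torch
import torch.nn as nn

class CNNEncoderRNNDecoder(nn.Module):
    """CNN encodes the image; a GRU reads the feature map row by row,
    preserving a consistent spatial order instead of flattened pixels."""
    def __init__(self, n_classes, channels=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # halve spatial dims
        )
        self.decoder = nn.GRU(input_size=channels, hidden_size=hidden,
                              batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (N, 3, H, W)
        f = self.encoder(x)                      # (N, C, H/2, W/2)
        n, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(n, h * w, c)  # row-major scan
        _, last = self.decoder(seq)              # (1, N, hidden)
        return self.head(last.squeeze(0))        # (N, n_classes)

logits = CNNEncoderRNNDecoder(n_classes=10)(torch.randn(2, 3, 32, 32))
```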

Determination of Tumor Boundaries on CT Images Using Unsupervised Clustering Algorithm (비교사적 군집화 알고리즘을 이용한 전산화 단층영상의 병소부위 결정에 관한 연구)

  • Lee, Kyung-Hoo;Ji, Young-Hoon;Lee, Dong-Han;Yoo, Seoung-Yul;Cho, Chul-Koo;Kim, Mi-Sook;Yoo, Hyung-Jun;Kwon, Soo-Il;Chun, Jun-Chul
    • Journal of Radiation Protection and Research / v.26 no.2 / pp.59-66 / 2001
  • Determining the spatial location and shape of the tumor boundary is a key issue in fractionated stereotactic radiotherapy (FSRT). We obtained consecutive transaxial plane images from a paraffin phantom and four patients with brain tumors using helical computed tomography (HCT). A K-means classification algorithm was applied to convert the raw pixel values in the CT images into classified average pixel values. The classified images consist of five regions: tumor region (TR), normal region (NR), combination region (CR), uncommitted region (UR), and artifact region (AR). The major concern was how to separate the normal region from the tumor region in the combination area. Relative average deviation analysis was applied to reduce the average pixel values of the five regions to two regions, normal and tumor, by defining the maximum point among the average deviation pixel values. We then drew the gross tumor volume (GTV) boundary by connecting the maximum points in the images with a semi-automatic contour method implemented in an IDL (Interactive Data Language) program. The error limit of the ROI boundary in the homogeneous phantom was estimated to be within ±1%. For the four patients, we confirmed that the tumor lesions delineated by the physician and those delineated automatically by the K-means classification algorithm and relative average deviation analysis were similar. These methods can turn an uncertain boundary between normal and tumor regions into a clear one, and will therefore be useful in CT image-based treatment planning, especially when CT images intermittently fail to visualize the tumor volume compared to MRI images.
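
The unsupervised step, replacing raw CT values with classified cluster averages, can be sketched with scikit-learn's K-means; the five-cluster count follows the abstract, while the threshold split standing in for the relative average deviation analysis is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_regions(ct_slice, k=5):
    """Replace each pixel with its cluster's mean value (k regions)."""
    x = ct_slice.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)
    means = km.cluster_centers_[km.labels_].reshape(ct_slice.shape)
    labels = km.labels_.reshape(ct_slice.shape)
    return means, labels

def tumor_mask(means, threshold):
    """Binary tumor/normal split of the classified image; a simplified
    stand-in for the paper's relative average deviation analysis."""
    return means >= threshold
```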

Image Pattern Classification and Recognition by Using the Associative Memory with Cellular Neural Networks (셀룰라 신경회로망의 연상메모리를 이용한 영상 패턴의 분류 및 인식방법)

  • Shin, Yoon-Cheol;Park, Yong-Hun;Kang, Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.2 / pp.154-162 / 2003
  • In this paper, an associative memory based on Cellular Neural Networks (CNN) classifies and recognizes image patterns as an operator applied to image processing. A CNN processes nonlinear data in real time like a neural network, and is made of cells that communicate directly with their neighboring cells, as in Cellular Automata. It has been applied to optimization problems, associative memory, pattern recognition, and computer vision. Image processing with a CNN is well suited to 2-D images because each cell, corresponding to a pixel in the image, is processed simultaneously in parallel. This paper presents a method for designing the structure of an associative memory based on a CNN and for obtaining the output image by choosing the most appropriate weight pattern among all the learned weight pattern memories. Each template represents the weight values between cells and is updated by learning. The Hebbian rule is used for learning the template weights, and the LMS algorithm is used for classification.
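
As a rough illustration of Hebbian template learning, the classical outer-product associative memory below stores bipolar patterns and recalls them from noisy probes; this Hopfield-style sketch is a simplified stand-in, not the paper's cellular-network templates.

```python
import numpy as np

def hebbian_store(patterns):
    """patterns: (m, n) bipolar (+1/-1) rows; Hebbian outer-product sum."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0)            # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iterate the update until a noisy probe settles on a stored pattern."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0               # break ties toward +1
    return s
```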

Comparison between in situ Survey and Satellite Imagery with Regard to Coastal Habitat Distribution Patterns in Weno, Micronesia (마이크로네시아 웨노섬 연안 서식지 분포의 현장조사와 위성영상 분석법 비교)

  • Kim, Taihun;Choi, Young-Ung;Choi, Jong-Kuk;Kwon, Moon-Sang;Park, Heung-Sik
    • Ocean and Polar Research / v.35 no.4 / pp.395-405 / 2013
  • The aim of this study is to suggest an optimal survey method for coastal habitat monitoring around Weno Island in Chuuk Atoll, Federated States of Micronesia (FSM). The study was carried out to compare and analyze differences between an in situ survey (PHOTS) and high-spatial-resolution satellite imagery (WorldView-2) with regard to the coastal habitat distribution patterns of Weno Island. The in situ field data showed the following coverage of habitat types: sand 42.4%, seagrass 26.1%, algae 14.9%, rubble 8.9%, hard coral 3.5%, soft coral 2.6%, dead coral 1.5%, others 0.1%. The satellite imagery showed the following coverage: sand 26.5%, seagrass 23.3%, sand + seagrass 12.3%, coral 18.1%, rubble 19.0%, rock 0.8% (accuracy 65.2%). According to a visual interpretation of the habitat map from the in situ survey, the seagrass, sand, coral, and rubble distributions were misaligned relative to the satellite imagery. While the satellite imagery appears to give plausible results for identifying habitat types, it could not classify habitat types smaller than one pixel, which in turn overestimated coral and rubble coverage and underestimated algae and sand. The differences appear to arise primarily from the habitat classification scheme, the sampling scale, and remote sensing reflectance. The implication of these results is that satellite imagery analysis needs to incorporate in situ survey data to identify habitats accurately. We suggest that the satellite imagery must correspond with the in situ survey in habitat classification and sampling scale, and that habitat sub-segmentation based on the in situ survey data should then be applied to the satellite imagery.
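
The coverage percentages and the 65.2% agreement figure reduce to simple per-class counts and a pixel-wise comparison; a short sketch (class codes are illustrative):

```python
import numpy as np

def coverage(labels, class_ids):
    """Percent cover of each habitat class in a classified map."""
    return {c: 100.0 * np.mean(labels == c) for c in class_ids}

def overall_accuracy(reference, classified):
    """Fraction of pixels where the map agrees with the in situ reference."""
    return float(np.mean(reference == classified))
```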