• Title/Summary/Keyword: Gray-Level Co-Occurrence Matrix (GLCM)


Analysis of Texture Information of forest stand on High Resolution Satellite Imagery (임분 특성에 따른 고해상도 위성영상의 Texture 정보 분석)

  • 김태근;이규성
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.145-150 / 2003
  • Forest analysis using high-resolution satellite imagery requires an approach different from that used for conventional medium- and low-resolution imagery. This study compares and analyzes texture information according to forest type, diameter class, and crown closure within an image, using texture, a key interpretation criterion for characterizing forest stands. For a forested area in Ulsan, texture analysis was performed on 1 m IKONOS imagery with three visible bands and one near-infrared band using the GLCM (Gray-Level Co-occurrence Matrix), one of the statistical methods commonly used to extract texture information. The texture information for each stand characteristic, extracted with reference to the 4th forest type map produced in 1996, was then compared and examined in order to suggest the texture measures best suited to interpreting forest characteristics from high-resolution satellite imagery. The texture information of forest stands in the high-resolution imagery varied with forest type, diameter class, and stand density.

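As a rough illustration of the per-window GLCM texture extraction described in the abstract above, the sketch below uses scikit-image's graycomatrix/graycoprops; the window size, the 32-level quantization, the offsets, and the placeholder band are assumptions of mine, not values from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_window_features(window, levels=32):
    """Common GLCM texture measures for a single image window."""
    # Quantize the band to a small number of gray levels so the
    # co-occurrence matrix stays densely populated.
    q = np.floor(window / (window.max() + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")}

# Placeholder stand-in for one band of a 1 m image (e.g., the near-infrared band).
band = np.random.randint(0, 2048, size=(512, 512)).astype(float)
print(glcm_window_features(band[100:133, 100:133]))
```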

Image Retrieval using Interleaved Contour by Declination Difference and Texture (편각 차분에 의한 중첩 윤곽선과 질감을 이용한 영상 검색)

  • Lee, Jeong-Bong;Kim, Hyun-Jong;Park, Chang-Choon
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.767-770 / 2002
  • As an image retrieval method, we propose a hierarchical retrieval system based on the characteristics of the human visual system, in which effective features are extracted from objects segmented using the high-frequency energy of the wavelet transform and morphological filtering. To obtain image-specific features, the shape information of objects, texture directionality, and color information are used. In this paper, to extract the shape information of an object, a shape feature vector based on the rate of change of the declination difference along the object contour is extracted from the user's query image, and the Contrast of the GLCM (Gray Level Co-occurrence Matrix) is extracted as the texture feature. These two features drive a first-stage classification; in the second stage, to perform a more accurate search, similarity is measured for the first-stage candidate images using color information as the finer-grained cue. As a result, the system showed efficient retrieval performance not only for images with similar color and shape but also for similar images with different colors.

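Purely as an illustration of the two-stage matching idea above (texture first, color second), here is a minimal sketch; the declination-difference shape feature is omitted, and the database layout, the candidate ratio, and the histogram-intersection ranking are my own assumptions rather than the authors' design.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(gray_u8):
    """Single GLCM Contrast value used as the coarse texture feature."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

def color_histogram(rgb_u8, bins=8):
    """Coarse RGB histogram used for the finer second-stage comparison."""
    hist, _ = np.histogramdd(rgb_u8.reshape(-1, 3),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def retrieve(query_gray, query_rgb, database, keep=0.2):
    """Two-stage search over items shaped like {'contrast': float, 'hist': array}."""
    qc, qh = glcm_contrast(query_gray), color_histogram(query_rgb)
    # Stage 1: keep the images whose GLCM Contrast is closest to the query's.
    ranked = sorted(database, key=lambda item: abs(item["contrast"] - qc))
    candidates = ranked[: max(1, int(len(ranked) * keep))]
    # Stage 2: rank the candidates by color-histogram intersection.
    return sorted(candidates, key=lambda item: -np.minimum(item["hist"], qh).sum())
```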

Image Retrieval Using Shape by Edge Feature and Texture and Color (에지 정보에 의한 형태와 질감 및 칼라 정보를 이용한 영상 검색)

  • 이정봉;이광호;최철;조성민;박장춘
    • Proceedings of the Korea Multimedia Society Conference / 2002.05c / pp.234-239 / 2002
  • As an image retrieval method, we propose a hierarchical content-based retrieval system based on the characteristics of the human visual system through effective feature extraction. To obtain image-specific features, the shape information, texture directionality, and color information present in the image are used. In this paper, edge feature information is extracted from the user's query image to obtain the shape information, and the Contrast of the GLCM (Gray Level Co-occurrence Matrix) is extracted as the texture feature from the image divided into sub-regions. These two features drive a first-stage classification; in the second stage, to perform a more accurate search, similarity is measured for the first-stage candidate images using color information as the finer-grained cue. As a result, the system showed efficient retrieval performance not only for images with similar color and shape but also for similar images with different colors.

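The abstract above takes GLCM Contrast from an image divided into sub-regions; a minimal sketch of such a per-region texture vector (the 4 × 4 grid and single-pixel offset are assumptions) could look like this:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def regional_contrast_vector(gray_u8, grid=(4, 4)):
    """GLCM Contrast per sub-region, concatenated into one feature vector."""
    h, w = gray_u8.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = gray_u8[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            glcm = graycomatrix(block, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            feats.append(float(graycoprops(glcm, "contrast")[0, 0]))
    return np.array(feats)
```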

Forensic Image Classification using Data Mining Decision Tree (데이터 마이닝 결정나무를 이용한 포렌식 영상의 분류)

  • RHEE, Kang Hyeon
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.7 / pp.49-55 / 2016
  • In digital forensics, images are distributed in a wide variety of types, which poses a serious problem. To address it, this paper proposes an algorithm for classifying forensic image types. The proposed algorithm extracts a 21-dim. feature vector from the contrast and energy of the GLCM (Gray Level Co-occurrence Matrix) together with the entropy of each image type. The classification test of the forensic images is performed over an exhaustive combination of the image types, and TP (True Positive) and FN (False Negative) rates are measured. The class evaluation of the proposed algorithm is rated 'Excellent (A)' because the AUROC (Area Under the Receiver Operating Characteristic curve) derived from the sensitivity and 1-specificity is 0.9980, with a minimum average decision error of 0.1349. Moreover, when all forensic image types are involved, the minimum average decision error is 0.0179, confirming that the classification effectiveness of the proposed algorithm is high.
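The abstract does not spell out how the 21 dimensions are arranged, so the sketch below is only one plausible layout: GLCM contrast and energy over five assumed offsets and two assumed angles (20 values) plus a global entropy, fed to a scikit-learn decision tree.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import shannon_entropy
from sklearn.tree import DecisionTreeClassifier

def forensic_features(gray_u8):
    """GLCM contrast/energy over several offsets plus image entropy (21-dim)."""
    distances = [1, 2, 4, 8, 16]   # assumed offsets
    angles = [0, np.pi / 4]        # assumed directions
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").ravel()   # 10 values
    energy = graycoprops(glcm, "energy").ravel()       # 10 values
    return np.concatenate([contrast, energy, [shannon_entropy(gray_u8)]])

# With labeled images: X = np.vstack([forensic_features(g) for g in grays]); y = types
clf = DecisionTreeClassifier(max_depth=8, random_state=0)
# clf.fit(X, y); clf.predict(...)
```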

Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery (UAV와 다시기 위성영상을 이용한 붕괴건물 탐지)

  • Jung, Sejung;Lee, Kirim;Yun, Yerin;Lee, Won Hee;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.187-196 / 2020
  • In this study, collapsed-building detection was carried out using UAV (Unmanned Aerial Vehicle) and PlanetScope satellite images, demonstrating the potential of heterogeneous sensors for detecting objects on the surface. To this end, an area in which about 20 buildings collapsed due to forest fire damage was selected as the study site. First, object-based segmentation was performed on high-resolution UAV images, and feature information of the objects, such as ExG (Excess Green), GLCM (Gray-Level Co-Occurrence Matrix), and DSM (Digital Surface Model), was generated. These features were then used to detect candidate collapsed buildings. In this process, the result of change detection using PlanetScope imagery was used together with the features to improve detection accuracy. More specifically, changed pixels obtained from the bitemporal PlanetScope images were used as seed pixels to correct misdetected and overdetected areas in the candidate group of collapsed buildings. The accuracy of collapsed-building detection using only the UAV image, and the accuracy when the UAV and PlanetScope images were used together, were analyzed against a manually digitized reference image. As a result, the UAV-only result had an F1-score of 0.4867, while the combined UAV and PlanetScope result improved to an F1-score of 0.8064. Moreover, the Kappa coefficient also improved dramatically, from 0.3674 to 0.8225.
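To illustrate two of the ingredients mentioned above, the sketch below computes ExG and keeps only the candidate objects that overlap the PlanetScope change seeds; the 10% overlap threshold is my assumption, and the GLCM/DSM features and the object-based segmentation itself are omitted.

```python
import numpy as np
from scipy import ndimage

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized bands."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    s = r + g + b + 1e-9
    return 2 * (g / s) - (r / s) - (b / s)

def confirm_candidates(candidate_mask, change_mask, min_overlap=0.1):
    """Keep candidate objects that sufficiently overlap the bitemporal change seeds."""
    labels, n = ndimage.label(candidate_mask)
    keep = np.zeros_like(candidate_mask, dtype=bool)
    for i in range(1, n + 1):
        obj = labels == i
        if (change_mask & obj).sum() / obj.sum() >= min_overlap:
            keep |= obj
    return keep
```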

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1407-1422 / 2022
  • This study was conducted to classify tree species and assess the classification accuracy using SE-Inception, a classification-based deep learning model. The input images of the dataset were Worldview-3 and GeoEye-1 images, and the input image size was divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate the tree species classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was then performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used as training data and about 15% as verification data. As a result of classification using the deep learning model, an overall accuracy of up to 78% was achieved with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed a high F1-score of more than 85% regardless of the input image size, while species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Therefore, there may be limitations in extracting features from the spectral information of satellite images alone, and classification accuracy may be improved by using images containing various pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
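The closing suggestion, adding GLCM pattern information to the spectral bands, could be prototyped as one extra input channel; in the sketch below the homogeneity measure, 7-pixel window, and 16 gray levels are all assumptions, and the per-pixel loop is deliberately simple rather than fast.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def add_glcm_channel(patch, window=7, levels=16):
    """Append a per-pixel GLCM homogeneity map to an (H, W, bands) patch."""
    gray = patch[..., :3].mean(axis=-1)
    q = np.floor(gray / (gray.max() + 1e-9) * (levels - 1)).astype(np.uint8)
    pad = window // 2
    qp = np.pad(q, pad, mode="reflect")
    tex = np.zeros_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            win = qp[i:i + window, j:j + window]
            glcm = graycomatrix(win, [1], [0], levels=levels,
                                symmetric=True, normed=True)
            tex[i, j] = graycoprops(glcm, "homogeneity")[0, 0]
    return np.dstack([patch, tex])   # the classifier now sees bands + texture
```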

Estrus Detection in Sows Based on Texture Analysis of Pudendal Images and Neural Network Analysis

  • Seo, Kwang-Wook;Min, Byung-Ro;Kim, Dong-Woo;Fwa, Yoon-Il;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Biosystems Engineering / v.37 no.4 / pp.271-278 / 2012
  • Worldwide trends in animal welfare have resulted in increased interest in the individual management of sows housed in groups within hog barns. Estrus detection has been shown to be one of the greatest determinants of sow productivity. Purpose: We conducted this study to develop a method that can automatically detect the estrus state of a sow by selecting optimal texture parameters from images of a sow's pudendum and by optimizing the number of neurons in the hidden layer of an artificial neural network. Methods: Texture parameters were analyzed according to changes in a sow's pudendum during estrus, such as mucus secretion and expansion. Of the texture parameters, eight gray level co-occurrence matrix (GLCM) parameters were used for image analysis. The image states were classified into ten grades for each GLCM parameter, and an artificial neural network was formed using the values for each grade as inputs to discriminate the estrus state of sows. The number of hidden-layer neurons is an important parameter in neural network design; therefore, we determined the optimal number of hidden-layer units by trial and error while increasing the number of neurons. Results: Fifteen hidden-layer neurons were determined to be optimal for the artificial neural network designed in this study. Thirty images of 10 sows were used for learning, and then 30 different images of 10 sows were used for verification. Conclusions: For learning, the back-propagation neural network (BPN) algorithm was used, and six texture parameters (homogeneity, angular second moment, energy, maximum probability, entropy, and GLCM correlation) were estimated successfully. Based on the verification results, homogeneity was determined to be the most important texture parameter, yielding an estrus detection rate of 70%.
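As a rough stand-in for the back-propagation network described above, a scikit-learn MLP with the 15 hidden units the study found optimal could be wired up as follows; the graded inputs here are random placeholders, and the activation, solver, and iteration count are my own choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder inputs: one row per pudendal image, eight GLCM parameters
# already mapped onto 10 grades; y marks estrus (1) vs. non-estrus (0).
X = np.random.randint(1, 11, size=(30, 8)).astype(float)
y = np.random.randint(0, 2, size=30)

net = MLPClassifier(hidden_layer_sizes=(15,), activation="logistic",
                    solver="adam", max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X[:5]))
```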

Copyright Protection for Fire Video Images using an Effective Watermarking Method (효과적인 워터마킹 기법을 사용한 화재 비디오 영상의 저작권 보호)

  • Nguyen, Truc;Kim, Jong-Myon
    • KIPS Transactions on Software and Data Engineering / v.2 no.8 / pp.579-588 / 2013
  • This paper proposes an effective watermarking approach for the copyright protection of fire video images. The proposed approach efficiently exploits the inherent color and texture characteristics of fire data by using a gray level co-occurrence matrix (GLCM) and fuzzy c-means (FCM) clustering. GLCM is used to generate a texture feature dataset by computing the energy and homogeneity properties of each candidate fire image block. FCM is used to segment the color of the fire image and to select fire texture blocks for embedding watermarks. Each selected block is then decomposed into a one-level wavelet structure with four subbands [LL, LH, HL, HH] using a discrete wavelet transform (DWT), and LH subband coefficients, scaled by a gain factor, are selected for embedding the watermark so that the visual quality of the image is not affected. Experimental results show that the proposed watermarking approach achieves a high peak signal-to-noise ratio (PSNR) of about 48 dB and low M-singular value decomposition (M-SVD) values of 1.6 to 2.0. In addition, the proposed approach outperforms a conventional image watermarking approach in terms of normalized correlation (NC) values against several image processing attacks, including noise addition, filtering, cropping, and JPEG compression.
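A minimal sketch of just the LH-subband embedding step is shown below; PyWavelets, the Haar wavelet, and the additive spread with a small gain factor are my assumptions, and the GLCM/FCM block-selection stage is omitted.

```python
import numpy as np
import pywt

def embed_block(block, wm_bits, gain=0.05):
    """Embed watermark bits into the LH detail subband of a one-level DWT."""
    # pywt returns (approx, (horizontal, vertical, diagonal)); the names below
    # follow the paper's [LL, LH, HL, HH] subband convention.
    LL, (LH, HL, HH) = pywt.dwt2(block.astype(float), "haar")
    flat = LH.ravel().copy()
    n = min(len(wm_bits), flat.size)
    # Additive spread: +gain for bit 1, -gain for bit 0, scaled per coefficient.
    flat[:n] += gain * (2 * np.asarray(wm_bits[:n]) - 1) * np.abs(flat[:n])
    return pywt.idwt2((LL, (flat.reshape(LH.shape), HL, HH)), "haar")
```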

Determination of Absorbed Dose for Gafchromic EBT3 Film Using Texture Analysis of Scanning Electron Microscopy Images: A Feasibility Study

  • So-Yeon Park
    • Progress in Medical Physics / v.33 no.4 / pp.158-163 / 2022
  • Purpose: We subjected scanning electron microscope (SEM) images of the active layer of EBT3 film to texture analysis to determine the dose-response curve. Methods: Uncoated Gafchromic EBT3 films were prepared for direct surface SEM scanning. Absorbed doses of 0-20 Gy were delivered to the film's surface using a 6 MV TrueBeam STx photon beam. The film's surface was scanned with a SEM under 100× and 3,000× magnification. Four textural features (Homogeneity, Correlation, Contrast, and Energy) were calculated from the gray level co-occurrence matrix (GLCM) of the SEM images corresponding to each dose. We used R-square to evaluate the linear relationship between the delivered doses and the textural features of the film's surface. Results: Correlation yielded higher linearity and dose-response curve sensitivity than Homogeneity, Contrast, or Energy. The R-square value was 0.964 for Correlation using 3,000× magnified SEM images with 9-pixel offsets. Dose verification showed differences between the prescribed and measured doses of 0.09, 1.96, -2.29, 0.17, and 0.08 Gy for 0, 5, 10, 15, and 20 Gy, respectively. Conclusions: Texture analysis can be used to accurately convert microscopic structural changes in the EBT3 film's surface into absorbed doses. Our proposed method is feasible and may improve the accuracy of film dosimetry used to protect patients from excess radiation exposure.
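The dose-response curve described above amounts to a linear fit of dose against the GLCM Correlation feature judged by R-square; a minimal sketch with placeholder correlation values (not the paper's measurements) might be:

```python
import numpy as np
from scipy import stats

# Delivered doses (Gy) and the GLCM Correlation measured from the matching
# 3,000x SEM images; the correlation values here are placeholders only.
doses = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
correlation = np.array([0.10, 0.28, 0.47, 0.63, 0.82])  # placeholder

fit = stats.linregress(correlation, doses)
print(f"R-square = {fit.rvalue ** 2:.3f}")

def to_dose(measured_correlation):
    """Convert a measured GLCM Correlation back into an absorbed dose (Gy)."""
    return fit.slope * measured_correlation + fit.intercept
```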

The Object Image Detection Method using statistical properties (통계적 특성에 의한 객체 영상 검출방안)

  • Kim, Ji-hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.7 / pp.956-962 / 2018
  • As a study of object feature detection from images, we present methods to identify tree species in a forest using pictures taken from a drone. In general, methods such as GLCM (Gray Level Co-occurrence Matrix) and Gabor filters are used to extract object features. In this research, we propose an object extraction method based on the statistical properties of trees, motivated by the similarity of the leaves. After extracting sample images from the original images, we detect the objects using cross-correlation between the original image and the sample images. Through this experiment, we found that the mean value and standard deviation of the sample images are very important factors for identifying the object. Analysis of the color components of the RGB and HSV models is also used to identify the object.
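The cross-correlation step between the original image and the sample (template) images could be sketched with scikit-image's normalized cross-correlation, which normalizes by the local mean and standard deviation, the statistics the abstract highlights; the 0.7 threshold is an assumption, and the RGB/HSV color analysis is omitted.

```python
import numpy as np
from skimage.feature import match_template

def find_objects(image_gray, template_gray, threshold=0.7):
    """Locate template-like regions via normalized cross-correlation."""
    # match_template subtracts the local mean and divides by the local standard
    # deviation, so the sample image's mean/std directly shape the match score.
    ncc = match_template(image_gray, template_gray, pad_input=True)
    return np.argwhere(ncc >= threshold), ncc
```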