• Title/Summary/Keyword: pixel based classification


Plants Disease Phenotyping using Quinary Patterns as Texture Descriptor

  • Ahmad, Wakeel;Shah, S.M. Adnan;Irtaza, Aun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3312-3327
    • /
    • 2020
  • Plant diseases are a significant yield and quality constraint for farmers around the world due to their severe impact on agricultural productivity. Such losses can have a substantial impact on the economy which causes a reduction in farmer's income and higher prices for consumers. Further, it may also result in a severe shortage of food ensuing violent hunger and starvation, especially, in less-developed countries where access to disease prevention methods is limited. This research presents an investigation of Directional Local Quinary Patterns (DLQP) as a feature descriptor for plants leaf disease detection and Support Vector Machine (SVM) as a classifier. The DLQP as a feature descriptor is specifically the first time being used for disease detection in horticulture. DLQP provides directional edge information attending the reference pixel with its neighboring pixel value by involving computation of their grey-level difference based on quinary value (-2, -1, 0, 1, 2) in 0°, 45°, 90°, and 135° directions of selected window of plant leaf image. To assess the robustness of DLQP as a texture descriptor we used a research-oriented Plant Village dataset of Tomato plant (3,900 leaf images) comprising of 6 diseased classes, Potato plant (1,526 leaf images) and Apple plant (2,600 leaf images) comprising of 3 diseased classes. The accuracies of 95.6%, 96.2% and 97.8% for the above-mentioned crops, respectively, were achieved which are higher in comparison with classification on the same dataset using other standard feature descriptors like Local Binary Pattern (LBP) and Local Ternary Patterns (LTP). Further, the effectiveness of the proposed method is proven by comparing it with existing algorithms for plant disease phenotyping.

Content Adaptive Interpolation for Intra-field Deinterlacing (공간적 디인터레이싱을 위한 컨텐츠 기반 적응적 보간 기법)

  • Kim, Won-Ki;Jin, Soon-Jong;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.10C
    • /
    • pp.1000-1009
    • /
    • 2007
  • This paper presents a content adaptive interpolation (CAI) method for intra-field deinterlacing. The CAI consists of three steps: pre-processing, content classification, and adaptive interpolation. The proposed CAI uses three main interpolation methods: modified edge-based line averaging (M-ELA), gradient directed interpolation (GDI), and the window matching method (WMM). Each method performs differently depending on local spatial features, so we analyze the local region using gradient detection and classify each missing pixel into one of four categories. Based on the classification result, the deinterlacing algorithm best suited to each category is then applied to obtain the best performance. Experimental results demonstrate that the CAI method performs better than previous techniques.
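
Of the interpolators listed in the entry above, edge-based line averaging is the simplest to sketch. The snippet below is plain ELA for a single missing scan line, choosing the interpolation direction with the smallest absolute difference between the lines above and below. It is not the paper's modified M-ELA, GDI, or WMM; the three candidate directions are the usual textbook choice.

```python
import numpy as np

def ela_interpolate_line(above, below):
    """Edge-based line averaging for one missing scan line.
    'above' and 'below' are the neighbouring existing lines (1-D arrays).
    For each missing pixel, the direction (45-degree, vertical, 135-degree)
    with the smallest absolute difference is averaged."""
    w = above.shape[0]
    out = np.empty(w, dtype=above.dtype)
    a = above.astype(np.int32)
    b = below.astype(np.int32)
    for x in range(w):
        xl, xr = max(x - 1, 0), min(x + 1, w - 1)
        candidates = [
            (abs(a[xl] - b[xr]), (a[xl] + b[xr]) // 2),  # 45-degree edge
            (abs(a[x] - b[x]),   (a[x] + b[x]) // 2),    # vertical
            (abs(a[xr] - b[xl]), (a[xr] + b[xl]) // 2),  # 135-degree edge
        ]
        out[x] = min(candidates, key=lambda c: c[0])[1]
    return out
```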

A Study on Atmospheric Correction in Satellite Imagery Using an Atmospheric Radiation Model (대기복사모형을 이용한 위성영상의 대기보정에 관한 연구)

  • Oh, Sung-Nam
    • Atmosphere
    • /
    • v.14 no.2
    • /
    • pp.11-22
    • /
    • 2004
  • An atmospheric correction algorithm for the multi-band reflectance of Landsat TM imagery has been developed using an atmospheric radiative transfer model to eliminate atmospheric and surface diffusion effects. Although satellite image processing techniques have continually improved, there is still a difference between the radiance value registered by a satellite-borne detector and the true value at the ground surface. This difference is caused by atmospheric attenuation during the radiative transfer process, which is mostly associated with aerosol particles in atmospheric suspension and with surface irradiance characteristics. The atmospheric reflectance depends on atmospheric optical depth and aerosol concentration and is closely related to geographical and environmental surface characteristics. Therefore, when the effects of surface diffusion and aerosol reflectance are eliminated from the satellite image, it is effectively corrected for atmospheric optical conditions. The objective of this study is to develop an algorithm for the atmospheric correction of satellite imagery. A correction function was developed to eliminate the effects of atmospheric path scattering and the spectral reflectance of adjacent pixels within an atmospheric radiation model. The diffused radiance of adjacent pixels is obtained from the average reflectance in the $7{\times}7$ neighbourhood pixels together with the land cover classification. The atmospheric correction functions are provided by the LOWTRAN 7 radiative transfer model, driven by actual atmospheric soundings over the complex Korean atmosphere. The model produces the upward radiances of the satellite spectral image for a given surface reflectance and aerosol optical thickness.
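
The adjacency term mentioned above, the average reflectance in a $7{\times}7$ neighbourhood, can be computed with a simple moving window. The sketch below is only that windowed mean; it does not implement the LOWTRAN 7 correction functions or the land-cover weighting described in the abstract, and the reflective border padding is an assumption.

```python
import numpy as np

def neighbourhood_mean_reflectance(band, size=7):
    """Average reflectance in a size x size window around every pixel,
    used as a stand-in for the adjacency (environment) term described in
    the abstract. Reflective padding at the image borders."""
    pad = size // 2
    padded = np.pad(band, pad, mode="reflect")
    h, w = band.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].mean()
    return out
```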

Semi-Automated Extraction of Geographic Information using KOMPSAT 2 : Analyzing Image Fusion Methods and Geographic Objected-Based Image Analysis (다목적 실용위성 2호 고해상도 영상을 이용한 지리 정보 추출 기법 - 영상융합과 지리객체 기반 분석을 중심으로 -)

  • Yang, Byung-Yun;Hwang, Chul-Sue
    • Journal of the Korean Geographical Society
    • /
    • v.47 no.2
    • /
    • pp.282-296
    • /
    • 2012
  • This study compared the effects of spatial resolution ratio in image fusion of Korea Multi-Purpose SATellite 2 (KOMPSAT-2, also known as Arirang-2) imagery. Image fusion techniques, also called pansharpening, are required to obtain high-spatial-resolution color imagery from panchromatic and multi-spectral images. The higher-quality satellite images generated by image fusion enable interpreters to produce better application results. Fusion methods from three domains were therefore applied to KOMPSAT-2 imagery to identify significantly improved fused images, and all fused images were evaluated for both spectral and spatial quality to determine an optimum result. In addition, this research compared Pixel-Based Image Analysis (PBIA) with GEOgraphic Object-Based Image Analysis (GEOBIA) to obtain better classification results; specifically, building rooftops were extracted with both image analysis approaches and evaluated to identify the more accurate one. This research therefore supports the effective use of very high resolution satellite imagery by image interpreters in many applications, such as coastal-area analysis and urban and regional planning.
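
As a concrete example of one pansharpening family of the kind compared in the study above, the sketch below implements the Brovey transform, in which each multispectral band is modulated by the ratio of the panchromatic band to the band sum. The choice of Brovey (rather than whichever fusion the paper found optimal) and the three-band assumption are illustrative.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pansharpening.
    ms:  (H, W, 3) multispectral image already resampled to the pan grid.
    pan: (H, W) panchromatic band at full resolution.
    Each band is scaled by pan / sum(bands)."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    band_sum = ms.sum(axis=2) + 1e-9          # avoid division by zero
    ratio = pan / band_sum
    return ms * ratio[..., None]
```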


EXTRACTING BASE DATA FOR FLOOD ANALYSIS USING HIGH RESOLUTION SATELLITE IMAGERY

  • Sohn, Hong-Gyoo;Kim, Jin-Woo;Lee, Jung-Bin;Song, Yeong-Sun
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.426-429
    • /
    • 2006
  • Flooding caused by typhoons and severe summer rain is the most destructive natural disaster in Korea. Almost every year, floods cause major losses of national infrastructure and civilian lives, and estimating the resulting damage usually takes considerable time and effort. The government has also pursued proper standards and tools that make use of state-of-the-art technologies. High resolution satellite imagery is one of the most promising sources of ground truth information, since it provides detailed and current ground information such as buildings, roads, and bare ground. Once high resolution imagery is utilized, it can greatly reduce the field work and cost of flood-related damage assessment. Classification of the high resolution image is a prerequisite step for damage assessment: the classified image, combined with additional data such as a DEM and DSM, helps estimate the flooded area for each classified land use. This paper applied an object-oriented classification scheme that interprets the image not as individual pixels but as meaningful image objects and their mutual relations. Compared with other classification algorithms, object-oriented classification was very effective and accurate. In this paper an IKONOS image is used, but high resolution Korean KOMPSAT imagery of a similar level can be investigated once it is available.
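
The object-oriented idea described above, classifying meaningful image objects rather than single pixels, can be sketched generically: segment first, then assign one label per segment from its aggregate spectrum. The snippet uses SLIC superpixels from scikit-image as a stand-in segmenter and a caller-supplied classify_fn; both are assumptions, not the segmentation or rule base used in the paper.

```python
import numpy as np
from skimage.segmentation import slic

def object_based_classify(image, classify_fn, n_segments=500):
    """Object-based classification sketch.
    image:       (H, W, bands) array.
    classify_fn: function mapping a mean spectrum (bands,) to a class id.
    Returns a per-pixel label map where every pixel of an object shares
    the label assigned to that object."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    labels = np.zeros(segments.shape, dtype=np.int32)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        mean_vec = image[mask].mean(axis=0)   # object-level feature
        labels[mask] = classify_fn(mean_vec)
    return labels
```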


Vegetation Mapping of Hawaiian Coastal Lowland Using Remotely Sensed Data (원격탐사 자료를 이용한 하와이 해안지역 식생 분류)

  • Park, Sun-Yurp
    • Journal of the Korean association of regional geographers
    • /
    • v.12 no.4
    • /
    • pp.496-507
    • /
    • 2006
  • A hybrid approach integrating high-resolution and hyperspectral data sets was used to map the vegetation cover of a coastal lowland area in Hawaii Volcanoes National Park. The study focused on three common grass species (broomsedge, natal redtop, and pili) and other non-grass species, primarily shrubs. A three-step hybrid approach combining unsupervised and supervised classification schemes was applied to the vegetation mapping. First, IKONOS 1-m high-resolution data were classified to create a binary image (vegetated vs. non-vegetated) and converted to 20-meter resolution percent vegetation cover data to match the AVIRIS pixels. Second, the minimum noise fraction (MNF) transformation was used to extract a coherent dimensionality from the original AVIRIS data. Since the grasses and shrubs were sparsely distributed and most image pixels were intermingled with lava surfaces, the reflectance component of lava was filtered out with a binary fractional cover analysis, assuming that the total reflectance of a pixel is a linear combination of the reflectance spectra of the vegetation and the lava surface. Finally, a supervised approach was used to classify the plant species with the maximum likelihood algorithm.
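
The binary fractional cover step above assumes each pixel's spectrum is a linear mixture of a vegetation endmember and a lava endmember. A least-squares solution for the vegetation fraction under that two-endmember assumption is sketched below; the endmember spectra are inputs the user must supply, not values from the paper.

```python
import numpy as np

def two_endmember_unmix(pixel, veg_spectrum, lava_spectrum):
    """Estimate the vegetation fraction f of a mixed pixel assuming a
    linear mixture: pixel = f * veg + (1 - f) * lava.
    Least-squares solution, clipped to [0, 1]."""
    d = veg_spectrum - lava_spectrum
    f = np.dot(pixel - lava_spectrum, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))
```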


A Study on the Extraction of Knowledge for Image Understanding (영상이해를 위한 지식유출에 관한 연구)

  • 곽윤식;이대영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.5
    • /
    • pp.757-772
    • /
    • 1993
  • This paper describes knowledge extraction for image understanding in a knowledge-based system. The current set of low-level processes operates on the numerical pixel arrays to segment the image into regions, to convert the image into a directional image, and to calculate features for these regions. The current set of intermediate-level processes operates on the results of the earlier knowledge sources to build more complex representations of the data. We have grouped these into three categories: feature-based classification, geometric token relations, and perceptual organization and grouping.


Semantic Object Segmentation Using Conditional Generative Adversarial Network with Residual Connections (잔차 연결의 조건부 생성적 적대 신경망을 사용한 시맨틱 객체 분할)

  • Ibrahem, Hatem;Salem, Ahmed;Yagoub, Bilel;Kang, Hyun Su;Suh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.12
    • /
    • pp.1919-1925
    • /
    • 2022
  • In this paper, we propose an image-to-image translation approach based on a conditional generative adversarial network for semantic segmentation. Semantic segmentation is the task of clustering together the parts of an image that belong to the same object class. Unlike the traditional pixel-wise classification approach, the proposed method parses an input RGB image into its corresponding semantic segmentation mask using pixel regression. The method is based on the Pix2Pix image synthesis framework and employs convolutional neural network architectures with residual connections for both the generator and the discriminator, as the residual connections speed up training and produce more accurate results. The proposed method was trained and tested on the NYU-depthV2 dataset and achieved an mIoU of 49.5%. We also compare the proposed approach with current semantic segmentation methods and show that it outperforms most of them.
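
The residual connections the abstract credits with faster training and better results can be illustrated with a standard residual convolutional block. The sketch below uses PyTorch; channel counts, normalization, and activation choices are illustrative assumptions rather than the exact generator/discriminator blocks of the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual convolutional block: two 3x3 convolutions with batch
    normalization, plus a skip connection from input to output."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # skip connection
```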

Class 1·3 Vehicle Classification Using Deep Learning and Thermal Image (열화상 카메라를 활용한 딥러닝 기반의 1·3종 차량 분류)

  • Jung, Yoo Seok;Jung, Do Young
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.6
    • /
    • pp.96-106
    • /
    • 2020
  • To overcome the limitations of traffic monitoring with embedded sensors such as loop and piezo sensors, a thermal imaging camera was installed on the roadside. As Class 1 vehicles (passenger cars) become longer, it is increasingly difficult to distinguish them from Class 3 vehicles (2-axle trucks) using embedded sensors. The collected images were labeled to generate training data, producing a total of 17,536 vehicle images (640x480 pixels). A CNN (Convolutional Neural Network) was used for vehicle classification based on the thermal images. Despite the limited data volume and quality, a classification accuracy of 97.7% was achieved, showing the feasibility of an AI-based traffic monitoring system. If more training data is collected in the future, 12-class classification will be possible, and AI-based traffic monitoring will also be able to classify additional categories such as eco-friendly vehicles, vehicles in violation, and motorcycles, which can be used as statistical data for national policy, research, and industry.
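
A small CNN classifier of the kind described above, taking single-channel thermal frames and producing Class 1 vs Class 3 scores, is sketched below in PyTorch. The layer layout, channel counts, and pooling are placeholders, not the network reported in the paper.

```python
import torch
import torch.nn as nn

class ThermalVehicleCNN(nn.Module):
    """Small CNN for 2-class (Class 1 vs Class 3) thermal-image
    classification; the architecture is illustrative only."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (N, 1, 480, 640) thermal frames
        return self.classifier(self.features(x).flatten(1))
```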

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning
    • /
    • v.30 no.1
    • /
    • pp.57-66
    • /
    • 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams using a combination of drone remote sensing and support vector machine (SVM) techniques. The study area was the Idong stream in Geosan-gun, Chungbuk, South Korea. The first stage involved image acquisition with a fixed-wing drone (eBee) carrying two sensors: the S.O.D.A visible camera for detailed visuals and the Sequoia+ multispectral sensor for rich spectral data. The survey captured the stream's features on August 18, 2023. In the second stage, a range of vegetation indices were calculated from the multispectral images, including the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI), which accounts for soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage was the development of an SVM model based on the calculated vegetation indices; the RBF kernel was chosen, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows: (a) High-resolution imaging: the drone-based acquisition provided high-resolution images (1 cm/pixel) of the Idong stream that effectively captured the stream's morphology, including its width, variations in the streambed, and the vegetation cover patterns on the stream banks and bed. (b) Vegetation insights through indices: the calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content; NDVI was the strongest indicator of vegetation cover, while SAVI and NDWI provided insight into moisture variations. (c) Accurate classification with SVM: the SVM model, using the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903 calculated from the confusion matrix, which translated into precise classification of vegetation, soil, and water within the stream area. These findings demonstrate the effectiveness of drone remote sensing and SVM techniques for developing accurate vegetation cover classification models for small streams. Such models hold strong potential for applications including stream monitoring, informed management practices, and effective stream restoration. By incorporating imagery and additional detail about the specific drone and sensor technology, we can gain a deeper understanding of small streams and develop effective strategies for stream protection and management.
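
The index features and the RBF-kernel SVM described above can be sketched with NumPy and scikit-learn. The index formulas below are the common definitions of NDVI, SAVI, and NDWI, and the C and gamma values are placeholders, not the tuned hyperparameters the study selected.

```python
import numpy as np
from sklearn.svm import SVC

def spectral_indices(nir, red, green, L=0.5):
    """NDVI, SAVI and NDWI from multispectral reflectance bands (H, W).
    Standard definitions; band choices are assumptions, not necessarily
    the exact formulation used in the study."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    savi = (1 + L) * (nir - red) / (nir + red + L + 1e-9)
    ndwi = (green - nir) / (green + nir + 1e-9)
    return np.stack([ndvi, savi, ndwi], axis=-1)   # (H, W, 3)

def train_cover_svm(features, labels, C=10.0, gamma=0.1):
    """Fit an RBF-kernel SVM on per-pixel index features (H, W, 3) with a
    class label map (H, W); C and gamma are placeholder values."""
    X = features.reshape(-1, features.shape[-1])
    y = labels.ravel()
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    clf.fit(X, y)
    return clf
```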