• Title/Summary/Keyword: color segmentation


Land Cover Classification Using UAV Imagery and Object-Based Image Analysis - Focusing on the Maseo-myeon, Seocheon-gun, Chungcheongnam-do - (UAV와 객체기반 영상분석 기법을 활용한 토지피복 분류 - 충청남도 서천군 마서면 일원을 대상으로 -)

  • MOON, Ho-Gyeong;LEE, Seon-Mi;CHA, Jae-Gyu
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.1
    • /
    • pp.1-14
    • /
    • 2017
  • A land cover map provides basic information for understanding the current state of a region, but its use in ecological research has been limited by the temporal and spatial resolution of available maps. The purpose of this study was to investigate the possibility of producing a land cover map from high-resolution images acquired by UAV. Using the UAV, 10.5 cm orthoimages were obtained over the 2.5 km² study area, and land cover maps were produced by object-based and pixel-based classification for comparison and analysis. Accuracy verification showed high classification accuracy, with a Kappa of 0.77 for the pixel-based classification and 0.82 for the object-based classification. The overall area ratios were similar, and good classification results were found in grasslands and wetlands. The optimal image segmentation weights for the object-based classification were Scale=150, Shape=0.5, Compactness=0.5, and Color=1; Scale was the most influential factor in the weight selection process. Compared with the pixel-based classification, the object-based classification produces results that are easy to read because there are clear boundaries between objects. Compared with the land cover map from the Ministry of Environment (subdivision), it was effective for natural areas (forests, grasslands, wetlands, etc.) but not for developed areas (roads, buildings, etc.). The application of an object-based classification method for land cover using UAV images can contribute to ecological research through rapidly updated data, good accuracy, and economical efficiency.
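As a quick illustration of the Kappa statistic used for the accuracy verification above, a minimal sketch (the confusion matrix below is illustrative, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Expected chance agreement: sum over classes of row-marginal * column-marginal
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

# Illustrative 3-class confusion matrix (hypothetical counts)
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 45]]
print(round(cohens_kappa(cm), 3))  # ≈ 0.798
```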

A Study on the Deep Neural Network based Recognition Model for Space Debris Vision Tracking System (심층신경망 기반 우주파편 영상 추적시스템 인식모델에 대한 연구)

  • Lim, Seongmin;Kim, Jin-Hyung;Choi, Won-Sub;Kim, Hae-Dong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.9
    • /
    • pp.794-806
    • /
    • 2017
  • As a space-developing country, it is essential to protect national space assets and the space environment from the continuously increasing amount of space debris, and Active Debris Removal (ADR) is the most active way to address this problem. In this paper, we studied an Artificial Neural Network (ANN) as a stable recognition model for a vision-based space debris tracking system. We obtained simulated images of the space environment from KARICAT, a ground-based space debris removal satellite testbed developed by the Korea Aerospace Research Institute, and created vectors encoding the structural and color-based features of each object after image segmentation by depth discontinuity. The feature vector consists of 3D surface area, the principal vectors of the point cloud, 2D shape, and color information. We designed an artificial neural network model based on the separated feature vector. To improve its performance, the model is divided according to the categories of the input feature vectors, and an ensemble technique is applied across the sub-models. As a result, we confirmed the performance improvement of the recognition model achieved by the ensemble technique.
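The per-category ensemble described above can be sketched as soft voting over category-specific sub-models; a minimal illustration, with placeholder probability outputs standing in for the paper's trained networks:

```python
import numpy as np

def ensemble_predict(category_probs):
    """Average class probabilities from sub-models trained on separate
    feature categories (e.g., surface area, point-cloud axes, shape, color),
    then pick the most probable class per sample."""
    avg = np.mean(np.stack(category_probs), axis=0)  # (n_samples, n_classes)
    return avg.argmax(axis=1)

# Toy softmax outputs from three category-specific sub-models: 2 samples x 3 classes
p_shape = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p_color = np.array([[0.5, 0.4, 0.1], [0.2, 0.2, 0.6]])
p_area  = np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]])
print(ensemble_predict([p_shape, p_color, p_area]))  # one class index per sample
```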

Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.4
    • /
    • pp.412-415
    • /
    • 2017
  • In this paper, we propose a multi license plate recognition system using a high resolution 360° omnidirectional IP camera. The proposed system consists of a part that converts the 360° circular image to a planar image and a multi license plate recognition part. The planar conversion part transforms the circular image into a planar image of enhanced quality through circular image acquisition, circular image segmentation, conversion to a planar image, pixel correction using color interpolation, color correction, and edge correction in the high resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes multiple plates in the planar image through candidate plate region extraction, normalization and restoration of the candidate regions, and recognition of the plate numbers and characters using a neural network. To evaluate the proposed system, experiments were carried out with a specialist operator of an intelligent parking control system, and a high plate recognition rate of 97.8% was confirmed.
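The conversion from the 360° circular image to a planar image can be sketched as a polar-to-plane remap; a minimal nearest-neighbor version (the function and parameter names are hypothetical, and the paper's quality-enhancement steps such as color interpolation and edge correction are omitted):

```python
import numpy as np

def unwarp_circular(img, cx, cy, radius, out_h, out_w):
    """Unwrap a circular omnidirectional image into a planar strip:
    output columns sample the azimuth angle, rows sample the radius."""
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)   # azimuth per column
    r = np.linspace(0, radius, out_h, endpoint=False)          # radius per row
    x = (cx + np.outer(r, np.cos(theta))).round().astype(int)  # (out_h, out_w) source coords
    y = (cy + np.outer(r, np.sin(theta))).round().astype(int)
    x = np.clip(x, 0, img.shape[1] - 1)
    y = np.clip(y, 0, img.shape[0] - 1)
    return img[y, x]  # nearest-neighbor sampling

# Sanity check on a constant image: the unwarped plane is constant too
circ = np.full((101, 101), 7.0)
flat = unwarp_circular(circ, cx=50, cy=50, radius=50, out_h=32, out_w=128)
```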

Correction Algorithm of Errors by Seagrasses in Coastal Bathymetry Surveying Using Drone and HD Camera (드론과 HD 카메라를 이용한 수심측량시 잘피에 의한 오차제거 알고리즘)

  • Kim, Gyeongyeop;Choi, Gunhwan;Ahn, Kyungmo
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.32 no.6
    • /
    • pp.553-560
    • /
    • 2020
  • This paper presents an algorithm for identifying and eliminating errors caused by seagrasses in coastal bathymetry surveying using a drone and an HD camera. Survey errors due to seagrasses were identified, segmented, and eliminated using an L*a*b* color space model. Bathymetry surveying using a drone and HD camera has many advantages over conventional survey methods, such as ship-board acoustic sounding or manual level surveys, which are time-consuming and expensive. However, errors caused by seabed reflectance over seagrass habitats hamper the development of this new surveying tool. Seagrasses are flowering plants which, in Korea, start to grow in November and flourish to their maximum density until April. We developed a new algorithm for identifying seagrass habitat locations and eliminating the errors they introduce, in order to obtain accurate depth survey data, and tested it at Wolpo beach. Bathymetry data obtained using a drone with an HD camera and calibrated to eliminate seagrass-induced errors were compared with depth data obtained using a ship-board multi-beam acoustic sounder. Abnormal bathymetry data, defined as values exceeding 1.5 times the standard deviation of the random errors, made up 8.6% of the 200 m by 300 m test site. By applying the developed algorithm, 92% of the abnormal bathymetry data were successfully eliminated and the RMS error was reduced by 33%.
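The L*a*b*-based identification of seagrass pixels can be sketched as follows; the conversion is the standard sRGB-to-CIELAB formula (D65 white point), and the a* threshold is an illustrative placeholder, not the paper's calibrated value:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (shape (..., 3)) to CIE L*a*b* (D65)."""
    # Gamma expansion to linear RGB
    c = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ m.T / np.array([0.95047, 1.0, 1.08883])  # normalize by white point
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])   # a* < 0 means green
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def seagrass_mask(rgb, a_thresh=-20.0):
    """Flag strongly green pixels (a* well below zero) as seagrass-like."""
    return srgb_to_lab(rgb)[..., 1] < a_thresh

pixels = np.array([[0.0, 0.8, 0.2],    # green: seagrass-like
                   [0.6, 0.5, 0.4]])   # sandy bottom
```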

Development of the Multi-Parametric Mapping Software Based on Functional Maps to Determine the Clinical Target Volumes (임상표적체적 결정을 위한 기능 영상 기반 생물학적 인자 맵핑 소프트웨어 개발)

  • Park, Ji-Yeon;Jung, Won-Gyun;Lee, Jeong-Woo;Lee, Kyoung-Nam;Ahn, Kook-Jin;Hong, Se-Mie;Juh, Ra-Hyeong;Choe, Bo-Young;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.2
    • /
    • pp.153-164
    • /
    • 2010
  • To determine clinical target volumes considering the vascularity and cellularity of tumors, software was developed for mapping the analyzed biological clinical target volumes onto anatomical images using regional cerebral blood volume (rCBV) maps and apparent diffusion coefficient (ADC) maps. The program provides functions for integrated registration using mutual information, affine transforms, and non-rigid registration. Registration accuracy is evaluated by calculating the overlap ratio of segmented bone regions and the average contour distance between reference and registered images. The performance of the developed software was tested using multimodal images of a patient with residual high-grade glioma. A registration accuracy of about 74% and an average contour distance of 2.3 mm were obtained by the bone segmentation and contour extraction evaluation; the registration accuracy can be improved by as much as 4% using the manual adjustment functions. Advanced MR images are analyzed using color maps for the rCBV maps and quantitative region-of-interest (ROI) calculations for the ADC maps. Multiple parameters for the same voxels are then plotted on a plane, constituting multi-functional parametric maps whose x and y axes represent rCBV and ADC values. According to the distributions of the functional parameters, tumor regions showing higher vascularity and cellularity are categorized by criteria corresponding to malignant gliomas. The determined volumes, reflecting the pathological and physiological characteristics of the tumor, are marked on the anatomical images. By applying multi-functional images, errors arising from using a single type of image can be reduced, and local regions with a higher probability of containing tumor cells can be determined for the radiation treatment plan. Biological tumor characteristics can thus be expressed using image registration and multi-functional parametric maps in the developed software, which can be used to delineate clinical target volumes from advanced MR images together with anatomical images.
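The joint (rCBV, ADC) categorization can be sketched as thresholding in the parametric plane; the thresholds below are illustrative placeholders, not the clinical criteria used in the paper:

```python
import numpy as np

def categorize_voxels(rcbv, adc, rcbv_high=2.0, adc_low=1.0e-3):
    """Mark voxels as tumor-suspicious when relative CBV is high
    (hypervascularity) and ADC is low (high cellularity).
    Both cutoffs are illustrative placeholders."""
    return (rcbv > rcbv_high) & (adc < adc_low)

# Toy per-voxel values: only the first voxel is both high-rCBV and low-ADC
rcbv = np.array([3.1, 1.2, 2.5, 0.8])
adc  = np.array([0.7e-3, 0.7e-3, 1.4e-3, 1.2e-3])
mask = categorize_voxels(rcbv, adc)
```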

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method that first obtains the Q image from the RGB-to-YIQ transform, then finds the lip corner points by change-point detection and splits the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying a threshold to the Q image; for each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance D is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the splitting of the Q image. Second, no color systems other than YIQ were analyzed; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem concerns the selection of the optimal contour: the conventional method uses the maximum of the average feature variance over the pixels near the contour points, and this maximum-of-average criterion shrinks the extracted contour relative to the ground-truth contours. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average feature variance, yielding a 46% performance increase. Combining all of these solutions, the proposed method is twice as accurate and stable as the conventional method.
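The Q-image thresholding step can be sketched with the standard library's YIQ transform; the single threshold below is an illustrative placeholder (the method above searches over multiple thresholds and scores each candidate contour):

```python
import colorsys

def q_channel(rgb_pixels):
    """Q component of the YIQ transform; reddish lip pixels score high."""
    return [colorsys.rgb_to_yiq(r, g, b)[2] for (r, g, b) in rgb_pixels]

def lip_candidates(rgb_pixels, threshold):
    """Threshold Q to flag candidate lip pixels (one of many thresholds
    that a multi-threshold search would try)."""
    return [q > threshold for q in q_channel(rgb_pixels)]

pixels = [(0.8, 0.30, 0.30),   # lip-like red
          (0.9, 0.75, 0.65),   # skin
          (0.2, 0.60, 0.30)]   # background
flags = lip_candidates(pixels, threshold=0.05)
```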

Detection of Gaze Direction for the Hearing-impaired in the Intelligent Space (지능형 공간에서 청각장애인의 시선 방향 검출)

  • Oh, Young-Joon;Hong, Kwang-Jin;Kim, Jong-In;Jung, Kee-Chul
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.333-340
    • /
    • 2011
  • Human-Computer Interaction (HCI) is the study of methods for interaction between humans and computers, merging ergonomics and information technology. The intelligent space, a part of HCI, is an important area for providing effective user interfaces for the disabled, who are alienated from the information-oriented society. In an intelligent space for the disabled, the method of delivering information depends on the type of disability; in this paper, we support only the hearing-impaired. Apart from methods that convey information through direct contact with the hearing-impaired, presenting information at the point the user is gazing at is a very efficient delivery method, which makes gaze direction detection essential. We therefore propose the gaze direction detection method needed to provide residential-life applications to the hearing-impaired. The proposed method detects the region of the user from multi-view camera images, generates horizontal and vertical gaze direction candidates from each camera, and determines the user's gaze direction by comparing the sizes of the candidates. In experiments, the proposed method showed a high detection rate for gaze direction and a high foot-sensing rate for the user's position, demonstrating the feasibility of the scenario for the disabled.

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor intensive, as each frame must be painted. A human is one of the most challenging objects for a high-quality conversion, as the human body is an articulated figure with many degrees of freedom (DOF). In addition, various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human at every frame of a monocular video. First, a 3D template model is matched to a person in the monocular video using a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the use of a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object in the video is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. To handle complex silhouettes and appearances effectively, we introduce a partial depth propagation method based on color segmentation that preserves the detail of the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.
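The partial depth propagation based on color segmentation can be sketched as segment-bounded filling; the per-segment mean fill below is a simplified stand-in for the paper's propagation scheme, so that propagated depth never crosses a color-segment boundary:

```python
import numpy as np

def propagate_depth(depth, known, segments):
    """Fill unknown depths segment by segment: each color segment takes the
    mean of its known depth samples, so propagation stays inside segment
    boundaries (a simplified stand-in for the paper's propagation)."""
    out = depth.copy()
    for seg_id in np.unique(segments):
        in_seg = segments == seg_id
        seg_known = in_seg & known
        if seg_known.any():
            out[in_seg & ~known] = depth[seg_known].mean()
    return out

# Two color segments (0 and 1); zeros in `depth` are missing values
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1]])
depth = np.array([[2.0, 0.0, 5.0, 0.0],
                  [0.0, 2.0, 0.0, 5.0]])
known = depth > 0
filled = propagate_depth(depth, known, segments)
```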

An Automatic Mobile Cell Counting System for the Analysis of Biological Image (생물학적 영상 분석을 위한 자동 모바일 셀 계수 시스템)

  • Seo, Jaejoon;Chun, Junchul;Lee, Jin-Sung
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.39-46
    • /
    • 2015
  • This paper presents an automatic method to detect and count cells in microorganism images in a mobile environment. Cell counting is an important process in biological and pathological image analysis. In the past, cell counting was done manually, which is tedious and time-consuming; moreover, manual counting can lead to inconsistent and imprecise results. An automatic method to detect and count cells in biological images is therefore necessary to obtain accurate and consistent results. The proposed multi-step cell counting method automatically segments the cells in images of cultivated microorganisms and labels them using a topological analysis of the segmented cells. To improve counting accuracy, we adopt the watershed algorithm to separate agglomerated cells from each other and morphological operations to enhance the individual cell objects in the image. The system was developed with mobile environments in mind: cell images can be captured with a mobile phone, and the processed statistical data on the microorganisms can be delivered by mobile devices in a ubiquitous smart space. In experiments, comparing manual counts with the proposed automatic cell counting demonstrates the efficiency of the developed system.
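The labeling-and-counting step can be sketched as connected-component counting on a binary cell mask; the watershed splitting of touching cells used in the paper is omitted in this minimal sketch:

```python
def count_cells(binary):
    """Count 4-connected foreground components in a binary image using
    iterative flood fill; each component is treated as one cell
    (watershed splitting of agglomerated cells omitted)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                count += 1                      # new component found
                stack = [(sy, sx)]
                seen[sy][sx] = True
                while stack:                    # flood-fill the component
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

img = [[1, 1, 0, 0, 1],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [1, 0, 0, 1, 1]]
print(count_cells(img))  # 4 separate components
```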

Urban Object Classification Using Object Subclass Classification Fusion and Normalized Difference Vegetation Index (객체 서브 클래스 분류 융합과 정규식생지수를 이용한 도심지역 객체 분류)

  • Chul-Soo Ye
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.223-232
    • /
    • 2023
  • A widely used method for monitoring land cover with high-resolution satellite images is to classify the images based on the colors of the objects of interest. In urban areas, not only major objects such as buildings and roads but also vegetation such as trees frequently appear in high-resolution satellite images. However, the colors of vegetation objects often resemble those of other objects such as buildings, roads, and shadows, making it difficult to classify objects accurately based solely on color information. In this study, we propose a method that can accurately classify both objects of varied color, such as buildings, and vegetation objects. The proposed method uses the normalized difference vegetation index (NDVI) image, which is useful for detecting vegetation objects, along with the RGB image, and classifies objects into subclasses. The subclass classification results are fused, and the final classification result is generated by combining them with the image segmentation results. In experiments using Compact Advanced Satellite 500-1 imagery, the proposed method, which applies the NDVI and subclass classification together, achieved an overall accuracy of 87.42%, while the subchannel classification technique without the NDVI and the subclass classification technique alone achieved 73.18% and 81.79%, respectively.
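The NDVI used above to separate vegetation objects is computed per pixel as (NIR − Red) / (NIR + Red); a minimal sketch, with an illustrative vegetation threshold rather than one taken from the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel; eps avoids
    division by zero over dark pixels."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.3):
    """Threshold NDVI to flag vegetation; 0.3 is an illustrative cutoff."""
    return ndvi(nir, red) > threshold

# Toy reflectances: healthy vegetation has NIR >> Red
nir = np.array([0.60, 0.40, 0.30])
red = np.array([0.10, 0.35, 0.30])
print(vegetation_mask(nir, red))  # only the first pixel is vegetation
```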