• Image Extraction and Segmentation

A Robust Algorithm for Moving Object Segmentation and VOP Extraction in Video Sequences (비디오 시퀸스에서 움직임 객체 분할과 VOP 추출을 위한 강력한 알고리즘)

  • Kim, Jun-Ki;Lee, Ho-Suk
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.430-441
    • /
    • 2002
  • Video object segmentation is an important component of object-based video coding schemes such as MPEG-4. In this paper, a robust algorithm for segmenting moving objects in video sequences and extracting VOPs (Video Object Planes) is presented. The main contributions are the detection of an accurate object boundary by associating the moving object edge with the spatial object edge, and the generation of the VOP. The algorithm begins with the difference between two successive frames. After the difference image is extracted, an accurate moving object edge is produced using the Canny algorithm and morphological operations. To improve extraction performance, a morphological erosion operation is applied so that only accurate object edges are detected, and the moving object edges between the two images are generated by adjusting the edge size. The paper presents a robust implementation for fast moving object detection that extracts accurate object boundaries in video sequences.
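
A minimal sketch of the kind of pipeline described above (frame differencing, Canny edges, morphological cleanup) using OpenCV; the thresholds and kernel size are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def moving_object_edges(prev_frame, curr_frame,
                        diff_thresh=25, canny_lo=50, canny_hi=150):
    """Combine frame-difference motion evidence with Canny spatial edges."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Difference between two successive frames -> coarse motion mask
    diff = cv2.absdiff(g0, g1)
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Morphological close-then-erode to keep only well-supported motion regions
    kernel = np.ones((3, 3), np.uint8)
    motion = cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel, iterations=2)
    motion = cv2.erode(motion, kernel)

    # Spatial edges of the current frame, restricted to the motion mask,
    # approximate the moving object edge from which a VOP could be filled in
    edges = cv2.Canny(g1, canny_lo, canny_hi)
    return cv2.bitwise_and(edges, motion)
```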

Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주;김진호
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.8 no.2
    • /
    • pp.14-20
    • /
    • 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are more difficult to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and diverse fonts. Both document binarization and character extraction are important steps in recognizing camera-based document images. After converting the color image to a gray-level image, gray-level normalization is used to extract the character region independently of lighting conditions and background. A local adaptive binarization method is then used to separate characters from the background after noise removal. In the character extraction step, horizontal and vertical projections and connected-component information are used to extract text lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents from the ETRI database containing a mix of Hangul, English, symbols, and digits. Encouraging binarization and character extraction results were obtained.
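
A rough Python/OpenCV analogue of the normalization, local adaptive binarization, and horizontal-projection steps; the block size, offset, and minimum line height are assumed values for illustration only.

```python
import cv2
import numpy as np

def extract_text_lines(bgr_image, block_size=31, offset=10, min_line_height=5):
    """Normalize, locally binarize, then find candidate text lines by projection."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)  # gray-level normalization

    # Local adaptive binarization: text becomes white (255) on black
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, block_size, offset)
    binary = cv2.medianBlur(binary, 3)  # simple noise removal

    # Horizontal projection: consecutive rows with foreground form text lines
    row_sum = (binary > 0).sum(axis=1)
    lines, start = [], None
    for r in range(binary.shape[0]):
        if row_sum[r] > 0 and start is None:
            start = r
        elif row_sum[r] == 0 and start is not None:
            if r - start >= min_line_height:
                lines.append((start, r))
            start = None
    if start is not None:
        lines.append((start, binary.shape[0]))
    return binary, lines
```

Vertical projections within each line band would split lines into words and characters in the same way.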

A Study on the Asphalt Road Boundary Extraction Using Shadow Effect Removal (그림자영향 소거를 통한 아스팔트 도로 경계추출에 관한 연구)

  • Yun Kong-Hyun
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.2
    • /
    • pp.123-129
    • /
    • 2006
  • High-resolution aerial color imagery offers great possibilities for extracting geometric and semantic information for spatial data generation. However, shadows cast by buildings and trees in high-density urban areas obscure much of the information in the image, giving rise to potentially inaccurate classification and inexact feature extraction. Although much research has addressed shadow casting, few studies have dealt with extracting features hidden by shadows in aerial color images of urban areas. This paper presents an asphalt road boundary extraction technique that combines information from an aerial color image and LIDAR (LIght Detection And Ranging) data. The following steps are performed to remove shadow effects and extract road boundaries from the image. First, the shadow regions of the aerial color image are precisely located using the LiDAR DSM (Digital Surface Model) and solar positions. Second, shadow regions assumed to be road are corrected by shadow path reconstruction algorithms. After that, asphalt road boundaries are extracted by segmentation and edge detection. Finally, asphalt road boundary lines are extracted as vector data by a vectorization technique. The experimental results showed that this approach is effective and has great potential.
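
Locating shadow regions from a DSM and the solar position amounts to testing, for each cell, whether anything along the sun direction rises above the line of sight to the sun. The sketch below is an illustrative brute-force version with an assumed north-up grid, cell size, and step count; radiometric correction, road segmentation, and vectorization are not shown.

```python
import numpy as np

def shadow_mask(dsm, sun_azimuth_deg, sun_elevation_deg,
                cell_size=1.0, max_steps=200):
    """Mark DSM cells whose view of the sun is blocked by a higher surface."""
    az = np.deg2rad(sun_azimuth_deg)
    el = np.deg2rad(sun_elevation_deg)
    # Step direction toward the sun in (row, col) grid coordinates (north-up grid assumed)
    d_row, d_col = -np.cos(az), np.sin(az)
    rise_per_step = np.tan(el) * cell_size  # height the sight line gains per step

    rows, cols = dsm.shape
    shadow = np.zeros_like(dsm, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            h = dsm[r, c]
            for step in range(1, max_steps):
                rr = int(round(r + d_row * step))
                cc = int(round(c + d_col * step))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                if dsm[rr, cc] > h + rise_per_step * step:
                    shadow[r, c] = True  # blocked: this cell lies in shadow
                    break
    return shadow
```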

Automatic Extraction of Component Window for Auto-Teaching of PCB Assembly Inspection Machines (PCB 조립검사기의 자동티칭을 위한 부품윈도우 자동추출 방법)

  • Kim, Jun-Oh;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.11
    • /
    • pp.1089-1095
    • /
    • 2010
  • We propose an image segmentation method for the auto-teaching system of PCB (Printed Circuit Board) assembly inspection machines. The inspection machine acquires images of all components on the PCB and compares each image with its standard image to find assembly errors such as misalignment, inverse polarity, and tombstoning. The component window, i.e., the area of a component to be acquired by the camera, is one of the teaching data items needed to operate the inspection machine. To reduce the teaching time, we develop an image processing method that extracts the component window automatically from the PCB image. The proposed method segments the component window by excluding the soldering areas as well as the board background. We binarize the input image using the HSI color model because it is difficult to discriminate between components and background in RGB. A linear combination of the binarized images then enhances the component window against the background. Using horizontal and vertical projections of the histogram, we finally obtain the component window. Experimental results are presented to verify the usefulness of the proposed method.
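
The color-based binarization and projection steps could be prototyped roughly as follows with OpenCV (which provides HSV rather than HSI); the saturation/value thresholds and fill ratio are placeholders, not the paper's values.

```python
import cv2
import numpy as np

def component_window(bgr_patch, sat_thresh=60, val_thresh=200, min_fill=0.2):
    """Suppress solder and board background, then bound the component by projections."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)

    # Bright, low-saturation pixels ~ solder; strongly saturated pixels ~ board
    _, solder = cv2.threshold(v, val_thresh, 255, cv2.THRESH_BINARY)
    _, board = cv2.threshold(s, sat_thresh, 255, cv2.THRESH_BINARY)
    foreground = cv2.bitwise_not(cv2.bitwise_or(solder, board))

    # Horizontal / vertical projections of the remaining foreground mask
    col_profile = (foreground > 0).sum(axis=0)
    row_profile = (foreground > 0).sum(axis=1)
    cols = np.where(col_profile > min_fill * foreground.shape[0])[0]
    rows = np.where(row_profile > min_fill * foreground.shape[1])[0]
    if len(cols) == 0 or len(rows) == 0:
        return None
    # (x, y, width, height) of the estimated component window
    return (int(cols[0]), int(rows[0]),
            int(cols[-1] - cols[0] + 1), int(rows[-1] - rows[0] + 1))
```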

Object Extraction Technique Adequate for Radial Shape's RADAR Signal Structure (방사선 레이다 신호 구조에 적합한 물체 추적 기법)

  • 김도현;박은경;차의영
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.7
    • /
    • pp.536-546
    • /
    • 2003
  • We propose an object extraction technique suited to the radial structure of radar signals, for the purpose of implementing the ARPA (Automatic Radar Plotting Aid) installed on vessels. The radar signal data are processed by interpolation and accumulation to obtain a usable image. Objects in the radar image change in shape and size with distance from the center, which makes conventional clustering inadequate. Therefore, this study designs a new elliptical vigilance distance model and adopts it in the ART2 neural network. We show that the proposed clustering method makes it possible to extract objects adaptively and to separate connected objects effectively.
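
A toy version of clustering with a range-dependent elliptical vigilance region, without the full ART2 learning rules the paper adapts, might look like the following; radial_tol and arc_tol are assumed tolerances in metres.

```python
import numpy as np

def cluster_radar_points(points_rtheta, radial_tol=5.0, arc_tol=5.0):
    """Online clustering of (range, bearing) radar returns with an elliptical test.

    A point joins a cluster if it lies inside an ellipse around the cluster centre
    with a fixed radial half-axis and a fixed arc-length half-axis, so the angular
    tolerance shrinks with range, roughly matching how a fixed-size target appears
    at different distances on a radial (PPI) display.
    """
    clusters = []  # each: {"centre": (r, theta), "members": [...]}
    for r, theta in points_rtheta:
        assigned = None
        for c in clusters:
            cr, ct = c["centre"]
            d_rad = (r - cr) / radial_tol
            d_arc = cr * (theta - ct) / arc_tol  # tangential offset in metres
            if d_rad ** 2 + d_arc ** 2 <= 1.0:
                assigned = c
                break
        if assigned is None:
            assigned = {"centre": (r, theta), "members": []}
            clusters.append(assigned)
        assigned["members"].append((r, theta))
        m = np.array(assigned["members"])
        assigned["centre"] = (float(m[:, 0].mean()), float(m[:, 1].mean()))
    return clusters
```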

Block Classification of Document Images Using the Spatial Gray Level Dependence Matrix (SGLDM을 이용한 문서영상의 블록 분류)

  • Kim Joong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.10
    • /
    • pp.1347-1359
    • /
    • 2005
  • We propose an efficient block classification of document images using second-order statistical texture features computed from the spatial gray level dependence matrix (SGLDM). We study techniques that improve the speed of block segmentation and feature extraction and the accuracy of the detailed classification. To speed up block segmentation, we binarize the gray-level image and segment it with a smoothing method instead of using texture features of the gray-level image. We extract seven texture features from the SGLDM of each gray-image block, feed these normalized features to a BP (backpropagation) neural network, and classify the segmented blocks into six detailed categories: small font, medium font, large font, graphic, table, and photo. Unlike conventional texture classification of gray-level aerial terrain photos, we improve the classification speed by a single application of the texture discrimination mask, whose size is the same as that of each block already segmented when obtaining the SGLDM.
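
The SGLDM (co-occurrence) features and backpropagation classifier could be prototyped with scikit-image and scikit-learn as sketched below; the particular seven features, network size, and training setup are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM")

def sgldm_features(block):
    """Second-order texture features of an 8-bit block from its co-occurrence matrix."""
    glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean() for p in PROPS]
    feats.append(block.mean() / 255.0)  # a seventh, simple intensity cue
    return np.array(feats)

def train_block_classifier(blocks, labels):
    """Train a small backpropagation network on per-block texture features.

    `labels` would be the six detailed categories: small/medium/large font,
    graphic, table, photo.
    """
    X = np.stack([sgldm_features(b) for b in blocks])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    return clf
```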

Pulmonary vascular Segmentation and Refinement On the CT Scans (컴퓨터 단층 촬영 영상에서의 폐혈관 분할 및 정제)

  • Shin, Min-Jun;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.3
    • /
    • pp.591-597
    • /
    • 2012
  • Medical device performance has advanced, and images are expected to be acquired with ever higher quality and applicability as imaging becomes increasingly important for analyzing major organs. Since MATLAB has recently been used frequently for image processing in image analysis, this study segments pulmonary vessels using MATLAB. The study consists of three phases: lung region segmentation, pulmonary vessel segmentation, and three-dimensional connectivity assessment. Vessels are segmented from the extracted lung region using a threshold, vessel thickness is measured as a two-dimensional refinement step, and three-dimensional connectivity is assessed as a three-dimensional refinement step. MATLAB-based image processing is expected to contribute to the diversity and reliability of medical image processing, and the results may lay a foundation for research on chest CT images.
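
The paper works in MATLAB; an equivalent thresholding-and-connectivity outline in Python (with an assumed vessel threshold in Hounsfield units and a placeholder minimum component size) might look like this.

```python
import numpy as np
from scipy import ndimage

def segment_vessels(ct_volume_hu, lung_mask, vessel_hu=-500, min_voxels=50):
    """Threshold vessels inside the lung mask, then keep 3D-connected components."""
    # Vessels are denser than lung parenchyma: keep voxels above the threshold
    candidate = (ct_volume_hu > vessel_hu) & lung_mask

    # Three-dimensional connectivity assessment: label 26-connected components
    labels, n = ndimage.label(candidate, structure=np.ones((3, 3, 3)))
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))

    # Refinement: discard tiny components that are likely noise
    keep_ids = np.where(sizes >= min_voxels)[0] + 1
    return np.isin(labels, keep_ids)
```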

Semi-automatic Extraction of 3D Building Boundary Using DSM from Stereo Images Matching (영상 매칭으로 생성된 DSM을 이용한 반자동 3차원 건물 외곽선 추출 기법 개발)

  • Kim, Soohyeon;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1067-1087
    • /
    • 2018
  • In studies of LiDAR-based building boundary extraction, a dense point cloud is usually used to cluster the building rooftop area and extract the building outline. However, when a DSM generated from stereo image matching is used to extract building boundaries, it is not trivial to cluster the rooftop area automatically because of outliers and large holes in the point cloud. We therefore propose a technique to extract building boundaries semi-automatically from a DSM created from stereo images. The technique consists of watershed segmentation that uses user input as markers, followed by a recursive MBR algorithm. Since the proposed method requires only simple marker information indicating building areas within the DSM, it can create building boundaries efficiently while minimizing user input.
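
The marker-driven watershed step could be prototyped with OpenCV as below, treating the DSM as a single-channel image; the marker handling is simplified and the recursive MBR refinement of the resulting outline is not reproduced here.

```python
import cv2
import numpy as np

def building_mask_from_markers(dsm, building_seeds, background_seeds):
    """Watershed on a DSM rendered as an 8-bit image, seeded by user markers."""
    # Normalize the DSM to 8 bits and make a 3-channel image for cv2.watershed
    norm = cv2.normalize(dsm.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
    img3 = cv2.cvtColor(norm.astype(np.uint8), cv2.COLOR_GRAY2BGR)

    # Marker image: 1 = background, 2 = building, 0 = unknown (to be flooded)
    markers = np.zeros(dsm.shape, dtype=np.int32)
    for r, c in background_seeds:
        markers[r, c] = 1
    for r, c in building_seeds:
        markers[r, c] = 2

    cv2.watershed(img3, markers)  # fills labels in place; boundary pixels become -1
    mask = (markers == 2).astype(np.uint8) * 255

    # The outline of this mask would then be regularized, e.g. with a recursive
    # minimum-bounding-rectangle style simplification, to obtain the building boundary
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours
```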

Extraction of Building Boundary on Aerial Image Using Segmentation and Overlaying Algorithm (분할과 중첩 기법을 이용한 항공 사진 상의 빌딩 경계 추출)

  • Kim, Yong-Min;Chang, An-Jin;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.1
    • /
    • pp.49-58
    • /
    • 2012
  • Buildings become more complex and diverse over time. It is difficult to extract individual buildings using only an optical image, because they have spectral characteristics similar to objects such as vegetation and roads. In this study, we propose a method to extract building areas and boundaries by integrating airborne Light Detection and Ranging (LiDAR) data and aerial images. First, a binary edge map is generated using the Edison edge detector after applying an adaptive dynamic range linear stretching radiometric enhancement to the aerial image. Second, building objects are extracted from the normalized Digital Surface Model of the airborne LiDAR data and the aerial image. Temporary building areas are then extracted by overlaying the binary edge map with the building objects extracted from the LiDAR data. Finally, some building boundaries are further refined, considering the positional accuracy between the LiDAR data and the aerial image. The proposed method was applied to two experimental sites for validation. From the error matrix, the F-measure, Jaccard coefficient, Yule coefficient, and overall accuracy were calculated, and all values exceeded 0.85.
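
A rough outline of the overlay step, with Canny standing in for the Edison detector used in the paper and assumed height/area thresholds, is sketched below.

```python
import cv2
import numpy as np

def temporary_building_areas(aerial_gray, ndsm, height_thresh=3.0, min_area=50):
    """Intersect an aerial-image edge map with above-ground objects from the nDSM."""
    # Edge map of the (contrast-stretched) aerial image
    stretched = cv2.normalize(aerial_gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(stretched, 50, 150)

    # Building objects from the normalized DSM: connected regions taller than threshold
    elevated = (ndsm > height_thresh).astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(elevated)
    objects = np.zeros_like(elevated)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            objects[labels == i] = 255

    # Overlay: edges that fall on elevated objects become temporary building areas
    return cv2.bitwise_and(edges, objects)
```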

A Comparison of Deep Reinforcement Learning and Deep learning for Complex Image Analysis

  • Khajuria, Rishi;Quyoom, Abdul;Sarwar, Abid
    • Journal of Multimedia Information System
    • /
    • v.7 no.1
    • /
    • pp.1-10
    • /
    • 2020
  • Image analysis is an important and predominant task for classifying the different parts of an image. The analysis of complex images such as histopathological images is a crucial factor in oncology because it helps pathologists interpret images, and various feature extraction techniques have therefore evolved over time for such analysis. Although deep reinforcement learning is a new and emerging technique, very little effort has been made to compare deep learning and deep reinforcement learning for image analysis. This paper highlights how the two techniques differ in feature extraction from complex images and discusses their potential pros and cons. The use of Convolutional Neural Networks (CNN) in image segmentation, tumour detection and diagnosis, and feature extraction is important, but several challenges must be overcome before deep learning can be applied to digital pathology: the availability of sufficient training examples in medical image datasets, feature extraction from the whole area of the image, ground-truth localized annotations, adversarial effects of input representations, and the extremely large size of digital pathology slides (gigabytes). Formulating Histopathological Image Analysis (HIA) as a Multiple Instance Learning (MIL) problem, in which the histopathological image is divided into high-resolution patches, predictions are made per patch, and the patch predictions are combined into an overall slide prediction, is a remarkable step, but it suffers from a loss of contextual and spatial information. In such cases, deep reinforcement learning techniques can be used to learn features from limited data without losing contextual and spatial information.
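
The MIL formulation mentioned above amounts to tiling the slide, scoring each patch, and pooling the patch scores into a slide-level prediction; the sketch below uses a placeholder `patch_model` and an assumed patch size, and the max pooling at the end is exactly where cross-patch context is discarded.

```python
import numpy as np

def slide_prediction(slide, patch_model, patch_size=256, stride=256):
    """Multiple-instance style aggregation: score patches, pool to a slide label."""
    h, w = slide.shape[:2]
    scores = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patch = slide[top:top + patch_size, left:left + patch_size]
            scores.append(patch_model(patch))  # per-patch tumour probability
    scores = np.array(scores)
    # Max pooling over patches: the slide is called positive if any patch is;
    # spatial relationships between patches are lost at this step
    return float(scores.max()), scores
```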