• Title/Summary/Keyword: Foreground Analysis


Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho;Chung, Gye-Dong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.1
    • /
    • pp.243-248
    • /
    • 2013
  • In this paper, foreground depth for 2D/3D video conversion is analyzed using focus analysis and color-based grouping, and a method for assigning foreground depth from focus and motion information is proposed. A candidate foreground image is generated from the estimated motion of the focus information in order to extract the foreground from the 2D video. The foreground region is then extracted by a color-analysis-based filling process applied to hole areas inside objects in the candidate foreground image. To assign depth to the extracted foreground region, depth information is generated by analyzing the focus values in the current frame, and motion information is applied as a weight. To evaluate the quality of the generated depth information, the results of the proposed method are compared with those of previously proposed algorithms.
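The abstract does not spell out the focus analysis itself; a focus measure commonly used for this purpose is the variance of the Laplacian, where sharper (in-focus) regions score higher. A minimal stdlib-only sketch (the 4-neighbour kernel and the toy patches are illustrative assumptions, not the paper's method):

```python
def laplacian_variance(img):
    """Variance of a Laplacian response: a simple focus measure.
    Higher values indicate sharper (in-focus) regions.
    img: 2D list of grayscale intensities."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: centre weighted 4, neighbours -1
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp (checkerboard) patch scores far higher than a flat one.
sharp = [[(255 if (x + y) % 2 else 0) for x in range(8)] for y in range(8)]
flat = [[128 for _ in range(8)] for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

Computed per block or per object region, such a measure lets depth be assigned from relative sharpness.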

Removing Shadows for the Surveillance System Using a Video Camera (비디오 카메라를 이용한 감시 장치에서 그림자의 제거)

  • Kim, Jung-Dae;Do, Yong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.176-178
    • /
    • 2005
  • In the images of a video camera employed for surveillance, detecting targets by extracting the foreground is of great importance. The detected foreground regions, however, include not only moving targets but also their shadows. This paper presents a novel technique to detect shadow pixels in the foreground image of a video camera. The image characteristics of the cameras employed, a web-cam and a CCD camera, are first analyzed in the HSV color space, and a pixel-level shadow detection technique is proposed based on this analysis. Unlike existing techniques that apply unified criteria to all pixels, the proposed technique determines shadow pixels using the fact that the effect of shadowing on each pixel differs depending on its brightness in the background image. Such an approach can accommodate local features in an image and maintain consistent performance even in changing environments. In experiments targeting pedestrians, the proposed technique showed better results than an existing technique.
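The abstract does not give the paper's exact criteria; a widely used pixel-level HSV shadow test assumes a shadow darkens a pixel by a bounded ratio while leaving hue and saturation nearly unchanged. A sketch with illustrative thresholds:

```python
def is_shadow(fg_hsv, bg_hsv, alpha=0.4, beta=0.9,
              tau_s=0.15, tau_h=30.0):
    """Classic HSV shadow test: a shadow lowers V by a bounded ratio
    while hue and saturation stay close to the background's.
    fg_hsv, bg_hsv: (h, s, v) with h in degrees, s and v in [0, 1].
    All thresholds here are illustrative, not the paper's values."""
    h_f, s_f, v_f = fg_hsv
    h_b, s_b, v_b = bg_hsv
    if v_b == 0:
        return False
    ratio = v_f / v_b
    hue_diff = min(abs(h_f - h_b), 360.0 - abs(h_f - h_b))  # circular hue
    return (alpha <= ratio <= beta
            and abs(s_f - s_b) <= tau_s
            and hue_diff <= tau_h)

# A darkened but otherwise similar pixel is classified as shadow.
print(is_shadow((120.0, 0.50, 0.35), (118.0, 0.52, 0.60)))  # True
# A pixel whose hue and saturation change strongly (a real object) is not.
print(is_shadow((10.0, 0.80, 0.35), (118.0, 0.52, 0.60)))   # False
```

The paper's contribution is to make bounds like `alpha` and `beta` depend on each pixel's background brightness rather than keeping them global.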


Codebook-Based Foreground Extraction Algorithm with Continuous Learning of Background (연속적인 배경 모델 학습을 이용한 코드북 기반의 전경 추출 알고리즘)

  • Jung, Jae-Young
    • Journal of Digital Contents Society
    • /
    • v.15 no.4
    • /
    • pp.449-455
    • /
    • 2014
  • Detection of moving objects is a fundamental task in most computer vision applications, such as video surveillance, activity recognition, and human motion analysis. It is made difficult by many challenges in realistic scenarios, including irregular background motion, illumination changes, cast shadows, changes in scene geometry, and noise. In this paper, we propose a foreground extraction algorithm based on a codebook, a database of information about background pixels obtained from the input image sequence. Initially, the first frame is taken as the background image, and the difference between each subsequent input image and this background is computed to detect moving objects. The resulting difference image may contain noise as well as genuine moving objects. Second, the codebook is searched using the color and brightness of each foreground pixel in the difference image; if a match is found, the pixel is judged to be a false detection and removed from the foreground. Finally, the background image is updated to process the next input frame iteratively: pixels detected as background are re-estimated from the input image, while the others are carried over from the previous background image. We apply our algorithm to the PETS2009 dataset and compare the results with those of GMM and standard codebook algorithms.
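The codebook test described above, match a pixel against stored background codewords by color distance and a brightness range, can be sketched as follows (the distance metric and thresholds are illustrative assumptions, not the paper's exact formulation):

```python
class Codeword:
    """One background codeword: a color vector plus a brightness range."""
    def __init__(self, color, lo, hi):
        self.color, self.lo, self.hi = color, lo, hi

def matches(codeword, pixel, eps=10.0):
    """Match a pixel against a codeword by Euclidean color distance
    and by its brightness falling inside the codeword's range.
    eps is an illustrative color-distance threshold."""
    dist = sum((p - c) ** 2 for p, c in zip(pixel, codeword.color)) ** 0.5
    brightness = sum(pixel) / 3.0
    return dist <= eps and codeword.lo <= brightness <= codeword.hi

def classify(codebook, pixel):
    """A pixel is foreground only if no background codeword matches it."""
    return "background" if any(matches(cw, pixel) for cw in codebook) \
        else "foreground"

codebook = [Codeword((100.0, 100.0, 100.0), 80.0, 120.0)]
print(classify(codebook, (102.0, 98.0, 101.0)))  # background
print(classify(codebook, (30.0, 200.0, 40.0)))   # foreground
```

In the paper's pipeline the matched ("false detection") pixels are removed from the difference mask, and the codebook itself is refreshed as the background is continuously re-learned.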

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping;Zhang, Zhenbin;Sun, Zhe;Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.12
    • /
    • pp.5507-5528
    • /
    • 2016
  • In computer vision, salient object detection is important for extracting useful information about the foreground. With active contour analysis as its core, this paper proposes a bottom-up saliency detection algorithm that combines a Bayesian model with a global color distribution. With the support of the active contour model, a more accurate foreground can be obtained as a foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective correction for the Bayesian model. To obtain a good object contour, we first intensify the object region in the source gray-scale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the Bayesian saliency map with the color distribution. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour extracts a more accurate, possibly non-convex foreground; moreover, through mutual complementation, the global color distribution remedies the scattered-saliency drawback of the Bayesian model. The detected results show that the final saliency maps generated by considering both the global color distribution and the active contour are much improved.
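The global-color-distribution component can be illustrated with a generic color-rarity measure: a pixel is salient when its quantized color is rare across the whole image. This is a simplified stand-in, not the paper's exact formulation:

```python
from collections import Counter

def color_rarity_saliency(img, bins=4):
    """Toy global-color-distribution saliency: quantize colors and score
    each pixel by the rarity of its color in the whole image.
    img: 2D list of (r, g, b) tuples with channels in 0..255."""
    q = 256 // bins
    quantized = [[(r // q, g // q, b // q) for (r, g, b) in row]
                 for row in img]
    counts = Counter(c for row in quantized for c in row)
    total = sum(counts.values())
    # Rare colors get saliency near 1, dominant colors near 0.
    return [[1.0 - counts[c] / total for c in row] for row in quantized]

# A single red pixel in a gray image receives the highest saliency.
img = [[(128, 128, 128)] * 4 for _ in range(4)]
img[2][2] = (255, 0, 0)
sal = color_rarity_saliency(img)
print(sal[2][2] > sal[0][0])  # True
```

In the paper this global cue is weighted against the Bayesian saliency map, with the active contour restricting which colors count as foreground.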

Head Detection based on Foreground Pixel Histogram Analysis (전경픽셀 히스토그램 분석 기반의 머리영역 검출 기법)

  • Choi, Yoo-Joo;Son, Hyang-Kyoung;Park, Jung-Min;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.11
    • /
    • pp.179-186
    • /
    • 2009
  • In this paper, we propose a head detection method based on vertical and horizontal pixel histogram analysis in order to overcome the drawbacks of the previous head detection approach using Haar-like feature-based face detection. In the proposed method, we create vertical and horizontal foreground pixel histogram images from the background subtraction image, which represent the number of foreground pixels at each vertical or horizontal position. We then extract feature points of the head region by applying the Harris corner detector to the foreground pixel histogram images and analyzing the corner points. The proposed method shows robust head detection results even for face images in which the forehead is covered by hair, or for back-view images, in which previous approaches cannot detect the head region.
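The foreground pixel histograms described above are simply per-column and per-row counts over the binary background-subtraction mask. A minimal sketch (the silhouette is a made-up example):

```python
def pixel_histograms(mask):
    """Vertical and horizontal foreground pixel histograms of a binary
    background-subtraction mask (1 = foreground). The vertical histogram
    counts foreground pixels per column, the horizontal one per row."""
    h, w = len(mask), len(mask[0])
    vertical = [sum(mask[y][x] for y in range(h)) for x in range(w)]
    horizontal = [sum(row) for row in mask]
    return vertical, horizontal

# A crude "head on shoulders" silhouette: the head appears as a
# narrow plateau in the vertical histogram, whose corners the
# paper then localizes with the Harris detector.
mask = [
    [0, 0, 1, 1, 0, 0],   # head
    [0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],   # shoulders
    [1, 1, 1, 1, 1, 1],
]
v, hzt = pixel_histograms(mask)
print(v)    # [2, 2, 4, 4, 2, 2]
print(hzt)  # [2, 2, 6, 6]
```

Because the histograms depend only on the silhouette, not on facial features, the method also works on back views.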

The Development of Vehicle Counting System at Intersection Using Mean Shift (Mean Shift를 이용한 교차로 교통량 측정 시스템 개발)

  • Chun, In-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.7 no.3
    • /
    • pp.38-47
    • /
    • 2008
  • A vehicle counting system for intersections is designed and implemented by analyzing the video stream from a camera. To separate the foreground from the background, we compare three different methods, among which Li's method is chosen. Blobs are extracted from the foreground image using connected component analysis and tracked frame by frame. The primary tracker uses only the size and location of each blob in the foreground image; if blobs collide, a mean-shift tracking algorithm based on the color distribution of the blob is used instead. The proposed system is tested on real video data from an intersection. With some heuristics applied, the system shows a good detection rate and a low error rate.
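The mean-shift step can be sketched in one dimension: the tracker repeatedly shifts a window toward the weighted centroid of nearby samples until it converges on a mode. Here the weights stand in for the per-pixel color-similarity scores a blob tracker would compute; the radius and data are illustrative:

```python
def mean_shift(weights, start, radius=2, iters=20):
    """One-dimensional mean-shift sketch: move a window centre toward
    the weighted centroid of the samples inside it until convergence.
    weights[i] plays the role of the colour-similarity weight that a
    mean-shift blob tracker computes per pixel."""
    x = float(start)
    for _ in range(iters):
        c = round(x)
        idx = range(max(c - radius, 0), min(c + radius, len(weights) - 1) + 1)
        total = sum(weights[i] for i in idx)
        if total == 0:
            break  # empty window: nothing to shift toward
        new_x = sum(i * weights[i] for i in idx) / total
        if abs(new_x - x) < 1e-3:
            break  # converged on a mode
        x = new_x
    return round(x)

# The centre climbs from index 2 to the weight peak at index 6.
weights = [0, 0, 1, 2, 3, 5, 9, 5, 2, 0]
print(mean_shift(weights, start=2))  # 6
```

In 2D the same iteration runs over a rectangular window of color-likelihood weights, which is what resolves blob identity after a collision.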


Linkage Analysis of the Three Loci Determining Rind Color and Stripe Pattern in Watermelon

  • Yang, Hee-Bum;Park, Sung-woo;Park, Younghoon;Lee, Gung Pyo;Kang, Sun-Cheol;Kim, Yong Kwon
    • Horticultural Science & Technology
    • /
    • v.33 no.4
    • /
    • pp.559-565
    • /
    • 2015
  • The rind phenotype of watermelon fruits is an important agronomic characteristic in the watermelon market. Inheritance and linkage analyses were performed for three rind-related traits that together determine the rind phenotype: foreground stripe pattern, rind background color, and depth of rind color. The inheritance of the foreground stripe pattern was analyzed using three different $F_2$ populations, showing that the striped pattern is dominant over the non-striped pattern. The inheritance of the rind background color was analyzed using $F_2$ populations derived from '10909' and '109905', and the depth of rind color was analyzed using $F_2$ populations derived from '90509' and '109905'. Yellow color was found to be dominant over green color, and deep color was dominant over standard color. Linkage analysis of the three traits was conducted using three $F_2$ populations in which two traits were segregating. Each pair of traits was inherited independently, demonstrating that the three traits are not linked. Therefore, we propose a three-locus model for the determination of rind phenotype, providing the novel insight that rind phenotype is determined by the combination of three genetically independent loci.
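The dominance inferences above rest on $F_2$ segregation ratios, which are conventionally checked with a chi-square goodness-of-fit test against the expected 3:1 Mendelian ratio. A minimal sketch with made-up counts (the paper's actual counts are not given in the abstract):

```python
def chi_square_3_to_1(dominant, recessive):
    """Chi-square goodness-of-fit against a 3:1 F2 segregation ratio."""
    n = dominant + recessive
    exp_dom, exp_rec = 3 * n / 4, n / 4
    return ((dominant - exp_dom) ** 2 / exp_dom
            + (recessive - exp_rec) ** 2 / exp_rec)

# 152 striped : 48 non-striped (hypothetical counts), n = 200.
# Expected 150 : 50; the statistic is far below the critical value
# 3.84 (df = 1, p = 0.05), so the 3:1 ratio is not rejected.
chi2 = chi_square_3_to_1(152, 48)
print(round(chi2, 3))  # 0.107
```

The linkage test works analogously, comparing two-trait $F_2$ counts against the 9:3:3:1 ratio expected under independent assortment.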

Background Subtraction in Dynamic Environment based on Modified Adaptive GMM with TTD for Moving Object Detection

  • Niranjil, Kumar A.;Sureshkumar, C.
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.1
    • /
    • pp.372-378
    • /
    • 2015
  • Background subtraction is the first processing stage in video surveillance. It is a general term for a process that aims to separate foreground objects from the background; the goal is to construct and maintain a statistical representation of the scene that the camera sees, whose output serves as input to higher-level processes. Background subtraction in dynamic environments is one such complex task and an important research topic in image analysis and computer vision. This work deals with background modeling based on a modified adaptive Gaussian mixture model (GMM) combined with a three temporal differencing (TTD) method in dynamic environments. The results of background subtraction on several sequences in various testing environments show that the proposed method is efficient and robust for dynamic environments and achieves good accuracy.
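The abstract does not detail the TTD step; three-frame temporal differencing is commonly implemented by AND-ing two successive difference masks, which suppresses the "ghost" a two-frame difference leaves at an object's old position. A minimal grayscale sketch (the threshold is illustrative):

```python
def three_frame_diff(prev, curr, nxt, thresh=25):
    """Three temporal differencing: a pixel counts as moving only if it
    differs from BOTH the previous and the next frame, suppressing the
    ghosting produced by plain two-frame differencing."""
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(curr[y][x] - prev[y][x]) > thresh
            d2 = abs(nxt[y][x] - curr[y][x]) > thresh
            mask[y][x] = 1 if (d1 and d2) else 0
    return mask

# An object present only in the middle frame is flagged as moving.
prev = [[0, 0, 0]]
curr = [[0, 200, 0]]
nxt = [[0, 0, 0]]
print(three_frame_diff(prev, curr, nxt))  # [[0, 1, 0]]
```

In the paper this differencing mask is combined with the adaptive GMM's per-pixel background model to cope with dynamic scenes.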

A Technique to Detect the Shadow Pixels of Moving Objects in the Images of a Video Camera (비디오 카메라 영상 내 동적 물체의 그림자 화소 검출 기법)

  • Park Su-Woo;Kim Jungdae;Do Yongtae
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.10
    • /
    • pp.1314-1321
    • /
    • 2005
  • In video surveillance and monitoring (VSAM), extracting the foreground by detecting moving regions is the most fundamental step. The extracted foreground, however, includes not only objects in motion but also their shadows, which may cause errors in subsequent video processing steps. To remove the shadows, this paper presents a new technique for determining shadow pixels in the foreground image of a VSAM camera system. Unlike existing techniques that apply unified decision criteria to all pixels, the proposed technique uses the fact that the effect of shadowing on each pixel differs depending on its brightness in the background image. Such an approach can easily accommodate local features in an image and maintain consistent performance even in changing environments. In real experiments, the proposed technique showed better results than an existing technique.
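The key idea, decision criteria that vary with each pixel's background brightness, can be sketched as a brightness-dependent bound on how strongly a shadow may darken a pixel. The linear model and constants below are illustrative assumptions, not the paper's fitted values:

```python
def shadow_bounds(bg_brightness):
    """Per-pixel shadow bounds that adapt to background brightness:
    a bright background pixel may be darkened more strongly by a
    shadow than a dark one. Linear model and constants illustrative."""
    # Minimum allowed brightness ratio shrinks as the background brightens.
    alpha = max(0.3, 0.7 - 0.3 * (bg_brightness / 255.0))
    beta = 0.95
    return alpha, beta

def is_shadow_pixel(fg_brightness, bg_brightness):
    """Shadow if the darkening ratio falls inside the per-pixel bounds."""
    if bg_brightness == 0:
        return False
    alpha, beta = shadow_bounds(bg_brightness)
    return alpha <= fg_brightness / bg_brightness <= beta

# The same 50% darkening is accepted as shadow on a bright background
# pixel but rejected on a dark one, because the bounds adapt per pixel.
print(is_shadow_pixel(100, 200))  # True
print(is_shadow_pixel(25, 50))    # False
```

A fixed global ratio bound, by contrast, would classify both cases identically, which is the limitation the paper targets.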


Searching Spectrum Band of Crop Area Based on Deep Learning Using Hyper-spectral Image (초분광 영상을 이용한 딥러닝 기반의 작물 영역 스펙트럼 밴드 탐색)

  • Gwanghyeong Lee;Hyunjung Myung;Deepak Ghimire;Donghoon Kim;Sewoon Cho;Sunghwan Jeong;Bvouneiun Kim
    • Smart Media Journal
    • /
    • v.13 no.8
    • /
    • pp.39-48
    • /
    • 2024
  • Recently, various studies have used hyperspectral imaging for crop growth analysis and early disease diagnosis. However, handling the large number of spectral bands, or finding the bands optimal for the crop area, remains a difficult problem. In this paper, we propose a deep-learning-based method for searching the optimal spectral bands of the crop area in a hyperspectral image. The proposed method extracts RGB images from the hyperspectral images and segments background and foreground areas with a Vision Transformer-based Seformer. The segmentation results are projected onto each band of the gray-scale-converted hyperspectral image, and the optimal spectral bands of the crop area are determined by pixel-wise comparison of the projected foreground and background areas. The proposed method achieved foreground/background segmentation performance with an average accuracy of 98.47% and an mIoU of 96.48%. In addition, it was confirmed that, compared with the mRMR method, the proposed method converges to the NIR region, which is closely related to the crop area.
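The band-search step, comparing projected foreground and background pixels per band, can be sketched by scoring each band on the separation between its foreground and background mean intensities. This is a simple separability proxy; the paper's exact criterion is not given in the abstract, and the tiny cube below is a made-up example:

```python
def best_band(cube, mask):
    """Pick the spectral band whose foreground and background mean
    intensities are farthest apart.
    cube[b][y][x]: intensity of band b at (y, x);
    mask[y][x]: 1 for crop (foreground), 0 for background."""
    def score(band):
        fg = [band[y][x] for y in range(len(mask))
              for x in range(len(mask[0])) if mask[y][x]]
        bg = [band[y][x] for y in range(len(mask))
              for x in range(len(mask[0])) if not mask[y][x]]
        return abs(sum(fg) / len(fg) - sum(bg) / len(bg))
    scores = [score(b) for b in cube]
    return scores.index(max(scores))

# Band 1 separates crop from background far better than band 0,
# mimicking the strong NIR response of vegetation.
mask = [[1, 0], [1, 0]]
cube = [
    [[10, 12], [11, 13]],     # band 0: almost no contrast
    [[200, 20], [210, 25]],   # band 1: strong crop response
]
print(best_band(cube, mask))  # 1
```

In the paper the mask comes from the Vision Transformer segmentation of the RGB rendering, so the search needs no per-band ground truth.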