• Title/Summary/Keyword: blob segmentation


Light Source Target Detection Algorithm for Vision-based UAV Recovery

  • Won, Dae-Yeon;Tahk, Min-Jea;Roh, Eun-Jung;Shin, Sung-Sik
    • International Journal of Aeronautical and Space Sciences, v.9 no.2, pp.114-120, 2008
  • In the vision-based recovery phase, terminal guidance of a blended-wing UAV requires highly accurate visual information. This paper presents a light-source target design and a detection algorithm for vision-based UAV recovery. We propose a recovery target built from red and green LEDs; this frame provides the relative position between the target and the UAV. The target detection algorithm consists of HSV-based segmentation, morphological filtering, and blob processing, which together give reliable detection results in both daytime and nighttime net-recovery operations. The performance of the proposed target design and detection algorithm is evaluated through ground-based experiments.
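The HSV segmentation, morphology, and blob processing steps named in this abstract can be illustrated with a short OpenCV sketch. The color thresholds and minimum blob area below are placeholder assumptions, not values taken from the paper.

```python
import cv2

def detect_led_blobs(frame_bgr):
    """Illustrative HSV segmentation / morphology / blob pipeline for
    red and green LED targets (thresholds are assumed, not the paper's)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # HSV-based segmentation of red and green LED candidates.
    red = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    green = cv2.inRange(hsv, (45, 120, 120), (75, 255, 255))

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    detections = []
    for color, mask in (("red", red), ("green", green)):
        # Morphological opening removes isolated noise pixels.
        clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # Blob processing: connected components and their centroids.
        n, _, stats, centroids = cv2.connectedComponentsWithStats(clean)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] >= 20:  # ignore tiny blobs
                detections.append((color, tuple(centroids[i])))
    return detections
```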

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications, v.9 no.1, pp.57-64, 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that do not rely on physical devices. One of them is vision-based gesture recognition, which this paper addresses. We define a set of gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from the low-resolution hand images captured by a webcam. Skin color is modeled by two Gaussian distributions in RGB color space, and a blob-matching method is used to detect and track the hands. Applying a flood-fill algorithm, we extract the hand silhouettes and recognize the Thumb-Up, Palm, and Cross hand shapes by detecting and analyzing their modes. Then, by analyzing the context of the hand movement, we recognize five predefined one-hand or two-hand gestures. Assuming that a single main user appears for accurate hand detection, the proposed gesture recognition method has demonstrated its efficiency and accuracy in many real-time demos.
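A minimal sketch of the two-Gaussian RGB skin-color model mentioned in the abstract is shown below; the means, covariances, and likelihood threshold are illustrative placeholders, not the parameters fitted in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two-Gaussian skin-color model in RGB space (parameters are assumed,
# not the distributions fitted in the paper).
SKIN_MODEL = [
    multivariate_normal(mean=[180, 120, 100], cov=np.diag([400, 300, 300])),
    multivariate_normal(mean=[140, 90, 75], cov=np.diag([500, 350, 350])),
]

def skin_mask(rgb_image, threshold=1e-6):
    """Mark pixels whose likelihood under either Gaussian exceeds the threshold."""
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    likelihood = sum(g.pdf(pixels) for g in SKIN_MODEL)
    return (likelihood > threshold).reshape(rgb_image.shape[:2])
```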

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society, v.10 no.6, pp.716-725, 2007
  • In this paper, we describe a motion-tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, after which blob analysis is used to detect robust shapes. When the two hands or two feet cross at some position and then separate, an adaptive algorithm recognizes which is the left one and which is the right one. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and cannot recover the distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion, including left-right motion and the distance of objects from the camera. This requires a depth value in addition to the x- and y-axis coordinates of the mono image in order to obtain 3D coordinates. The depth value (z axis) is calculated from the stereo disparity, using only the end effectors of the images. The positions of the inner joints are then calculated, and the 3D character can be visualized using inverse kinematics.
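The depth computation from stereo disparity described above follows the standard rectified-pair pinhole relation Z = f·B/d. A minimal sketch, with example numbers that are assumptions rather than the paper's calibration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation Z = f * B / d for a rectified camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example (assumed numbers): a 12-pixel disparity with a 700-pixel focal
# length and a 10 cm baseline gives Z = 700 * 0.10 / 12 ≈ 5.8 m.
```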


A Study on the Surveillance System of Multiple Object's Dangerous Behaviors (다중 객체의 위험 행동 감시 시스템 연구)

  • Shim, Young-Bin;Park, Hwa-Jin
    • Journal of Digital Contents Society, v.14 no.4, pp.455-462, 2013
  • This paper proposes a detection system that determines whether a dangerous act is being carried out among pedestrians in images captured by CCTV, issues early warnings, and triggers emergency measures. To detect a dangerous act, zones of interest and danger zones within them are defined, and the danger level is determined according to how far a detected object encroaches on these zones. In particular, this research aims at detecting a suicide jump from a bridge, and it extends previous work, which handled only a single person in an otherwise empty scene, to detecting dangerous acts among multiple pedestrians. The system classifies the status into three levels, safe, alert, and danger, according to how much of the body is over the bridge railing. If a situation is judged to be at the alert or danger level, the integrated control center is immediately notified so that the incident can be prevented in advance.
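The three-level safe/alert/danger classification could be sketched as a simple overlap test between a tracked object's bounding box and the configured danger zone. The ratio thresholds below are assumptions for illustration, not values reported in the paper.

```python
def danger_level(object_box, danger_zone, alert_ratio=0.2, danger_ratio=0.5):
    """Classify 'safe'/'alert'/'danger' by the fraction of the tracked blob
    that lies inside the danger zone (thresholds are illustrative)."""
    ox1, oy1, ox2, oy2 = object_box
    zx1, zy1, zx2, zy2 = danger_zone
    iw = max(0, min(ox2, zx2) - max(ox1, zx1))   # intersection width
    ih = max(0, min(oy2, zy2) - max(oy1, zy1))   # intersection height
    area = max(1, (ox2 - ox1) * (oy2 - oy1))
    ratio = (iw * ih) / area
    if ratio >= danger_ratio:
        return "danger"
    if ratio >= alert_ratio:
        return "alert"
    return "safe"
```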

Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE, v.21 no.4, pp.412-415, 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a part that converts the 360° circular image into planar images and a multi license plate recognition part. The planar conversion part turns the circular image into planar images of improved quality through circular image acquisition, circular image segmentation, conversion to a plane image, pixel correction using color interpolation, color correction, and edge correction inside the high-resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes the plates in the planar images by extracting multiple plate candidate regions, normalizing and restoring the candidate regions, and recognizing the plate numbers and characters with a neural network. To evaluate the multi license plate recognition system using the proposed high-resolution 360° omnidirectional IP camera, experiments were carried out with a specialist in the operation of intelligent parking control systems, and a high plate recognition rate of 97.8% was confirmed.
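The conversion of the 360° circular image into a planar panorama can be approximated with OpenCV's polar unwarping, as in the sketch below. The center, radius, and output resolution are assumed parameters, not the camera's calibration, and the paper's additional color and edge corrections are omitted.

```python
import cv2

def unwrap_circular_image(circular_bgr, center, max_radius,
                          radial_res=480, angular_res=2048):
    """Map a 360° circular frame to a rectangular panorama strip."""
    # warpPolar places radius along x and angle along y in its output.
    polar = cv2.warpPolar(circular_bgr, (radial_res, angular_res), center,
                          max_radius, cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    # Rotate so the panorama reads left to right over the full 360 degrees.
    return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)
```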

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.5, pp.312-321, 2003
  • In this paper, we implement a robot that is able to recognize obstacles and move automatically to a destination. We present two results: a hardware implementation of an image processing board and a software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have studied this self-controlled mobile robot system, equipped with a CCD camera, for a long time. The system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered so that the robot moves along the planned path. The distance the robot is supposed to move is calculated from the absolute coordinates and the coordinates of the target spot, and the image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control uses two vision algorithms: obstacle avoidance and path planning. The first algorithm works on cells, parts of the image obtained by blob analysis. The input image is improved by preprocessing consisting of filtering, edge detection, NOR converting, and thresholding; the main image processing then includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame has gone through preprocessing (edge detection, converting, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that obstacles appear as sectional diagrams, as if they were walls, so there is no variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and non-uniform regions of the waveforms and define a period of uniform waveform as an obstacle region. The algorithm is very useful for the robot to move while avoiding obstacles.
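The column-histogram idea sketched in this abstract, sampling the vertical histogram of the binarized image every 20 pixels and treating flat (uniform) stretches as obstacle regions, might look like the following sketch; the uniformity tolerance is an assumed value, not one from the paper.

```python
import numpy as np

def obstacle_regions(binary_image, step=20, uniform_tol=5):
    """Sample the vertical (column) histogram every `step` pixels and return
    the x-ranges where neighbouring samples stay nearly constant, which the
    abstract associates with wall-like obstacles (tolerance is assumed)."""
    col_counts = (binary_image > 0).sum(axis=0)           # vertical histogram
    samples = col_counts[::step].astype(int)
    flat = np.abs(np.diff(samples)) <= uniform_tol        # uniform neighbours
    regions, start = [], None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i * step
        elif not is_flat and start is not None:
            regions.append((start, i * step))
            start = None
    if start is not None:
        regions.append((start, (len(samples) - 1) * step))
    return regions
```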