• Title/Summary/Keyword: Color-based Vision System

Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System (능동 스테레오 비젼을 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구)

  • 이희명;이수희;이병룡;양순용;안경관
    • Proceedings of the Korean Society of Precision Engineering Conference / 2003.06a / pp.915-919 / 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses the geometrical information derived from the active stereo-vision system mounted on the mobile robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth information of the target object can be calculated. The proposed fuzzy motion controller is used to calculate the tracking velocity and angular position of the mobile robot, which makes the mobile robot keep following the object at a constant distance and orientation. (A simplified sketch of the underlying stereo depth geometry follows this entry.)

  • PDF
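
The geometric core of the approach above is recovering the target's depth and bearing from the two camera views. A minimal sketch, assuming an idealized parallel-axis stereo pair; the focal length, baseline, and pixel coordinates below are made-up parameters, and the paper's fuzzy motion controller is not reproduced here:

```python
import math

def stereo_target_geometry(u_left, u_right, u_center, focal_px, baseline_m):
    """Estimate depth and bearing of a target from an idealized parallel stereo pair.

    u_left, u_right : horizontal pixel coordinates of the target in each image
    u_center        : horizontal pixel coordinate of the image center
    focal_px        : focal length in pixels
    baseline_m      : distance between the two cameras in meters
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity; target too far or mismatched")
    depth = focal_px * baseline_m / disparity          # classic Z = f * B / d
    # Bearing of the target relative to the optical axis (using the left camera).
    bearing = math.atan2(u_left - u_center, focal_px)  # radians, positive = target to the right
    return depth, bearing

# Hypothetical numbers for illustration only.
z, theta = stereo_target_geometry(u_left=372.0, u_right=340.0,
                                  u_center=320.0, focal_px=700.0, baseline_m=0.12)
print(f"depth ~ {z:.2f} m, bearing ~ {math.degrees(theta):.1f} deg")
```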

Computer Vision-based Method to Detect Fire Using Color Variation in Temporal Domain

  • Hwang, Ung;Jeong, Jechang;Kim, Jiyeon;Cho, JunSang;Kim, SungHwan
    • Quantitative Bio-Science / v.37 no.2 / pp.81-89 / 2018
  • It is commonplace that high false detection rates interfere with prompt vision-based fire monitoring systems. To circumvent this challenge, we propose a fire detection algorithm that accommodates color variations of RGB in the temporal domain, aiming at reducing false detection rates. Despite interfering factors in the images (e.g., background noise and sudden intervention), the proposed method proves robust in capturing distinguishable features of fire in the temporal domain. In numerical studies, we carried out extensive real-data experiments related to fire detection using 24 video sequences, indicating that the proposed algorithm serves as an effective decision rule for fire detection (e.g., false detection rate < 10%).
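
A minimal sketch of the kind of temporal color-variation cue described above: per-pixel variance of the red channel over a sliding window of frames, combined with a crude fire-color mask. All thresholds and the chromatic rule are assumptions for illustration, not the paper's values:

```python
import numpy as np

def fire_candidate_mask(frames, var_thresh=80.0):
    """frames: list/array of HxWx3 uint8 RGB frames from a short temporal window."""
    stack = np.stack(frames).astype(np.float32)        # (T, H, W, 3)
    r, g, b = stack[..., 0], stack[..., 1], stack[..., 2]
    # Crude chromatic rule for fire-like pixels (R dominant and bright) on the last frame.
    color_mask = (r[-1] > 180) & (r[-1] > g[-1]) & (g[-1] > b[-1])
    # Flicker cue: fire pixels show strong temporal variation of the red channel.
    temporal_var = r.var(axis=0)                        # (H, W)
    return color_mask & (temporal_var > var_thresh)

# Usage with synthetic data (replace with consecutive video frames in practice).
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(24)]
mask = fire_candidate_mask(frames)
print("candidate fire pixels:", int(mask.sum()))
```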

Road Extraction Based on Random Forest and Color Correlogram (랜덤 포레스트와 칼라 코렐로그램을 이용한 도로추출)

  • Choi, Ji-Hye;Song, Gwang-Yul;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.4 / pp.346-352 / 2011
  • This paper presents a system for extracting roads from traffic images captured by a single camera. The road in the images is subject to large changes in appearance because of environmental effects. The proposed system is based on the integration of color correlograms and a random forest. The color correlogram captures the spatial color properties of an image, and with the random forest, road extraction is formulated as a learning problem. The combined effect of color correlograms and the random forest creates a robust system capable of extracting the road in highly changeable situations.
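
A rough sketch of the feature/classifier pairing named above: a simplified color auto-correlogram (the probability that a pixel at distance d has the same quantized color) fed to a random forest. The quantization levels, distances, and synthetic training patches are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def auto_correlogram(rgb, levels=4, distances=(1, 3, 5)):
    """Simplified color auto-correlogram of an RGB patch (HxWx3, uint8)."""
    q = (rgb // (256 // levels)).astype(np.int32)
    idx = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]  # quantized color index
    n_colors = levels ** 3
    feat = []
    for d in distances:
        same = (idx[:, :-d] == idx[:, d:])          # horizontal neighbors at distance d
        for c in range(n_colors):
            at_c = (idx[:, :-d] == c)
            feat.append(same[at_c].mean() if at_c.any() else 0.0)
    return np.array(feat)

# Toy training: near-uniform "road" patches vs. random-noise "non-road" patches (synthetic labels).
road = [np.clip(np.random.normal(90, 10, (32, 32, 3)), 0, 255).astype(np.uint8) for _ in range(20)]
other = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(20)]
X = np.array([auto_correlogram(p) for p in road + other])
y = np.array([1] * 20 + [0] * 20)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("predicted label of a road-like patch:", clf.predict([auto_correlogram(road[0])])[0])
```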

Analysis of Requirements for Night Vision Imaging System (야시조명계통 요구도 분석)

  • Kwon, Jong-Kwang;Lee, Dae-Yearl;Kim, Whan-Woo
    • Journal of the Korea Institute of Military Science and Technology / v.10 no.3 / pp.51-61 / 2007
  • This paper concerns the requirements analysis for the night vision imaging system (NVIS), whose purpose is to intensify the available nighttime near-infrared (IR) radiation sufficiently for it to be perceived by the human eye on a miniature green phosphor screen. The requirements for NVIS include NVIS radiance (NR), chromaticity, daylight legibility/readability, etc. The NR is a quantitative measure of the night vision goggle (NVG) compatibility of a light source as viewed through the goggles. The chromaticity is the quality of a color as determined by its purity and dominant wavelength. The daylight legibility/readability is the degree to which words are readable based on appearance, and a measure of an instrument's ability to display incremental changes in its output value. In this paper, the requirements of NR, chromaticity, and daylight legibility/readability for Type I and Class B/C NVIS are analyzed, and the rationale behind these requirements is presented.
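
Chromaticity requirements of the kind mentioned above are commonly checked in a chromaticity coordinate space. A minimal sketch of computing CIE 1931 (x, y) coordinates from tristimulus values and testing them against a tolerance box; the measurement and tolerance values below are placeholders, not limits from the NVIS requirements discussed in the paper:

```python
def cie_xy(X, Y, Z):
    """CIE 1931 chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

def within_box(x, y, x_range, y_range):
    """Check a measured chromaticity against a rectangular tolerance region."""
    return x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]

# Hypothetical measurement of a green-emitting display element.
x, y = cie_xy(X=12.0, Y=30.0, Z=8.0)
# Placeholder tolerance box; real NVIS limits come from the applicable specification.
print(f"x={x:.3f}, y={y:.3f}, pass={within_box(x, y, (0.20, 0.32), (0.50, 0.70))}")
```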

Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.1 / pp.39-46 / 2006
  • We propose a global-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an intelligent space, a human-robot coexistent system. An intelligent space is a space where many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed. Human beings can be a part of the intelligent space as well. One of the main goals of the intelligent space is to assist humans and to provide various services for them. To be capable of doing that, the intelligent space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed on a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves throughout the space, and the intelligent space should determine the appropriate one. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the intelligent space. First, we discuss how object color information is obtained and how the color appearance model is constructed from these data. Then, we discuss the global color model built from the local color information. The process of learning within the global model and the experimental results are also presented.
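
A minimal sketch of one way to realize the local-to-global color appearance idea described above: each camera builds a normalized hue histogram of the tracked person (a local model), the global model averages them, and re-identification compares a new observation against the global model. The bin count, the Bhattacharyya test, and the distance threshold are illustrative assumptions, not the paper's specification:

```python
import cv2
import numpy as np

def local_color_model(bgr_patch, bins=32):
    """Normalized hue histogram of one camera's view of the person."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, None, alpha=1.0, norm_type=cv2.NORM_L1)

def global_color_model(local_models):
    """Combine per-camera local models into one global appearance model."""
    return np.mean(np.stack(local_models), axis=0)

def matches(global_model, candidate_patch, max_distance=0.4):
    """Bhattacharyya distance between the global model and a new observation."""
    d = cv2.compareHist(global_model.astype(np.float32),
                        local_color_model(candidate_patch), cv2.HISTCMP_BHATTACHARYYA)
    return d < max_distance, d

# Usage with synthetic patches standing in for person crops from three cameras.
crops = [np.random.randint(0, 256, (64, 32, 3), dtype=np.uint8) for _ in range(3)]
g = global_color_model([local_color_model(c) for c in crops])
ok, dist = matches(g, crops[0])
print("same person?", ok, "distance:", round(float(dist), 3))
```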

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • It is very important to extract the expression data and capture a face image from a video for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system, which extracts and tracks a face and its expression data from real-time video inputs. The procedure of our system consists of three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information for extracting the eye and lip data related to facial expression, and extract 10 feature points from the eye and lip areas considering the FAPs defined in MPEG-4. Then, we track the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system could track the expression data at about 8 fps.
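
A compressed sketch of the pipeline described above using standard OpenCV building blocks: YCrCb skin thresholding, Haar-cascade face verification, and CamShift-style tracking on a color-probability (back-projection) image. The threshold values and cascade file are common defaults, not the paper's exact parameters:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(bgr):
    """Skin-color candidate mask in YCrCb, then Haar-based verification."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common YCrCb skin bounds
    candidates = cv2.bitwise_and(bgr, bgr, mask=skin)
    faces = face_cascade.detectMultiScale(cv2.cvtColor(candidates, cv2.COLOR_BGR2GRAY), 1.1, 4)
    return faces[0] if len(faces) else None                     # (x, y, w, h) or None

def track(bgr, window, hue_hist):
    """One CamShift step on the hue back-projection (color probability) image.
    hue_hist is a float32 hue histogram built from the detected face/feature region."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hue_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.CamShift(backproj, tuple(window), crit)
    return window

# In a real loop: detect once, build a hue histogram of the face region,
# then call track() on every subsequent frame to follow the feature region.
```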

A Vision Based Guideline Interpretation Technique for AGV Navigation (AGV 운행을 위한 비전기반 유도선 해석 기술)

  • Byun, Sungmin;Kim, Minhwan
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1319-1329 / 2012
  • AGVs are increasingly utilized nowadays, and magnetically guided AGVs are the most widely used because of their low cost and high speed. However, this type of AGV requires a high infrastructure-building cost and offers poor flexibility when the navigation path layout changes. It is therefore hard to apply this type of AGV to a small-quantity batch production system or a cooperative production system with many AGVs. In this paper, we propose a vision-based guideline interpretation technique that uses cheap, easily installed and changed color tape (or paint) as a guideline, so that a vision-based AGV following color tape is effectively applicable to such production systems. For easy setting and changing of AGV navigation paths, we suggest an automatic method for interpreting a complex guideline layout including multiple branches and joins of branches. We also suggest a trace-direction decision method for stable navigation of AGVs. Through several real-time navigation tests with an industrial AGV equipped with the suggested technique, we confirmed that the technique is practically and stably applicable in real industrial fields.
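
A minimal sketch of the color-tape extraction step implied above: threshold the tape color in HSV, find the guideline contour, and fit a line whose angle gives a trace direction. The HSV range for the tape and the noise-rejection area are assumed example values; the paper's branch/join interpretation is not reproduced here:

```python
import cv2
import numpy as np

def guideline_direction(bgr, lo=(20, 80, 80), hi=(35, 255, 255)):
    """Segment a yellow-ish guide tape and return its direction angle in degrees,
    or None if no tape-like region is found. HSV bounds are illustrative."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    tape = max(contours, key=cv2.contourArea)           # largest blob = guideline segment
    if cv2.contourArea(tape) < 200:                      # reject small noise blobs
        return None
    vx, vy, _, _ = cv2.fitLine(tape, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return float(np.degrees(np.arctan2(vy, vx)))         # angle of the tape in the image plane

# Usage: feed camera frames; branch/join handling would sit on top of this step.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[:, 150:170] = (0, 255, 255)                        # synthetic vertical yellow tape
print("tape angle (deg):", guideline_direction(frame))
```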

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems and intelligent transport systems (ITSs). In particular, the idea of a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, and can visually track multiple targets in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent). We therefore solve the matching problem for the identity of a target during handover with a protocol-based approach, and propose the identified contract net (ICN) protocol for this purpose. The ICN protocol is not only independent of the number of vision agents but also requires no calibration between vision agents, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed for experiments; the system showed reliable results, and the ICN protocol operated successfully through several experiments. (A toy sketch of a contract-net style handover exchange follows this entry.)

  • PDF
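
The ICN protocol itself is not specified in the abstract above; the sketch below is only a toy contract-net style handover exchange (announce, bid, award) between vision agents, to illustrate the general protocol-based identity handover idea. All class names, message shapes, and the bid scoring are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class VisionAgent:
    name: str
    targets: dict = field(default_factory=dict)      # target_id -> color signature

    def can_see(self, color_signature) -> float:
        """Bid score in [0, 1]; a real agent would score how well the color signature
        matches its current observation. Here we fake it: fewer targets -> higher bid."""
        return 1.0 / (1 + len(self.targets))

    def announce_handover(self, target_id, color_signature, neighbors):
        """Contract-net style: announce the task, collect bids, award to the best bidder."""
        bids = [(agent.can_see(color_signature), agent) for agent in neighbors]
        score, winner = max(bids, key=lambda b: b[0])
        if score <= 0:
            return None                               # nobody can take over; target is lost
        winner.targets[target_id] = color_signature   # award: winner continues tracking
        self.targets.pop(target_id, None)             # manager releases the target identity
        return winner.name

# Toy usage: agent cam-A hands target "person-7" (with its color signature) to a neighbor.
a, b, c = VisionAgent("cam-A"), VisionAgent("cam-B"), VisionAgent("cam-C")
a.targets["person-7"] = {"hue_hist": [0.1, 0.6, 0.3]}
print("handed over to:", a.announce_handover("person-7", a.targets["person-7"], [b, c]))
```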

A Tracking-by-Detection System for Pedestrian Tracking Using Deep Learning Technique and Color Information

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Journal of Information Processing Systems / v.15 no.4 / pp.1017-1028 / 2019
  • Pedestrian tracking is a particular object tracking problem and an important component in various vision-based applications, such as autonomous cars and surveillance systems. After several years of development, pedestrian tracking in videos remains challenging, owing to the diversity of object appearances and surrounding environments. In this research, we propose a tracking-by-detection system for pedestrian tracking, which incorporates a convolutional neural network (CNN) and color information. Pedestrians in video frames are localized using a CNN-based algorithm, and detected pedestrians are then assigned to their corresponding tracklets based on similarities between color distributions. The experimental results show that our system is able to overcome various difficulties and produce highly accurate tracking results.
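
A minimal sketch of the data-association step described above: given detections (bounding boxes, here assumed to come from any off-the-shelf CNN detector) and existing tracklets, compute color-histogram distances and solve the assignment with the Hungarian algorithm. The bin counts and gating threshold are illustrative assumptions:

```python
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

def color_hist(frame_bgr, box, bins=16):
    """Normalized HSV hue-saturation histogram of one detection crop. box = (x, y, w, h)."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, None, alpha=1.0, norm_type=cv2.NORM_L1)

def assign_detections(tracklet_hists, detection_hists, max_dist=0.5):
    """Hungarian assignment on Bhattacharyya distances; returns (tracklet_idx, detection_idx) pairs."""
    cost = np.array([[cv2.compareHist(t, d, cv2.HISTCMP_BHATTACHARYYA)
                      for d in detection_hists] for t in tracklet_hists])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]

# Usage with a synthetic frame and made-up boxes standing in for CNN detections.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
tracklets = [color_hist(frame, (10, 10, 40, 80)), color_hist(frame, (200, 50, 40, 80))]
detections = [color_hist(frame, (205, 52, 40, 80)), color_hist(frame, (12, 11, 40, 80))]
print("matches (tracklet, detection):", assign_detections(tracklets, detections))
```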

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems / v.20 no.11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude and Heading Reference System) sensor. Feature points in the images captured from the vision sensor are obtained using a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as a map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed with the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
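
A minimal sketch of two pieces named above: SIFT feature extraction (the CPU version in OpenCV; the paper uses a GPU implementation) and a Gaussian process fitted over 2D locations as a simple map, here regressing a scalar stand-in for the per-location color/occupancy statistic. The training data and kernel are illustrative assumptions:

```python
import cv2
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# 1) Feature extraction: the paper uses GPU SIFT; cv2.SIFT_create() is the CPU analogue.
image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in for a camera frame
keypoints, descriptors = cv2.SIFT_create().detectAndCompute(image, None)
print("SIFT keypoints:", len(keypoints))

# 2) Map building: a GP over 2D positions; the target value is a synthetic scalar
#    standing in for the per-location statistic used as a map.
positions = np.random.uniform(0, 10, size=(60, 2))               # (x, y) in meters, synthetic
values = np.sin(positions[:, 0]) + 0.1 * np.random.randn(60)     # synthetic map values
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(positions, values)

# Query the GP "map" at a new location; the predictive std indicates map uncertainty there.
mean, std = gp.predict(np.array([[5.0, 5.0]]), return_std=True)
print(f"map value at (5, 5): {mean[0]:.2f} +/- {std[0]:.2f}")
```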