• Title/Abstract/Keywords: Color-based Vision System


실시간 다중이동물체 추적에 의한 이동로봇의 위치개선 (Position Improvement of a Mobile Robot by Real Time Tracking of Multiple Moving Objects)

  • 진태석;이민중;탁한호;이인용;이준탁
    • Journal of Korean Institute of Intelligent Systems (한국지능시스템학회논문지) / Vol. 18, No. 2 / pp.187-192 / 2008
  • This paper presents a method for a mobile robot to recognize its own position in indoor and outdoor spaces using image information of moving objects, including humans. The proposed method combines the robot's dead-reckoning (DR) sensor data with a position estimation method based on image information obtained from a camera. An equation is derived that relates the image-frame coordinates of a moving object to the estimated robot position, using the object's previous position and the model of the observation camera. A control method for estimating the position and heading of the moving human and the robot is also presented, and a Kalman filter is applied to estimate the robot's position. The proposed method is verified through simulations and experiments.
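The predict/update cycle this abstract describes — dead-reckoning as the motion input, the camera-derived position as the measurement — can be sketched with a linear Kalman filter. This is a minimal sketch under assumed linear motion and observation models; the paper's actual models are not given here.

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One Kalman predict/update cycle.
    x, P : previous state estimate and covariance
    u    : dead-reckoning (DR) displacement used as the control input
    z    : position measurement derived from the camera image
    """
    # Predict with the DR motion model
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the camera observation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a planar position state, identity motion/observation matrices, and a camera fix that disagrees slightly with the DR prediction, the filter returns an estimate between the two, weighted by the noise covariances Q and R.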

Multi-Object Tracking using the Color-Based Particle Filter in ISpace with Distributed Sensor Network

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 1 / pp.46-51 / 2005
  • Intelligent Space (ISpace) is a space in which many intelligent devices, such as computers and sensors, are distributed. Through the cooperation of these devices, ISpace can offer useful services; to do so, it is very important that the system knows the location of objects in the environment. To achieve this goal, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article presents the integration of color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the proposed performance, and the method is also applied to the intelligent environment, where its performance is verified by experiments.
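The color-based particle filter step this abstract describes can be sketched as: diffuse particles, weight each by the Bhattacharyya similarity between its local color histogram and the target histogram, then resample. The patch size, bin count, and diffusion noise below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, bins=8):
    """Normalized per-channel color histogram of an HxWx3 image patch."""
    h = np.concatenate([np.histogram(patch[..., c], bins=bins,
                                     range=(0, 256))[0] for c in range(3)])
    h = h.astype(float)
    return h / h.sum()

def particle_filter_step(particles, image, target_hist, patch=8, noise=2.0):
    """Diffuse (row, col) particles, weight by color similarity, resample."""
    particles = particles + rng.normal(0, noise, particles.shape)
    h_img, w_img = image.shape[:2]
    particles[:, 0] = np.clip(particles[:, 0], 0, h_img - patch)
    particles[:, 1] = np.clip(particles[:, 1], 0, w_img - patch)
    weights = np.empty(len(particles))
    for i, (r, c) in enumerate(particles.astype(int)):
        hist = color_histogram(image[r:r + patch, c:c + patch])
        weights[i] = np.sum(np.sqrt(hist * target_hist))  # Bhattacharyya coeff.
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], weights
```

Iterating this step concentrates the particle set on regions whose color distribution matches the tracked object; in the paper the hypotheses live on a top-view reconstruction rather than the image plane, but the weighting idea is the same.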

Machine Vision Technique for Rapid Measurement of Soybean Seed Vigor

  • Lee, Hoonsoo;Huy, Tran Quoc;Park, Eunsoo;Bae, Hyung-Jin;Baek, Insuck;Kim, Moon S.;Mo, Changyeun;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering / Vol. 42, No. 3 / pp.227-233 / 2017
  • Purpose: Morphological properties of soybean roots are important indicators of the vigor of the seed, which determines the survival rate of the seedlings grown. The current vigor test for soybean seeds is manual measurement with the human eye. This study describes an application of a machine vision technique for rapid measurement of soybean seed vigor to replace the time-consuming and labor-intensive conventional method. Methods: A CCD camera was used to obtain color images of seeds during germination. Image processing techniques were used to obtain root segmentation. The various morphological parameters, such as primary root length, total root length, total surface area, average diameter, and branching points of roots were calculated from a root skeleton image using a customized pixel-based image processing algorithm. Results: The measurement accuracy of the machine vision system ranged from 92.6% to 98.8%, with accuracies of 96.2% for primary root length and 96.4% for total root length, compared to manual measurement. The correlation coefficient for each measurement was 0.999 with a standard error of prediction of 1.16 mm for primary root length and 0.97 mm for total root length. Conclusions: The developed machine vision system showed good performance for the morphological measurement of soybean roots. This image analysis algorithm, combined with a simple color camera, can be used as an alternative to the conventional seed vigor test method.
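A pixel-based measurement over a root skeleton image, as described above, can be sketched as follows: total length is accumulated from pixel-to-pixel links (axial links count 1, diagonal links √2 in pixel units), and branching points are skeleton pixels with three or more neighbors. This is a generic skeleton-metric sketch, not the paper's customized algorithm.

```python
import numpy as np

def skeleton_metrics(skel):
    """Estimate total length and branching-point count of a binary skeleton.

    Length: each axial neighbor link contributes 1, each diagonal link
    sqrt(2). Branching point: a skeleton pixel with >= 3 eight-neighbors.
    """
    skel = np.asarray(skel, dtype=bool)
    pts = set(zip(*np.nonzero(skel)))
    length = 0.0
    branches = 0
    for (y, x) in pts:
        # Count each link once by only looking "forward" (down/right).
        for dy, dx, w in [(0, 1, 1.0), (1, 0, 1.0),
                          (1, 1, 2 ** 0.5), (1, -1, 2 ** 0.5)]:
            if (y + dy, x + dx) in pts:
                length += w
        neighbors = sum((y + dy, x + dx) in pts
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        branches += neighbors >= 3
    return length, branches
```

Primary root length would additionally require tracing a single path through the skeleton (e.g. from the seed attachment point to the root tip); the sketch above covers only the aggregate measures.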

신경회로망 기반 감성 인식 비젼 시스템 (Vision System for NN-based Emotion Recognition)

  • 이상윤;김성남;주영훈;박창현;심귀보
    • Proceedings of the 2001 KIEE (대한전기학회) Summer Conference, Vol. D / pp.2036-2038 / 2001
  • In this paper, we propose a neural-network-based emotion recognition method for intelligently recognizing human emotion using a vision system. In the proposed method, emotion is divided into four classes (surprise, anger, happiness, sadness). We use R, G, B (red, green, blue) color image data together with gray image data to achieve a high success rate of feature-point extraction. For this, we propose an algorithm to extract four feature points (eyebrow, eye, nose, mouth) from the face image acquired by a color CCD camera and derive feature vectors from them. We then apply the back-propagation algorithm to the secondary feature vector (positions of and distances among the feature points). Finally, we show the practical applicability of the proposed method.


천정부착 랜드마크 위치와 에지 화소의 이동벡터 정보에 의한 이동로봇 위치 인식 (Mobile Robot Localization using Ceiling Landmark Positions and Edge Pixel Movement Vectors)

  • 진홍신;아디카리 써얌프;김성우;김형석
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol. 16, No. 4 / pp.368-373 / 2010
  • A new indoor mobile robot localization method is presented. The robot recognizes well-designed single-color landmarks on the ceiling with its vision system and uses them as references to compute its precise position. The proposed likelihood-prediction-based method enables the robot to estimate its position based only on the orientation of the landmark. The use of single-color landmarks reduces the complexity of the landmark structure and makes the landmarks easily detectable. Edge-based optical flow is further used to compensate for landmark recognition errors. The technique is applicable to navigation in indoor spaces of unlimited size. A prediction scheme and a localization algorithm are proposed, and edge-based optical flow and data fusion are presented. Experimental results show that the proposed method provides accurate estimation of the robot position, with a localization error within 5 cm and a directional error of less than 4 degrees.

DEVELOPMENT OF AN INTEGRATED GRADER FOR APPLES

  • Park, K. H.;Lee, K. J.;Park, D. S.;Y. S. Han
    • Proceedings of the 2000 Korean Society for Agricultural Machinery (한국농업기계학회) Third International Conference on Agricultural Machinery Engineering, Vol. III / pp.513-520 / 2000
  • An integrated grader that measures the soluble solid content, color, and weight of fresh apples was developed by NAMRI. The prototype grader consists of a near-infrared (NIR) spectroscopy system and a machine vision system. An image processing system and a color evaluation algorithm were developed to speed up the color evaluation of apples. To avoid glare and specular reflection, a half-spherical illumination chamber was designed and fabricated to capture color images of the spherical apples more precisely, and a color revision model based on a neural network was developed. The NIR spectroscopy system using the NIR reflectance method developed by Lee et al. (1998) of NAMRI was used to evaluate soluble solid content. To assess the performance of the grader, tests were conducted with 3 weight classes and 4 classes combining color and soluble solid content, giving 12 classes in combined sorting. The average accuracy in weight, color, and soluble solid content was about 90% or more at a capacity of 3 fruits per second.


형태분석과 피부색모델을 다층 퍼셉트론으로 사용한 운전자 얼굴추출 기법 (Driver Face Localization Using Morphological Analysis and a Multi-layer Perceptron as a Skin-color Model)

  • 이종수
    • Journal of Korea Institute of Information, Electronics, and Communication Technology (한국정보전자통신기술학회논문지) / Vol. 6, No. 4 / pp.249-254 / 2013
  • In the area of computer vision, face recognition is being intensively researched. It is generally known that a face must be localized before it can be recognized, and skin-color information is an important feature for segmenting skin-color regions. To extract skin-color regions, a skin-color model based on a multi-layer perceptron is proposed. The extracted regions are then analyzed to emphasize ellipsoidal regions. The results of this study show good accuracy for our vehicle driver face detection system.
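A multi-layer perceptron used as a skin-color model, as in this abstract, can be sketched as a tiny one-hidden-layer network trained with back-propagation to map normalized RGB triples to a skin probability. The architecture, hyperparameters, and toy training samples below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_skin_mlp(X, y, hidden=4, lr=0.5, epochs=3000):
    """Train a one-hidden-layer perceptron (sigmoid units, cross-entropy)
    on normalized RGB rows X with skin/non-skin labels y in {0, 1}.
    Returns a predictor: (n, 3) RGB array -> (n,) skin probabilities."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)             # hidden activations
        out = sig(h @ W2 + b2)[:, 0]     # skin probability
        err = (out - y)[:, None]         # output-layer gradient
        dh = err @ W2.T * h * (1 - h)    # back-propagated hidden gradient
        W2 -= lr * (h.T @ err) / n; b2 -= lr * err.mean(0)
        W1 -= lr * (X.T @ dh) / n;  b1 -= lr * dh.mean(0)
    return lambda rgb: sig(sig(rgb @ W1 + b1) @ W2 + b2)[:, 0]
```

Applied per pixel (reshape an HxWx3 image to (H*W, 3), predict, reshape back), this yields the skin-probability map whose thresholded regions are then analyzed for ellipsoidal shape.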

컬러 영상 모델에 기반한 에지 추출기법 (Edge Extraction Method Based on Color Image Model)

  • 김태은
    • Journal of Digital Contents Society (디지털콘텐츠학회 논문지) / Vol. 4, No. 1 / pp.11-21 / 2003
  • Although color images contain more information, computer vision research up to the late 1990s was conducted mainly on gray-level images, and active research on color images began only in the 2000s. The results so far show that gray-level images provide enough information for depth estimation, but the use of color information is essential for better results. This paper proposes an edge extraction method based on the Opponent Color Model (OCM). The OCM is a model developed in the course of studying human color perception; it models the actual process by which an image sensed by the cells of the retina is transmitted to the brain. The human brain is generally understood to perceive color by receiving the red, green, and blue information from the eyes separately. The OCM, however, shows that this color information is transformed as it passes through intermediate cells along the transmission path, a process called opponent color processing. Although several models for handling color images already exist, this paper shows that the OCM-based edge extraction method is superior.

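One common formulation of the opponent color transform maps RGB to red-green, yellow-blue, and intensity channels; edges can then be taken as the combined gradient magnitude over those channels. This is a sketch of that generic formulation, which may differ from the paper's exact transform.

```python
import numpy as np

def opponent_channels(rgb):
    """Map an HxWx3 RGB image to opponent channels:
    O1 = red-green, O2 = yellow-blue, O3 = intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)

def opponent_edges(rgb):
    """Edge strength: finite-difference gradient magnitude summed over
    the opponent channels."""
    opp = opponent_channels(rgb.astype(float))
    gy = np.abs(np.diff(opp, axis=0, prepend=opp[:1]))
    gx = np.abs(np.diff(opp, axis=1, prepend=opp[:, :1]))
    return np.sqrt((gy ** 2 + gx ** 2).sum(axis=-1))
```

The benefit over gray-level edge detection shows at iso-luminant boundaries: a red/green boundary of equal intensity is invisible in the O3 (intensity) channel but produces a strong response in O1.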

다중 파장 근적외선 LED조명에 의한 컬러영상 획득 (Color Image Acquired by the Multispectral Near-IR LED Lights)

  • 김아리;김홍석;박영식;박승옥
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers (조명전기설비학회논문지) / Vol. 30, No. 2 / pp.1-10 / 2016
  • A system that captures multispectral near-IR and visible gray images of objects is constructed, and an algorithm is derived to acquire a natural color image of the objects from those gray images. A color image of 24 color patches is obtained by recovering their CIE (International Commission on Illumination) LAB color coordinates L*, a*, b* from the gray images using an algorithm based on polynomial regression. The system is composed of a custom-designed LED illuminator emitting multispectral near-IR light, fluorescent lamps, and a monochrome digital camera. The color reproducibility of the algorithm is estimated as the CIELAB color difference ΔE*ab. If the yellow and magenta color patches, with ΔE*ab around 10, are disregarded, the average ΔE*ab is 2.9, which is within the acceptability tolerance for quality evaluation of digital color images.
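The polynomial-regression recovery of L*, a*, b* from the band (gray) images can be sketched as an ordinary least-squares fit over polynomial features of the per-pixel band intensities. The second-order feature expansion and the band count here are assumptions; the paper's exact model order is not specified in the abstract.

```python
import numpy as np

def poly_features(bands):
    """Second-order polynomial expansion of an (n, k) array of per-pixel
    band intensities: [1, x_i, x_i * x_j]."""
    bands = np.atleast_2d(bands)
    n, k = bands.shape
    cols = [np.ones(n)] + [bands[:, i] for i in range(k)]
    cols += [bands[:, i] * bands[:, j] for i in range(k) for j in range(i, k)]
    return np.stack(cols, axis=1)

def fit_color_model(bands, lab):
    """Least-squares fit from band intensities to CIELAB (L*, a*, b*) rows.
    Returns a predictor: (n, k) bands -> (n, 3) LAB estimates."""
    coef, *_ = np.linalg.lstsq(poly_features(bands), lab, rcond=None)
    return lambda x: poly_features(x) @ coef
```

Color reproducibility can then be scored per patch as ΔE*ab, i.e. the Euclidean distance between recovered and reference (L*, a*, b*) triples.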

구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식 (Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment)

  • 김동훈;이동화;명현;최현택
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
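Weighted correlation coefficient-based template matching, as named in this abstract, can be sketched as a normalized cross-correlation in which a per-pixel weight mask emphasizes reliable template pixels. The weighting scheme and exhaustive search below are an assumed generic formulation, not the paper's exact method.

```python
import numpy as np

def weighted_ncc(patch, template, weights):
    """Weighted normalized correlation coefficient in [-1, 1] between an
    image patch and a template; weights emphasize reliable pixels."""
    w = weights / weights.sum()
    dp = patch - (w * patch).sum()          # weighted-mean-centered patch
    dt = template - (w * template).sum()    # weighted-mean-centered template
    den = np.sqrt((w * dp ** 2).sum() * (w * dt ** 2).sum())
    return (w * dp * dt).sum() / den if den > 0 else 0.0

def match_template(image, template, weights):
    """Slide the template over a grayscale image; return the best-scoring
    (row, col) offset and its score."""
    H, W = image.shape
    h, w = template.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = weighted_ncc(image[r:r + h, c:c + w], template, weights)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```

With uniform weights this reduces to the conventional correlation coefficient; a non-uniform mask (e.g. down-weighting pixels near the landmark boundary, where underwater blur is worst) is where the "weighted" variant differs.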