• Title/Abstract/Keyword: Variation Object


템플릿 변형과 Level-Set이론을 이용한 비강성 객체 추적 알고리즘 (A Robust Algorithm for Tracking Non-rigid Objects Using Deformed Template and Level-Set Theory)

  • 김종렬;나현태;문영식
    • 전자공학회논문지CI, Vol. 40, No. 3, pp. 127-136, 2003
  • This paper proposes a model- and edge-based object tracking method using template deformation and Level-Set theory. The proposed method can track an object even in the presence of background changes, changes in the object's own shape, and occlusion between objects. First, for object tracking, a new energy function, the Potential Difference Energy Function (PDEF), is defined from the inter-region distance between the template and the target frame and from edge values. This function is used in the object position and boundary prediction step and in the object shape refinement step. In the position and boundary prediction step, the approximate shape and position of the object are predicted under the assumption that the object's change follows an affine transformation. In the shape refinement step, the exact shape of the object is re-determined using a potential energy map and a modified Level-Set motion function. Experimental results confirm that the proposed method tracks objects more accurately than existing methods on videos that include various situations such as large background changes, severe changes in object shape, and occlusion between objects.
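
Below is a minimal Python sketch of the kind of potential-energy map that the PDEF above combines: an edge term plus an inter-region distance term. The weighting, the Sobel-based edge strength, and the mean-intensity region distance are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: a simple potential-energy map combining an edge term and an
# inter-region distance term, in the spirit of the PDEF described above.
import numpy as np
import cv2

def potential_energy_map(frame_gray, template_gray, mask, alpha=0.5):
    """Combine edge strength and region difference into a single energy map."""
    # Edge term: strong edges should attract the evolving contour (low energy).
    edges = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0) ** 2 + \
            cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1) ** 2
    edge_term = 1.0 - edges / (edges.max() + 1e-8)

    # Region term: absolute intensity difference between the frame and the
    # template mean inside the current object mask (a crude inter-region distance).
    template_mean = template_gray[mask > 0].mean() if np.any(mask > 0) else template_gray.mean()
    region_term = np.abs(frame_gray.astype(np.float32) - template_mean)
    region_term /= (region_term.max() + 1e-8)

    return alpha * edge_term + (1.0 - alpha) * region_term
```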

The Compensation of Machine Vision Image Distortion

  • Chung, Yi-Chan;Hsu, Yau-Wen;Lin, Yu-Tang;Tsai, Chih-Hung
    • International Journal of Quality Innovation, Vol. 5, No. 1, pp. 68-84, 2004
  • The measured values of the same object should remain constant regardless of the object's position in the image; in other words, the measured values should not vary as the object's position in the image changes. However, lens image distortion, a heterogeneous light source, a varying angle between the measuring apparatus and the object, and differences in the surroundings where the test is set up all cause variation in the measurement of an object as its position in the image changes. This research attempts to compensate for machine vision image distortion caused by the object's position in the image by developing a compensation table. The compensation enables users to obtain corrected measurements of the object and serves the objective of improving measurement precision.
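
A minimal sketch of a position-indexed compensation table is given below. The grid resolution, the multiplicative correction model, and the nearest-cell lookup are assumptions for illustration rather than the paper's actual procedure.

```python
# Hedged sketch: a position-indexed compensation table built from measurements
# of a reference object at known positions, then applied to new measurements.
import numpy as np

class CompensationTable:
    def __init__(self, image_shape, grid=(8, 8)):
        self.image_shape = image_shape           # (height, width) in pixels
        self.factors = np.ones(grid)             # multiplicative correction per cell

    def _cell(self, y, x):
        h, w = self.image_shape
        gy, gx = self.factors.shape
        return min(int(y / h * gy), gy - 1), min(int(x / w * gx), gx - 1)

    def calibrate(self, positions, measured, true_value):
        """Fill the table from measurements of a reference object at known positions."""
        for (y, x), m in zip(positions, measured):
            i, j = self._cell(y, x)
            self.factors[i, j] = true_value / m  # factor mapping measured -> true

    def correct(self, y, x, measurement):
        """Apply the correction factor of the cell containing (y, x)."""
        i, j = self._cell(y, x)
        return measurement * self.factors[i, j]
```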

Sonar-based yaw estimation of target object using shape prediction on viewing angle variation with neural network

  • Sung, Minsung;Yu, Son-Cheol
    • Ocean Systems Engineering, Vol. 10, No. 4, pp. 435-449, 2020
  • This paper proposes a method to estimate the yaw angle of an underwater target object using a sonar image. A simulator modeling the imaging mechanism of a sonar sensor and a generative adversarial network for style transfer generate realistic template images of the target object by predicting its shape according to the viewing angle. The target object's yaw angle can then be estimated by comparing the template images with the shape captured in real sonar images. We verified the proposed method through water-tank experiments, and also applied it to an AUV in field experiments. The proposed method, which provides bearing information between underwater objects and the sonar sensor, can be applied to algorithms such as underwater localization or multi-view-based underwater object recognition.
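
A minimal sketch of the comparison step is shown below: the observed sonar image is matched against yaw-indexed template images and the best-scoring yaw is returned. Normalized cross-correlation is an assumed similarity measure; template generation with the simulator and GAN is outside the snippet.

```python
# Hedged sketch: yaw estimation by comparing a real sonar image against
# view-dependent template images.
import numpy as np

def estimate_yaw(sonar_image, templates):
    """templates: dict mapping yaw angle (degrees) -> template image (same shape)."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    # Pick the yaw whose predicted shape best matches the observed shape.
    return max(templates, key=lambda yaw: ncc(sonar_image, templates[yaw]))
```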

객체의 움직임을 고려한 탐색영역 설정에 따른 가중치를 공유하는 CNN구조 기반의 객체 추적 (Object Tracking based on Weight Sharing CNN Structure according to Search Area Setting Method Considering Object Movement)

  • 김정욱;노용만
    • 한국멀티미디어학회논문지, Vol. 20, No. 7, pp. 986-993, 2017
  • Object tracking is a technique for tracking moving objects over time in a video. Using object tracking, many studies have been conducted on tasks such as detecting dangerous situations and recognizing the movement of nearby objects in a smart car. However, tracking remains challenging under occlusion, deformation, background clutter, illumination variation, and similar conditions. In this paper, we propose a novel deep visual object tracking method that operates robustly under many of these challenges. For robust visual object tracking, we propose a Convolutional Neural Network (CNN) that shares the weights of its convolutional layers. The CNN takes three inputs: the object image from the first frame, the object image from the previous frame, and the current search frame containing the object's movement. We also propose a method that considers the motion of the object when determining the current search area in which to look for the object's location. Extensive experimental results on an authorized benchmark database show that the proposed method outperforms conventional methods.
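
A minimal PyTorch-style sketch of a three-branch tracker with a shared convolutional feature extractor is shown below; the layer sizes and the bounding-box regression head are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: three inputs pass through one feature extractor (shared weights),
# and the fused features predict the object location in the search frame.
import torch
import torch.nn as nn

class WeightSharingTracker(nn.Module):
    def __init__(self):
        super().__init__()
        # One feature extractor, reused for all three inputs -> shared weights.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head predicting a bounding box (x, y, w, h) in the search frame.
        self.head = nn.Sequential(nn.Linear(64 * 3, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, first_obj, prev_obj, search_frame):
        f = torch.cat([self.features(first_obj),
                       self.features(prev_obj),
                       self.features(search_frame)], dim=1)
        return self.head(f)
```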

Object Tracking with Radical Change of Color Distribution Using EM algorithm

  • 황인택;최광남
    • 한국정보과학회 2006년도 한국컴퓨터종합학술대회 논문집, Vol. 33, No. 1 (B), pp. 388-390, 2006
  • This paper presents a method for tracking objects that undergo radical changes in color. Conventional Mean Shift does not provide appropriate results when the major color distribution disappears. Our tracking approach uses Mean Shift as the basic tracking method, but we propose a tracking algorithm that yields good results for objects with radical variation. The key idea is to iteratively update the previous color information of an object whose color changes, using the EM algorithm. Experimental results show that the proposed algorithm is an effective approach for tracking real objects, including objects with radical changes in color.
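
A minimal sketch of the color-model refresh is shown below: the previous hue histogram is blended with the current region's histogram so the tracker can follow radical color changes. The fixed blend weight stands in for the paper's EM-based update and is an assumption.

```python
# Hedged sketch: iteratively refreshing the tracked object's color histogram so
# Mean Shift can follow radical color changes.
import numpy as np
import cv2

def update_color_model(hist_prev, roi_bgr, blend=0.3, bins=16):
    """Blend the previous hue histogram with the histogram of the current ROI."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hist_new = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    hist_new /= (hist_new.sum() + 1e-8)
    hist = (1.0 - blend) * hist_prev + blend * hist_new
    return hist / (hist.sum() + 1e-8)
```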


관찰거리와 시각에 따른 색채의 면적효과에 관한 연구 (A Study on the Area Effect of Color by the Observing Distance and the Sight Angle)

  • 이진숙;임오연;이덕형
    • 한국실내디자인학회논문집, Vol. 15, No. 4, pp. 123-128, 2006
  • The purpose of this research is to estimate the variation in color image reaction as the area changes, in order to design a method that reduces the error of a color sample when it is applied in a real situation. The results acquired in this research are summarized as follows. (1) With a fixed observing distance of 1 m, the value and chroma of each color object became higher as the sight angle increased through 2°, 10°, and 30°, although the variation ratio differed. (2) With a fixed sight angle of 10°, the value and chroma of each color object became higher as the observing distance changed from 1 m to 3.3 m, although the variation ratio differed. (3) With the same area, the values and chromas of each color object under the conditions of 1 m-30° and 3.3 m-10° were almost the same. (4) As the area became larger, subjects tended to perceive colors as brighter and clearer, with an increase in tone. For all colors, the variation of the color reaction in chroma was higher than that in value. In the future, the limits of applying these colors in architecture can be identified by characterizing the tendency of color change according to area change.

단시안 명암강도를 이용한 물체의 3차원 거리측정 (Obtaining 3-D Depth from a Monochrome Shaded Image)

  • Byung Il Kim
    • 전자공학회논문지B, Vol. 29B, No. 7, pp. 52-61, 1992
  • This paper studies a numerical method for deriving the absolute distance z and the shape of a 3-D object from intensity differences measured in an image of the object projected onto a single eye, showing that the absolute 3-D distance between the camera and the object can be obtained even monocularly. Unlike previously published theories, this paper establishes, under the assumption that the object is uniformly Lambertian, a new relation between the intensity projected by a point light source and the absolute distance and shape of the 3-D object. The resulting non-linear relation was organized into a mathematical algorithm using the calculus of variations under a smoothness constraint and programmed, and its validity was examined by applying it to real data obtained with a simple experimental procedure.
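
For reference, a standard point-source Lambertian irradiance relation is shown below; it only illustrates the kind of intensity-depth coupling that such a shape-from-shading formulation exploits and is not the paper's exact equation.

```latex
% Hedged sketch of a standard point-source Lambertian relation (not the paper's):
% E(x,y): measured intensity, \rho: albedo, \mathbf{n}: unit surface normal,
% \mathbf{s}: unit direction toward the point source, r: source-surface distance.
E(x,y) \;=\; \rho \, \frac{\mathbf{n}(x,y) \cdot \mathbf{s}(x,y)}{r(x,y)^{2}}
```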


Development of an Edge-Based Algorithm for Moving-Object Detection Using Background Modeling

  • Shin, Won-Yong;Kabir, M. Humayun;Hoque, M. Robiul;Yang, Sung-Hyun
    • Journal of information and communication convergence engineering, Vol. 12, No. 3, pp. 193-197, 2014
  • Edges are a robust feature for object detection. In this paper, we present an edge-based background modeling method for the detection of moving objects. The edges in the image frames were mapped using the robust Canny edge detector. Two edge maps were created and combined to calculate the ultimate moving-edge map. By selecting all edge pixels of the current frame above the defined threshold of the ultimate moving edges, a temporary background-edge map was created. If the frequencies of the temporary background edge pixels remained above the threshold for several frames, those edge pixels were treated as background edge pixels. We conducted a performance comparison with previous works. Existing edge-based moving-object detection algorithms have difficulty with changes in background motion, object shape, illumination variation, and noise. The result of the performance evaluation shows that the proposed algorithm can detect moving objects efficiently in real-world scenarios.
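
A minimal OpenCV sketch of the idea is shown below: Canny edges seen persistently across frames are promoted to background edges, and current edges not explained by the background model are flagged as moving edges. The Canny thresholds and the promotion rule are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: edge-based background modeling for moving-object detection.
import numpy as np
import cv2

class EdgeBackgroundModel:
    def __init__(self, shape, promote_after=30):
        self.freq = np.zeros(shape, dtype=np.int32)        # how often a pixel was an edge
        self.background_edges = np.zeros(shape, dtype=bool)
        self.promote_after = promote_after

    def update(self, frame_gray):
        edges = cv2.Canny(frame_gray, 50, 150) > 0
        # Edges seen persistently across frames are promoted to background edges.
        self.freq[edges] += 1
        self.background_edges |= self.freq >= self.promote_after
        # Moving edges are current edges not explained by the background model.
        return edges & ~self.background_edges
```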

물체-행동 컨텍스트를 이용하는 확률 그래프 기반 물체 범주 인식 (Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction)

  • 윤성백;배세호;박한재;이준호
    • 한국통신학회논문지, Vol. 40, No. 11, pp. 2284-2290, 2015
  • Human action is a highly effective source of context information for improving category recognition performance for objects with diverse appearance variations. In this work, human actions are used as context information for object category recognition through a simple probabilistic graph model based on a Bayesian approach. Experiments on cups, phones, scissors, and spray bottles of diverse appearance show that recognizing the human action associated with an object's use improves object recognition performance by 8% to 28%.
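
A minimal sketch of the Bayesian fusion is shown below: an appearance likelihood and an action-context likelihood are combined under a naive conditional-independence assumption. The probability tables and the independence assumption are illustrative, not the paper's exact graph model.

```python
# Hedged sketch: naive-Bayes-style fusion of appearance and action context.
def recognize_with_action(appearance_probs, action_given_object, observed_action, prior=None):
    """
    appearance_probs: dict object -> P(appearance | object)
    action_given_object: dict object -> dict action -> P(action | object)
    Returns a normalized posterior over object categories.
    """
    objects = list(appearance_probs)
    prior = prior or {o: 1.0 / len(objects) for o in objects}
    # Posterior is proportional to P(appearance | object) * P(action | object) * P(object).
    scores = {o: appearance_probs[o]
                 * action_given_object[o].get(observed_action, 1e-6)
                 * prior[o]
              for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}
```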

물체 특징과 실시간 학습 기반의 파티클 필터를 이용한 이동 로봇에서의 강인한 물체 추적 (Robust Object Tracking in Mobile Robots using Object Features and On-line Learning based Particle Filter)

  • 이형호;최학남;김형래;마승완;이재홍;김학일
    • 제어로봇시스템학회논문지, Vol. 18, No. 6, pp. 562-570, 2012
  • This paper proposes a robust object tracking algorithm for mobile robots that uses object features and an on-line learning based particle filter. Mobile robots with a side-view camera face problems such as camera jitter, illumination change, object shape variation, and occlusion in a variety of environments. To overcome these problems, a color histogram and a HOG descriptor are fused for efficient representation of the object. A particle filter with the on-line learning method IPCA is used for robust object tracking in non-linear environments. The validity of the proposed algorithm is shown through experiments with databases acquired in a variety of environments. The experiments show that the accuracy of the particle filter using combined color and shape information with on-line learning (92.4%) is higher than that of the particle filter using only color information (71.1%) or the particle filter using shape and color information without on-line learning (90.3%).
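
A minimal sketch of one particle-weight update fusing a color-histogram similarity (Bhattacharyya coefficient) with a HOG similarity (cosine) is shown below; the fusion weight and feature extraction are assumptions, and the on-line IPCA update is omitted.

```python
# Hedged sketch: particle-filter weight update fusing color and HOG similarities.
import numpy as np

def update_weights(particles, frame_features, target_color_hist, target_hog, alpha=0.5):
    """
    particles: list of candidate regions; frame_features(p) -> (color_hist, hog)
    Returns normalized particle weights.
    """
    def bhattacharyya(p, q):
        return float(np.sum(np.sqrt(p * q)))   # similarity in [0, 1] for normalized hists

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    weights = []
    for p in particles:
        color_hist, hog = frame_features(p)
        sim = alpha * bhattacharyya(color_hist, target_color_hist) \
              + (1 - alpha) * cosine(hog, target_hog)
        weights.append(sim)
    weights = np.asarray(weights)
    return weights / (weights.sum() + 1e-8)
```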