• Title/Summary/Keyword: object motion

Object Tracking based on Weight Sharing CNN Structure according to Search Area Setting Method Considering Object Movement (객체의 움직임을 고려한 탐색영역 설정에 따른 가중치를 공유하는 CNN구조 기반의 객체 추적)

  • Kim, Jung Uk;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.7
    • /
    • pp.986-993
    • /
    • 2017
  • Object tracking is a technique for following moving objects over time in a video. Object tracking is used in many applications, such as detecting dangerous situations and recognizing the movement of nearby objects in a smart car. However, it remains challenging under occlusion, deformation, background clutter, illumination variation, and similar conditions. In this paper, we propose a novel deep visual object tracking method that is robust to these challenges. For robust visual object tracking, we propose a Convolutional Neural Network (CNN) that shares the weights of its convolutional layers. The CNN takes three inputs: the object image from the first frame, the object image from the previous frame, and the current search region containing the object's movement. We also propose a method that takes the object's motion into account when determining the current search area in which to locate the object. Extensive experimental results on a standard benchmark database show that the proposed method outperforms conventional methods.
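
A minimal sketch of the weight-sharing idea described above, assuming PyTorch. The layer sizes, the fusion head, and the bounding-box output are illustrative assumptions, not the architecture reported in the paper; the point is that all three inputs are passed through the same convolutional backbone, so its weights are shared.

```python
# Minimal sketch (not the paper's exact architecture): three inputs pass through
# the same convolutional backbone (shared weights) and the fused features are
# used to regress the object location within the motion-aware search region.
import torch
import torch.nn as nn

class SharedConvTracker(nn.Module):
    def __init__(self):
        super().__init__()
        # One backbone instance reused for all three inputs -> shared conv weights.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Fusion head: concatenated features -> bounding box (x, y, w, h).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 4),
        )

    def forward(self, first_obj, prev_obj, search):
        # The search region would in practice be cropped around the previous
        # position shifted by the estimated object motion.
        feats = [self.backbone(x) for x in (first_obj, prev_obj, search)]
        return self.head(torch.cat(feats, dim=1))

# Toy usage with 64x64 patches.
model = SharedConvTracker()
box = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(box.shape)  # torch.Size([1, 4])
```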

Painterly Stroke Generation using Object Motion Analysis (객체의 움직임 해석을 이용한 회화적 스트로크 생성 방법)

  • Lee, Ho-Chang;Seo, Sang-Hyun;Ryoo, Seung-Tack;Yoon, Kyung-Hyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.4
    • /
    • pp.239-245
    • /
    • 2010
  • Previous painterly rendering techniques normally use image gradients for stroke generation. Although image gradients are useful for expressing object shapes, they make it difficult to express the flow or movement of objects. In real painting, brush strokes that follow the actual movement of objects help viewers recognize the objects' motion and convey their liveliness. In this paper, we propose a novel painterly stroke generation algorithm that expresses dynamic objects based on their motion information. We first extract motion information (magnitude and direction) of a scene from a set of image sequences captured from the same view. The motion directions then determine stroke orientations in regions with significant motion; where little motion is observed, image gradients determine stroke orientations. Our algorithm is useful for representing moving objects realistically and dynamically.
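
A small sketch of the orientation rule described above, assuming OpenCV's Farneback optical flow: where the flow magnitude exceeds a threshold, the stroke angle follows the motion direction; elsewhere it follows the image gradient rotated by 90° so strokes run along edges. The threshold and flow parameters are illustrative assumptions.

```python
# Sketch: choose per-pixel stroke orientation from motion where motion is
# significant, and from the image gradient elsewhere (assumes OpenCV).
import cv2
import numpy as np

def stroke_orientations(prev_gray, curr_gray, motion_threshold=1.0):
    # Dense optical flow between two frames taken from the same viewpoint.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion_mag = np.linalg.norm(flow, axis=2)
    motion_dir = np.arctan2(flow[..., 1], flow[..., 0])

    # Strokes usually run along edges, i.e. perpendicular to the gradient,
    # hence the +pi/2 rotation of the gradient direction.
    gx = cv2.Sobel(curr_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr_gray, cv2.CV_32F, 0, 1, ksize=3)
    grad_dir = np.arctan2(gy, gx) + np.pi / 2

    return np.where(motion_mag > motion_threshold, motion_dir, grad_dir)

prev_gray = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
curr_gray = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
print(stroke_orientations(prev_gray, curr_gray).shape)
```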

A method of describing and retrieving a sequence of moving object using Shape Variation Map (모양 변화 축적도를 이용한 움직이는 객체의 표현 및 검색 방법)

  • Choi, Min-Seok;Kim, Whoi-Yul
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.1-6
    • /
    • 2004
  • Motion information in a video clip often plays an important role in characterizing the content of the clip. A number of methods have been developed to analyze and retrieve video content using motion information. However, most of these methods focus on the direction or trajectory of motion rather than on the movement of the object itself. In this paper, we introduce a shape variation descriptor that describes the shape variation caused by object movement over time, and propose a method to describe and retrieve this shape variation using a shape variation map. Experimental results show that the proposed method outperforms the previous method by 11% and is very effective for describing shape variation, which makes it applicable to semantic retrieval applications.
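
One plausible reading of a shape variation map, sketched below as an assumption rather than the paper's exact descriptor: per-pixel changes between consecutive binary object silhouettes are accumulated into a single map (the masks are assumed to be already aligned), and two maps are compared with an L1 distance.

```python
# Sketch: accumulate per-pixel shape changes of an object over time into a
# "shape variation map", then compare two such maps with an L1 distance.
import numpy as np

def shape_variation_map(masks):
    """masks: list of binary (H, W) arrays, the object silhouette per frame."""
    masks = [m.astype(np.float32) for m in masks]
    acc = np.zeros_like(masks[0])
    for prev, curr in zip(masks[:-1], masks[1:]):
        acc += np.abs(curr - prev)          # pixels whose occupancy changed
    return acc / (len(masks) - 1)           # normalize by number of transitions

def l1_distance(map_a, map_b):
    return float(np.abs(map_a - map_b).sum())

# Toy usage: a square silhouette that grows over five frames.
seq = []
for t in range(5):
    m = np.zeros((64, 64), dtype=np.uint8)
    m[20:30 + 2 * t, 20:30 + 2 * t] = 1
    seq.append(m)
svm_map = shape_variation_map(seq)
print(svm_map.shape, l1_distance(svm_map, np.zeros_like(svm_map)))
```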

Moving Object Preserving Seamline Estimation (이동 객체를 보존하는 시접선 추정 기술)

  • Gwak, Moonsung;Lee, Chanhyuk;Lee, HeeKyung;Cheong, Won-Sik;Yang, Seungjoon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.992-1001
    • /
    • 2019
  • In many applications, images acquired from multiple cameras are stitched to form an image with a wide viewing angle. We propose a method for estimating a seam line using motion information so that multiple images can be stitched without distorting moving objects. Existing seam estimation techniques usually utilize an energy function based on image gradient information and parallax. In this paper, we propose a seam estimation technique that prevents distortion of moving objects by adding temporal motion information, calculated from the gradient information of each frame. We also propose a measure that quantifies the distortion level of stitched images and use it to verify the performance difference between the existing and proposed methods.
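
A hedged sketch of the general idea: a vertical seam found by dynamic programming over an energy that combines spatial gradient magnitude with a temporal frame-difference term, so the seam avoids moving regions. How the paper actually forms and weights the temporal term is not specified here; the weight is an assumption.

```python
# Sketch: vertical seam by dynamic programming over spatial-gradient energy
# plus a temporal (motion) term that steers the seam away from moving objects.
import numpy as np

def seam_energy(curr, prev, motion_weight=5.0):
    gy, gx = np.gradient(curr.astype(np.float32))
    spatial = np.abs(gx) + np.abs(gy)
    temporal = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return spatial + motion_weight * temporal

def vertical_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam  # column index of the seam in every row

prev = np.random.rand(100, 120)
curr = np.random.rand(100, 120)
print(vertical_seam(seam_energy(curr, prev))[:5])
```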

DETERMINING 3-D MOTION OF RIGID OBJECTS USING LINE CORRESPONDENCES

  • Kim, Won-Kyu
    • Journal of Astronomy and Space Sciences
    • /
    • v.11 no.2
    • /
    • pp.273-280
    • /
    • 1994
  • A linear method for determining the three-dimensional motion of a rigid object is presented. In this method, two three-dimensional line correspondences are used. By using the three-dimensional information of the features and observing that the rotation is unique regardless of the translation vector, the two components of the motion parameters (rotation and translation) are computed separately. The solution is also obtained without the scale factor that is required by other methods that use only two-dimensional projective constraints.
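
A hedged sketch of the separation described above: the rotation is estimated first from the 3-D line direction vectors alone (here with a Kabsch/SVD fit, which is one standard choice and not necessarily the paper's method), and the translation is then recovered from points on the corresponding lines by solving a small linear system.

```python
# Sketch: with two 3-D line correspondences, estimate rotation from the line
# directions, then solve for translation using points on the lines.
import numpy as np

def rotation_from_directions(dirs_before, dirs_after):
    """Each argument: (N, 3) unit direction vectors of the same lines."""
    h = dirs_after.T @ dirs_before
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt

def translation_from_points(r, pts_before, pts_after, dirs_after):
    """Solve pts_after[i] = r @ pts_before[i] + t + s_i * dirs_after[i]."""
    n = len(pts_before)
    a = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i in range(n):
        a[3 * i:3 * i + 3, :3] = np.eye(3)
        a[3 * i:3 * i + 3, 3 + i] = -dirs_after[i]
        b[3 * i:3 * i + 3] = pts_after[i] - r @ pts_before[i]
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    return sol[:3]

# Toy check with a known rotation about z and a known translation.
angle = np.deg2rad(30.0)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
dirs_after = dirs @ r_true.T
pts_after = pts @ r_true.T + t_true
r_est = rotation_from_directions(dirs, dirs_after)
print(np.round(translation_from_points(r_est, pts, pts_after, dirs_after), 3))
```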

A Study on the Implementation of the Motion Tracing ASIC Based on the Edge Detection (윤곽선 검출에 바탕을 둔 움직임 추적 ASIC 구현에 관한 연구)

  • 김희걸;조경순
    • Proceedings of the IEEK Conference
    • /
    • 2000.11b
    • /
    • pp.112-115
    • /
    • 2000
  • This paper describes the algorithm, architecture, and design of a circuit implementing motion tracing based on edge detection. The Sobel operation is used to compute the edges of moving objects. Motion tracing is performed by finding the center of the edges in each frame and accumulating those centers. The edges and centers of the moving object captured by the camera were displayed on a monitor and verified using a Xilinx FPGA.
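
A compact software analogue of the pipeline described above, assuming OpenCV: Sobel edge magnitude per frame, a threshold, and the centroid of the edge pixels as the traced object position. The threshold value is an illustrative assumption.

```python
# Sketch (software analogue of the described hardware): Sobel edge magnitude,
# threshold, and the centroid of the edge pixels as the traced position.
import cv2
import numpy as np

def edge_centroid(gray, edge_threshold=100.0):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    ys, xs = np.nonzero(magnitude > edge_threshold)
    if len(xs) == 0:
        return None                      # no edges strong enough in this frame
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(frame, (40, 30), (80, 70), 255, -1)   # a bright "object"
print(edge_centroid(frame))
```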

Motion Estimation Using Feature Matching and Strongly Coupled Recurrent Module Fusion (특징정합과 순환적 모듈융합에 의한 움직임 추정)

  • 심동규;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.12
    • /
    • pp.59-71
    • /
    • 1994
  • This paper proposes a motion estimation method for video sequences based on feature matching and anisotropic propagation. It measures translation and rotation parameters using a relaxation scheme at feature points and object-oriented anisotropic propagation in continuous and discontinuous regions. An iterative motion estimation refinement based on strongly coupled module fusion and adaptive smoothing is also proposed. Computer simulation results show the effectiveness of the proposed algorithm.
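
For orientation only, a sketch of feature-based translation and rotation estimation between two frames. The relaxation matching and anisotropic propagation of the paper are replaced here by off-the-shelf OpenCV pieces (corner detection, Lucas-Kanade tracking, a partial affine fit) purely as an assumed stand-in.

```python
# Sketch: match feature points between frames and fit rotation + translation.
import cv2
import numpy as np

def rigid_motion(prev_gray, curr_gray):
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return None
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good = status.ravel() == 1
    # Partial affine = rotation, uniform scale, and translation (RANSAC inside).
    matrix, _ = cv2.estimateAffinePartial2D(corners[good], moved[good])
    if matrix is None:
        return None
    angle = np.degrees(np.arctan2(matrix[1, 0], matrix[0, 0]))
    translation = matrix[:, 2]
    return angle, translation

prev_gray = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
curr_gray = np.roll(prev_gray, shift=(3, 5), axis=(0, 1))   # shift the scene
print(rigid_motion(prev_gray, curr_gray))
```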

Motion-Compensated Interpolation for Non-moving Caption Region (정지자막 영역의 움직임 보상 보간 기법)

  • Lee, Jeong-Hun;Han, Dong-Il
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.363-364
    • /
    • 2007
  • In this paper, we present a novel motion-compensated interpolation technique for non-moving caption regions that prevents the block artifacts caused by the failure of conventional block-based motion estimation on blocks containing both a non-moving caption and a moving object. Experimental results indicate good performance of the proposed scheme, with significantly reduced block artifacts on image sequences that include non-moving captions. The proposed method is also simple and well suited to hardware implementation.
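
A hedged sketch of the general idea, not the paper's detector: a block is treated as a static caption when it has high contrast and essentially no change between the two frames, in which case it is copied directly instead of being interpolated. The thresholds and the plain averaging used elsewhere are illustrative stand-ins for real motion-compensated interpolation.

```python
# Sketch: blocks that look like a static caption (high contrast, no change)
# are copied as-is; all other blocks get a simple averaged interpolation.
import numpy as np

def interpolate_frame(prev, curr, block=16, contrast_thr=40.0, still_thr=2.0):
    out = np.empty_like(prev, dtype=np.float32)
    h, w = prev.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            p = prev[y:y + block, x:x + block].astype(np.float32)
            c = curr[y:y + block, x:x + block].astype(np.float32)
            is_caption = p.std() > contrast_thr and np.abs(c - p).mean() < still_thr
            # Static caption: copy directly, avoiding block artifacts.
            # Elsewhere: average as a stand-in for MC interpolation.
            out[y:y + block, x:x + block] = p if is_caption else (p + c) / 2.0
    return out.astype(prev.dtype)

prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
curr = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(interpolate_frame(prev, curr).shape)
```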

Interaction Augmented Reality System using a Hand Motion (손동작을 이용한 상호작용 증강현실 시스템)

  • Choi, Kwang-Woon;Jung, Da-Un;Lee, Suk-Han;Choi, Jong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.425-438
    • /
    • 2012
  • In this paper, we propose an Augmented Reality (AR) system, based on computer vision, for interaction between the user's hand motion and the motion of virtual objects. Previous AR systems are inconvenient because the user has to handle a marker or a sensor such as a tracker. We solve this problem through hand motion, which is more convenient for the user, and the physically based motion of the virtual objects adds realism. The proposed system obtains geometric information from the marker and the hand. A virtual environment of a moving virtual ball and bricks is built from this geometric information; the user's hand motion is obtained from feature points extracted from the taped hand, and a virtual plane is registered stably by tracking the movement of these feature points. The virtual ball basically follows a parabolic trajectory given by a parabolic equation. When it collides with a plane or a brick, its motion is computed from the ball position and the normal vector of the plane, which can introduce errors in the ball position; we therefore correct the ball position and demonstrate the correction experimentally. We also show that the system can replace a marker-based system by comparing the jitter of the augmented virtual objects and the processing speed.
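
A hedged sketch of the physics mentioned in the abstract: a parabolic (ballistic) update of the virtual ball under gravity and reflection off a plane using the plane's normal vector, v' = v - 2 (v·n) n. The time step, restitution factor, and the push-back correction are illustrative assumptions, not the paper's exact correction scheme.

```python
# Sketch: parabolic motion of the virtual ball under gravity, and reflection
# off a plane using the plane's normal vector.
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def step_ball(position, velocity, dt, plane_point, plane_normal, restitution=0.8):
    velocity = velocity + GRAVITY * dt           # parabolic (ballistic) update
    position = position + velocity * dt
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(position - plane_point, n)
    if penetration < 0.0:                        # ball passed through the plane
        position -= penetration * n              # push it back onto the plane
        # Reflect about the plane normal; restitution damps the bounce.
        velocity = restitution * (velocity - 2.0 * np.dot(velocity, n) * n)
    return position, velocity

pos, vel = np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    pos, vel = step_ball(pos, vel, 0.02, np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(np.round(pos, 2), np.round(vel, 2))
```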