• Title/Summary/Keyword: object motion


Moving Object Detection using Clausius Entropy and Adaptive Gaussian Mixture Model (클라우지우스 엔트로피와 적응적 가우시안 혼합 모델을 이용한 움직임 객체 검출)

  • Park, Jong-Hyun;Lee, Gee-Sang;Toan, Nguyen Dinh;Cho, Wan-Hyun;Park, Soon-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.1
    • /
    • pp.22-29
    • /
    • 2010
  • Real-time detection and tracking of moving objects in video sequences is very important for smart surveillance systems. In this paper, we propose a novel algorithm for detecting moving objects, namely an entropy-based adaptive Gaussian mixture model (AGMM). First, an increase in entropy generally means an increase in complexity, and objects in unstable conditions cause higher entropy variations. Applying these properties to motion segmentation, pixels whose entropy changes sharply over short time intervals have a higher chance of belonging to moving objects. We therefore apply the Clausius entropy theory to convert pixel values in the image domain into amounts of energy change in an entropy domain. Second, we use an adaptive background subtraction method, which models the entropy variations of the background as a mixture of Gaussians, to detect the moving objects. Experimental results demonstrate that our method can detect moving objects effectively and reliably.
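
A minimal sketch of the general idea, assuming a toy log-scaled temporal "energy change" measure in place of the paper's exact Clausius-entropy formulation; the input file name, kernel size, and MOG2 parameters are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                         detectShadows=False)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is None:
        prev = gray
        continue
    # Proxy for the "amount of energy change": log-scaled temporal change,
    # so large fluctuations dominate (entropy-like weighting).
    energy = np.log1p(np.abs(gray - prev))
    prev = gray
    # Model the energy-change map (not raw intensity) with a Gaussian mixture.
    energy_u8 = cv2.normalize(energy, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    mask = mog.apply(energy_u8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    cv2.imshow("moving objects", mask)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```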

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min;Chang Chu-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.247-256
    • /
    • 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with the problem of extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by a monocular camera, such as movies, documentaries, and dramas. Depths of the blocks are extracted from the resulting block motions through the following two steps: (i) calculation of global parameters concerning camera translation and focal length using the locations of the blocks and their motions; (ii) calculation of each block's depth relative to the average image depth using the global parameters and the location and motion of the block. Both singular and non-singular cases are tested with various video sequences. The resulting relative depths and ego-motion object shapes are virtually consistent with human vision.
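
A minimal sketch of block-wise relative depth from motion, assuming a purely translating camera so that a block's flow magnitude is inversely related to its depth; the paper's global-parameter estimation is replaced here by a simple median "camera motion" for illustration, and the function name and block size are assumptions.

```python
import cv2
import numpy as np

def block_relative_depth(frame_a, frame_b, block=16):
    g1 = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g1.shape
    depths = np.zeros((h // block, w // block), np.float32)
    # crude stand-in for the global camera parameters: median flow of the image
    cam = np.median(flow.reshape(-1, 2), axis=0)
    for by in range(h // block):
        for bx in range(w // block):
            v = flow[by*block:(by+1)*block, bx*block:(bx+1)*block].reshape(-1, 2)
            residual = np.linalg.norm(v.mean(axis=0) - cam)
            # larger residual motion -> closer block -> smaller relative depth
            depths[by, bx] = 1.0 / (residual + 1e-3)
    return depths / depths.mean()   # depth relative to the average image depth
```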

Development of Robot Simulator for Palletizing Operation Management S/W and Fast Algorithm for 'PLP' (PLP 를 위한 Fast Algorithm 과 팔레타이징 작업 제어 S/W 를 위한 로봇 시뮬레이터 개발)

  • Lim, Sung-Jin;Kang, Maing-Kyu;Han, Chang-Soo;Song, Young-Hoon;Kim, Sung-Rak;Han, Jeong-Su;Yu, Seung-Nam
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.31 no.5
    • /
    • pp.609-616
    • /
    • 2007
  • Palletizing is necessary to promote the efficiency of storage and shipping tasks. These are, however, some of the most monotonous, heavy, and laborious tasks in the factory. Many types of robot palletizing systems have therefore been developed, but many robot motion commands still depend on the teaching pendant; that is, an operator inputs the motion command lines one by one. This is very troublesome and, above all, requires the user to know how to write the code. To cope with this issue, we propose a new GUI (Graphic User Interface) palletizing system with a 'PLP' (Pallet Loading Problem) algorithm, the Fast Algorithm, and a 3D auto-patterning visualization interface. Finally, we propose a robot palletizing simulator. Internally, the simulator works as follows. First, the user inputs the physical information of the object. Second, the simulator calculates the optimal pattern for the object and visualizes the result. Finally, the calculated position data of the object are passed to the robot simulator. To develop the robot simulator, we use an articulated robot and analyze its kinematics and dynamics. In particular, all problems, including those with thousands of boxes, were solved in less than one second and yielded optimal solutions with the Fast Algorithm.
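
A toy greedy row-by-row layout for identical boxes on a rectangular pallet, shown only to illustrate the kind of placement pattern such a simulator would visualize; it is not the paper's Fast Algorithm, and the function name and dimensions are illustrative.

```python
def greedy_pallet_layout(pallet_w, pallet_l, box_w, box_l):
    """Return (x, y, rotated) placements filling the pallet row by row."""
    placements = []
    y = 0.0
    while y + box_l <= pallet_l:
        x = 0.0
        while x + box_w <= pallet_w:
            placements.append((x, y, False))
            x += box_w
        # try one rotated box in the leftover strip of this row
        if pallet_w - x >= box_l and box_w <= box_l:
            placements.append((x, y, True))
        y += box_l
    return placements

if __name__ == "__main__":
    layout = greedy_pallet_layout(1200, 1000, 300, 200)   # mm, illustrative sizes
    print(f"{len(layout)} boxes placed")
```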

Improved changed region detection and motion estimation for object-oriented coding (객체기반 부호화에서의 개선된 움직임 영역 추출 및 추정 기법)

  • 정의윤;박영식;송근원;한규필;하영호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.9
    • /
    • pp.2043-2052
    • /
    • 1997
  • The object-oriented coding technique, one of the coding methods for very low bit rate environments, is suitable for videophone image sequences. The selection of the source model affects the image analysis. In this paper, an image analysis method for object-oriented coding is presented. The process is composed of changed-region detection and motion estimation. First, we use the standard deviation of the frame difference as a threshold to extract the moving area: if the sum of gray values in a mask is greater than the threshold, the center pixel of the mask is regarded as belonging to the moving region. After the moving region is detected within the changed region by an edge operator, observation points are determined from the moving region. The motion is then estimated by a 6-parameter mapping method using the determined observation points. The experimental results show that the proposed method can significantly improve the image quality.
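
A minimal sketch of the two analysis steps, assuming frame differencing thresholded by its standard deviation and a 6-parameter (affine) motion model fitted with OpenCV on tracked observation points; the feature settings and morphology kernel are illustrative, not the paper's exact procedure.

```python
import cv2
import numpy as np

def analyze(prev_gray, cur_gray):
    diff = cv2.absdiff(cur_gray, prev_gray).astype(np.float32)
    # changed-region detection: threshold the difference by its standard deviation
    changed = (diff > diff.std()).astype(np.uint8) * 255
    changed = cv2.morphologyEx(changed, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    # pick observation points inside the changed region and track them
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5, mask=changed)
    if pts is None:
        return changed, None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    # 6-parameter mapping: a full 2D affine transform (a11..a23)
    A, _ = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)
    return changed, A
```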


A Mode Selection Algorithm using Scene Segmentation for Multi-view Video Coding (객체 분할 기법을 이용한 다시점 영상 부호화에서의 예측 모드 선택 기법)

  • Lee, Seo-Young;Shin, Kwang-Mu;Chung, Ki-Dong
    • Journal of KIISE:Information Networking
    • /
    • v.36 no.3
    • /
    • pp.198-203
    • /
    • 2009
  • With the growing demand for multimedia services and advances in display technology, new applications for 3D scene communication have emerged. While the multi-view video used by these emerging applications may provide users with a more realistic scene experience, the drastic increase in bandwidth is a major problem to solve. In this paper, we propose a fast prediction mode decision algorithm which can significantly reduce the complexity and time consumption of the encoding process. It is based on object segmentation, which can effectively identify fast-moving foreground objects. Since a foreground object with fast motion is more likely to be encoded in the view-directional prediction mode, we can properly limit the motion-compensated temporal coding in such cases. As a result, the proposed algorithm reduced encoding time by up to 45% on average without much loss in the quality of the image sequence.
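
A minimal sketch of the mode-selection idea, assuming a precomputed binary foreground mask and a per-pixel motion-magnitude map; the "VIEW" / "TEMPORAL+VIEW" labels, macroblock size, and speed threshold are illustrative, not the codec's actual mode set.

```python
import numpy as np

def select_modes(fg_mask, motion_mag, mb=16, speed_thresh=4.0):
    """Return a per-macroblock grid of candidate prediction directions."""
    h, w = fg_mask.shape
    modes = np.empty((h // mb, w // mb), dtype=object)
    for by in range(h // mb):
        for bx in range(w // mb):
            fg = fg_mask[by*mb:(by+1)*mb, bx*mb:(bx+1)*mb].mean() > 0.5
            fast = motion_mag[by*mb:(by+1)*mb, bx*mb:(bx+1)*mb].mean() > speed_thresh
            # fast-moving foreground blocks: try only view-directional prediction,
            # skipping the temporal motion search to save encoding time
            modes[by, bx] = "VIEW" if (fg and fast) else "TEMPORAL+VIEW"
    return modes
```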

Analysis of Particle Motion in Quadrupole Dielectrophoretic Trap with Emphasis on Its Dynamics Properties (사중극자 유전영동 트랩에서의 입자의 동특성에 관한 연구)

  • Chandrasekaran, Nichith;Yi, Eunhui;Park, Jae Hyun
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.38 no.10
    • /
    • pp.845-851
    • /
    • 2014
  • Dielectrophoresis (DEP) is defined as the motion of particles suspended in a solvent resulting from polarization forces induced by an inhomogeneous electric field. DEP has been utilized for various biological applications such as trapping, sorting, and separation of cells, viruses, and nanoparticles. However, analyses of DEP trapping have mostly employed the period-averaged ponderomotive force, while the dynamic features of DEP trapping have attracted little attention because the target objects are relatively large. Such an approach is not appropriate for nanoscale analysis, in which the size of the object is considerably small. In this study, we thoroughly investigate the dynamic response of trapping to various system parameters and its influence on trapping stability. The effects of particle conductivity on particle motion are also examined.
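
A minimal sketch of the standard point-dipole DEP model, assuming the usual time-averaged force with the Clausius-Mossotti factor and Stokes drag; the caller-supplied field-gradient function grad_E2_at and all material values are illustrative placeholders, not the paper's system.

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, omega):
    ep = eps_p * EPS0 - 1j * sig_p / omega    # complex particle permittivity
    em = eps_m * EPS0 - 1j * sig_m / omega    # complex medium permittivity
    return (ep - em) / (ep + 2 * em)

def dep_force(r, eps_m, cm_real, grad_E2):
    # time-averaged DEP force on a sphere of radius r (point-dipole approximation):
    # F = 2*pi*eps_m*r^3 * Re[K] * grad(|E_rms|^2)
    return 2 * np.pi * eps_m * EPS0 * r**3 * cm_real * grad_E2

def step_particle(x, v, r, eta, m, eps_m, cm_real, grad_E2_at, dt):
    # Newtonian dynamics with Stokes drag, integrated with explicit Euler
    f = dep_force(r, eps_m, cm_real, grad_E2_at(x)) - 6 * np.pi * eta * r * v
    v = v + dt * f / m
    return x + dt * v, v
```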

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential techniques for 2D/3D video conversion to objects grouped by depth information. One of the problems in converting 2D images to 3D images with techniques that track the motion of pixels is that objects which do not move between adjacent frames yield no depth information. This problem can be solved by applying a relative height cue only to the objects that have no motion information between frames, after splitting the background and objects and extracting depth information from the motion vectors of the objects. With this technique, both the background and every object obtain their own depth information. The proposed method is used to generate a depth map for synthesizing 3D images with DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also received depth information.
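
A minimal sketch of the differential depth assignment described above, assuming dense optical flow supplies depth for moving regions and a relative height cue (lower rows treated as closer) fills the static regions; the motion threshold, scaling, and the larger-value-means-closer convention are illustrative.

```python
import cv2
import numpy as np

def depth_map(prev_bgr, cur_bgr, motion_thresh=0.5):
    g1 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape

    depth = np.zeros((h, w), np.float32)
    moving = mag > motion_thresh
    # moving objects: depth from motion magnitude (faster -> closer -> larger value)
    if moving.any():
        depth[moving] = mag[moving] / mag[moving].max()
    # static background/objects: relative height cue, lower in the frame = closer
    rows = np.linspace(0.0, 1.0, h, dtype=np.float32)[:, None]
    depth[~moving] = np.repeat(rows, w, axis=1)[~moving]
    return (depth * 255).astype(np.uint8)   # 8-bit map, e.g. as input to DIBR
```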

A Study on Frame Interpolation and Nonlinear Moving Vector Estimation Using GRNN (GRNN 알고리즘을 이용한 비선형적 움직임 벡터 추정 및 프레임 보간연구)

  • Lee, Seung-Joo;Bang, Min-Suk;Yun, Kee-Bang;Kim, Ki-Doo
    • Journal of IKEEE
    • /
    • v.17 no.4
    • /
    • pp.459-468
    • /
    • 2013
  • To handle the nonlinear characteristics of frames, we propose frame interpolation using a GRNN to enhance visual picture quality. Using a full search with block sizes from 128x128 down to 1x1 to reduce blocking artifacts and image overlay, we select the frame whose block has the minimum error and re-estimate the nonlinear motion vector using the GRNN. We compare our scheme with forward (backward) motion compensation and bidirectional motion compensation when the object movement is large, when the image includes zoom-in and zoom-out, or when the camera focus has changed. Experimental results show that the proposed method provides better subjective image quality than conventional MCFI methods.
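
A minimal sketch of a generalized regression neural network (GRNN) regressing a motion vector from neighbouring block motions; the kernel-weighted-average form is the standard GRNN estimate, while the training pairs and smoothing parameter sigma are illustrative, not the paper's settings.

```python
import numpy as np

def grnn_predict(train_x, train_y, query, sigma=1.0):
    """GRNN estimate: Gaussian-kernel-weighted average of the training targets."""
    d2 = np.sum((train_x - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w[:, None] * train_y).sum(axis=0) / (w.sum() + 1e-12)

if __name__ == "__main__":
    # block centers (x, y) and their estimated motion vectors (dx, dy)
    centers = np.array([[16, 16], [48, 16], [16, 48], [48, 48]], float)
    motions = np.array([[2.0, 0.5], [2.5, 0.4], [1.0, 1.5], [1.2, 1.8]])
    print(grnn_predict(centers, motions, np.array([32.0, 32.0]), sigma=20.0))
```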

Perception of heading direction in dynamic random-dot and real-image motions (역동적인 무선점 및 실제영상 운동에서 관찰자의 진행 방향 지각)

  • 오창영;정찬섭;김정훈
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.3
    • /
    • pp.67-75
    • /
    • 1999
  • We investigated whether humans can perceive heading direction from optic flow generated from random dots and real images simulating the motion of the observer and objects. When an object moved across the focus of expansion (FOE) in the random-dot simulation, observers perceived the FOE as biased toward the motion direction of the object, supporting the hypothesis that direction repulsion occurs between the expansional and the horizontal planar motion components. With the real-image display, observers tended to perceive their heading direction as biased toward the center of the scene regardless of the direction and position of the moving objects. It was also observed that the deeper the background was, the larger the judgment error became. These results suggest that humans likely depend on cues other than optic flow when perceiving or judging heading direction in real environments.
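
For context, a minimal sketch of how the FOE is recovered from an optic-flow field under pure observer translation: each flow vector lies on a line through the FOE, so the FOE follows from a least-squares intersection; the synthetic input is illustrative and unrelated to the study's stimuli.

```python
import numpy as np

def estimate_foe(points, flows):
    """points, flows: (N, 2) arrays; returns the least-squares FOE (x, y)."""
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)       # normals to flow vectors
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    # each flow line constrains n_i . foe = n_i . p_i
    b = np.sum(n * points, axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

if __name__ == "__main__":
    foe_true = np.array([160.0, 120.0])
    pts = np.random.rand(100, 2) * [320, 240]
    flows = pts - foe_true                     # radial expansion away from the FOE
    print(estimate_foe(pts, flows))            # approximately [160, 120]
```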


3D Motion of Objects in an Image Using Vanishing Points (소실점을 이용한 2차원 영상의 물체 변환)

  • 김대원;이동훈;정순기
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.11
    • /
    • pp.621-628
    • /
    • 2003
  • This paper addresses a method of enabling objects in an image to exhibit apparent 3D motion. Many researchers have approached this issue by reconstructing a 3D model from several images using image-based modeling techniques, or by building a cube-modeled scene from camera calibration using vanishing points. This paper, however, presents the possibility of image-based motion without exact 3D information about scene geometry or camera calibration. The proposed system considers the image plane as a projective plane with respect to a viewpoint and models a 2D frame of a projected 3D object using only lines and points. A modeled frame then refers to its vanishing points as local coordinates when it is transformed.
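
A minimal sketch of the basic operation behind this approach: computing a vanishing point as the intersection of two image lines in homogeneous coordinates; the endpoint values are illustrative, not taken from the paper.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two homogeneous lines; returns (x, y) or None if parallel."""
    v = np.cross(line_a, line_b)
    if abs(v[2]) < 1e-9:
        return None                      # lines meet at infinity in the image
    return v[0] / v[2], v[1] / v[2]

if __name__ == "__main__":
    # two edges of a box that are parallel in 3D but converge in the image
    l1 = line_through((100, 400), (300, 300))
    l2 = line_through((120, 500), (320, 360))
    print(vanishing_point(l1, l2))
```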