• Title/Summary/Keyword: Zoom Motion


A HIGH PRECISION CAMERA OPERATING PARAMETER MEASUREMENT SYSTEM AND ITS APPLICATION TO IMAGE MOTION INFERRING

  • Wentao-Zheng;Yoshiaki-Shishikui;Yasuaki-Kanatsugu;Yutaka-Tanaka
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.77-82
    • /
    • 1999
  • Information about camera operations such as zoom, focus, pan, tilt and tracking is useful not only for efficient video coding, but also for content-based video representation. A camera operating parameter measurement system designed specifically for these applications was therefore developed. The system runs in real time, synchronized with the video signal, and measures the camera operating parameters precisely. We calibrated the camera lens using a camera model that accounts for radial lens distortion. The system is then applied to infer image motion from the pan and tilt operating parameters. The experimental results show that the inferred motion coincides very well with the actual motion, with an error of less than 0.5 pixel even for large motion of up to 80 pixels.
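
Inferring image motion from pan and tilt readings can be pictured with a pinhole camera model: a pure rotation induces the homography H = K R K^-1 on the image plane. The sketch below is only a rough illustration of that idea, not the paper's calibrated system (which also models radial lens distortion); the focal length, principal point and angles are hypothetical values.

```python
# Illustrative sketch (not the paper's implementation): predicting the image motion
# induced by a pan/tilt rotation under a pinhole camera model. Focal length,
# principal point, and rotation angles are hypothetical example values.
import numpy as np

def rotation_pan_tilt(pan_rad: float, tilt_rad: float) -> np.ndarray:
    """Rotation matrix for a pan (yaw) followed by a tilt (pitch)."""
    cp, sp = np.cos(pan_rad), np.sin(pan_rad)
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    tilt = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return tilt @ pan

def predict_motion(points, pan_rad, tilt_rad, f_px, cx, cy):
    """Map pixel coordinates through the rotation-induced homography H = K R K^-1."""
    K = np.array([[f_px, 0, cx], [0, f_px, cy], [0, 0, 1.0]])
    H = K @ rotation_pan_tilt(pan_rad, tilt_rad) @ np.linalg.inv(K)
    pts_h = np.c_[points, np.ones(len(points))]   # homogeneous pixel coordinates
    mapped = (H @ pts_h.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]       # back to inhomogeneous pixels
    return mapped - points                        # per-point displacement (inferred motion)

if __name__ == "__main__":
    pts = np.array([[100.0, 80.0], [360.0, 240.0]])
    flow = predict_motion(pts, pan_rad=np.deg2rad(1.0), tilt_rad=np.deg2rad(0.5),
                          f_px=1200.0, cx=360.0, cy=288.0)
    print(flow)
```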

Effective Detection Techniques for Gradual Scene Changes on MPEG Video (MPEG 영상에서의 점진적 장면전환에 대한 효과적인 검출 기법)

  • 윤석중;지은석;김영로;고성제
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.8B
    • /
    • pp.1577-1585
    • /
    • 1999
  • In this paper, we propose detection methods for gradual scene changes such as dissolves, pans, and zooms. The proposed method for detecting a dissolve region uses scene features based on spatial statistics of the image; the spatial statistics used to define shot boundaries are derived from squared means within each local area. We also propose a camera motion detection method that uses four representative motion vectors in the background. The representative motion vectors are derived from macroblock motion vectors extracted directly from MPEG streams. To reduce the processing time, we use DC sequences rather than fully decoded MPEG video. In addition, to detect the gradual scene change region precisely, we use all MPEG frame types (I, P, and B frames). Simulation results show that the proposed detection methods perform better than existing methods.
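
One plausible reading of the dissolve measure, sketched below in plain NumPy, computes a per-frame statistic from the squared means of local blocks of a DC image and flags frames where the statistic dips relative to its surroundings. The block size, window and threshold are assumptions for illustration, not the paper's tuned criterion.

```python
# Illustrative sketch (assumed thresholds, not the paper's exact criterion):
# a per-frame spatial statistic from local block means of a DC image, used to
# look for the dip that typically accompanies a dissolve.
import numpy as np

def local_block_stat(dc_image: np.ndarray, block: int = 8) -> float:
    """Average of squared local means over non-overlapping blocks."""
    h, w = dc_image.shape
    means = [
        dc_image[r:r + block, c:c + block].mean()
        for r in range(0, h - block + 1, block)
        for c in range(0, w - block + 1, block)
    ]
    return float(np.mean(np.square(means)))

def dissolve_candidates(dc_sequence, window: int = 5, dip_ratio: float = 0.85):
    """Flag frames whose statistic dips well below both sides of a window (hypothetical rule)."""
    stats = np.array([local_block_stat(f) for f in dc_sequence])
    flags = []
    for i in range(window, len(stats) - window):
        sides = 0.5 * (stats[i - window] + stats[i + window])
        if stats[i] < dip_ratio * sides:
            flags.append(i)
    return flags
```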


Design of a Low-Vibration Micro-Stepping Controller for Pan-Tilt Camera (팬.틸트 카메라의 저 진동 마이크로스텝핑 제어기 설계)

  • Yoo, Jong-won;Kim, Jung-han
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.27 no.9
    • /
    • pp.43-51
    • /
    • 2010
  • Speed, accuracy and smoothness are important properties of a pan-tilt camera, and for a high-ratio zoom lens system a low-vibration characteristic is crucial in driving the pan-tilt mechanism. In this paper, a novel micro-stepping controller with a vibration-reducing function was designed using field programmable gate array (FPGA) technology for a high-zoom-ratio pan-tilt camera. The proposed variable reference current (VRC) control scheme reduces vibration considerably and optimizes the coil current so that the step motor does not miss steps. By employing the VRC control scheme, vibration at low speed could be significantly reduced. The proposed controller also achieves very high-speed micro-step driving of 378 kpps and increases the maximum acceleration in motion profiles.
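
Micro-stepping itself amounts to driving the two motor coils with quadrature sinusoidal current references sampled at the micro-step resolution; scaling their amplitude with the motion demand is one simple reading of a variable reference current scheme. The sketch below illustrates only that generic idea with hypothetical parameters, not the paper's FPGA implementation.

```python
# Illustrative sketch (hypothetical parameters, not the paper's FPGA design):
# micro-stepping current references for a two-phase step motor. The two coil
# references are quadrature sinusoids; the 'scale' factor stands in for a
# variable reference current that is adjusted to suppress vibration.
import math

def microstep_references(full_steps: int, microsteps: int, i_peak: float, scale: float = 1.0):
    """Yield (i_a, i_b) coil current references for each micro-step."""
    total = full_steps * microsteps
    for k in range(total):
        theta = (math.pi / 2) * (k / microsteps)   # 90 electrical degrees per full step
        amp = i_peak * scale                       # reduced amplitude = gentler torque ripple
        yield amp * math.cos(theta), amp * math.sin(theta)

if __name__ == "__main__":
    for i_a, i_b in list(microstep_references(full_steps=2, microsteps=4, i_peak=1.0, scale=0.7))[:8]:
        print(f"{i_a:+.3f}  {i_b:+.3f}")
```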

Block Toeplitz Matrix Inversion using Levinson Polynomials

  • Lee, Won-Cheol;Nam, Jong-Gil
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.8B
    • /
    • pp.1438-1443
    • /
    • 1999


Tracking and Interpretation of Moving Object in MPEG-2 Compressed Domain (MPEG-2 압축 영역에서 움직이는 객체의 추적 및 해석)

  • Mun, Su-Jeong;Ryu, Woon-Young;Kim, Joon-Cheol;Lee, Joon-Hoan
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.27-34
    • /
    • 2004
  • This paper proposes a method to trace and interpret a moving object based on information that can be obtained directly from an MPEG-2 compressed video stream without a decoding process. In the proposed method, the motion flow is constructed from the motion vectors included in the compressed video. We calculate the amount of pan, tilt, and zoom associated with camera operations using the generalized Hough transform. The local object motion can be extracted from the motion flow after compensation with the parameters of the global camera motion. Initially, the moving object to be traced is designated by the user via a bounding box; automatic tracking is then performed based on the accumulated motion flows according to the area contributions. Also, in order to reduce the cumulative tracking error, the object area is reshaped in the first I-frame of each GOP by matching the DCT coefficients. The proposed method improves the computation speed because the information is obtained directly from the MPEG-2 compressed video, but the object boundary is limited to macroblock rather than pixel resolution. For the same reason, the method is better suited to approximate object tracking than to accurate tracing of an object, because of the limited information available in the compressed video data.
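
The compensation step described above can be pictured with a simple four-parameter global motion model: the flow predicted by zoom and pan at each macroblock centre is subtracted from the macroblock motion vector, leaving the local object motion. The sketch below assumes that model for illustration; it is not the paper's exact formulation.

```python
# Illustrative sketch (assumed 4-parameter global model, not the paper's formulation):
# removing camera-induced flow from macroblock motion vectors to expose object motion.
import numpy as np

def global_flow(centers, zoom, pan_x, pan_y, cx, cy):
    """Flow induced at macroblock centers by a zoom about (cx, cy) plus a pan translation."""
    rel = centers - np.array([cx, cy])
    return (zoom - 1.0) * rel + np.array([pan_x, pan_y])

def object_motion(centers, motion_vectors, zoom, pan_x, pan_y, cx, cy):
    """Local (object) motion left after subtracting the estimated camera motion."""
    return motion_vectors - global_flow(centers, zoom, pan_x, pan_y, cx, cy)

if __name__ == "__main__":
    mb_centers = np.array([[8.0, 8.0], [120.0, 72.0]])
    mb_vectors = np.array([[2.0, 0.5], [5.0, 1.0]])
    print(object_motion(mb_centers, mb_vectors, zoom=1.02, pan_x=1.5, pan_y=0.0, cx=176.0, cy=144.0))
```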

Analysis of Camera Operation in MPEG2 Compressed Domain Using Generalized Hough Transform Technique (일반화된 Hough 변환기법을 이용한 MPEG2 압축영역에서의 카메라의 움직임 해석)

  • Yoo, Won-Young;Choi, Jeong-Il;Lee, Joon-Whoan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.11
    • /
    • pp.3566-3575
    • /
    • 2000
  • In this paper, we propose a simple and efficient method to estimate the camera operation by using compressed information extracted directly from an MPEG2 stream without complete decoding. In the method, the motion vectors are converted into an approximate optical flow by using the features of predicted frames, because the motion vectors in an MPEG2 video stream do not form a regular sequence. They are then used to estimate the camera operation, consisting of pan and zoom, by a Hough transform technique. The method provided better results than the least-squares method for video streams of basketball and soccer games. The proposed method has reduced computational complexity because the information is obtained directly in the compressed domain, and it can be a useful technology for content-based searching and analysis of video information. The estimated camera operation is also applicable to searching for or tracking objects in an MPEG2 video stream without decoding.
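
The least-squares baseline mentioned in the abstract can be written down directly for a pan-plus-zoom model, u = (s-1)(x-c_x) + t_x and v = (s-1)(y-c_y) + t_y. The sketch below fits that model to macroblock-derived flow vectors; the paper's generalized Hough approach instead lets each vector vote in a discretized (zoom, pan) parameter space and takes the peak, which is less sensitive to outlier vectors.

```python
# Illustrative sketch (assumed pan+zoom model): the least-squares baseline the
# abstract compares against, fitting a zoom factor and pan translation to flow vectors.
import numpy as np

def fit_pan_zoom(positions, flows, cx, cy):
    """Solve u = (s-1)(x-cx) + tx, v = (s-1)(y-cy) + ty for (s, tx, ty) by least squares."""
    rel = positions - np.array([cx, cy])
    n = len(positions)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0], A[0::2, 1] = rel[:, 0], 1.0   # u rows: (s-1)*dx + tx
    A[1::2, 0], A[1::2, 2] = rel[:, 1], 1.0   # v rows: (s-1)*dy + ty
    b[0::2], b[1::2] = flows[:, 0], flows[:, 1]
    (s_minus_1, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 + s_minus_1, tx, ty

if __name__ == "__main__":
    pos = np.array([[50.0, 40.0], [300.0, 200.0], [170.0, 120.0]])
    flw = np.array([[-2.6, -2.1], [2.4, 1.1], [-0.2, -0.5]])
    print(fit_pan_zoom(pos, flw, cx=176.0, cy=144.0))
```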


Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.277-284
    • /
    • 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from directly, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), it means that the user wants to see more detail, so the system creates more clusters for the new zoom level; every time the zoom-in level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification; the user can also go back to the previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate the system based on the results.
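
The hierarchy described above, roughly 11 clusters at the top level and twice as many at each zoom-in, can be reproduced with any fuzzy clustering routine. The sketch below uses a minimal plain-NumPy fuzzy c-means on stand-in feature vectors; the feature dimensionality, iteration count and random data are placeholders, not the paper's settings.

```python
# Illustrative sketch (plain NumPy fuzzy c-means on hypothetical data): clustering
# facial-expression frames and doubling the cluster count at each zoom level.
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # fuzzy membership of each frame
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]    # membership-weighted cluster centers
    return centers, u

if __name__ == "__main__":
    frames = np.random.default_rng(1).normal(size=(2400, 6))  # stand-in feature vectors
    for level, k in enumerate([11, 22, 44]):                   # clusters double per zoom-in
        centers, _ = fuzzy_c_means(frames, k)
        print(f"level {level}: {len(centers)} candidate key frames")
```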

Design and Implementation of a 3 DOF Robotic Lamp (3자유도 조명로봇 설계 및 구현)

  • Lee, Yun-Seok;Seo, Jong-Tae;Kim, Whee-Kuk;Yi, Byung-Ju
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.3
    • /
    • pp.216-223
    • /
    • 2010
  • Most lamp units on ceilings, walls, and streets are static, with no automatic motion capability to adjust the lamp's tilting angles or its zooming position. This paper proposes a new robotic lamp that creates three-degree-of-freedom (DOF) motion by using a spherical-type parallel mechanism with a unique forward kinematic position. In the robotic lamp, three motors are placed at the base frame to control two tilting angles and one zoom in-and-out motion for localized lighting. The kinematic model of the device is derived and a prototype has been developed. The performance of the device was verified through experiments.

Bidirectional Motion of the Windmill Type Ultrasonic Linear Motor (풍차형 초음파 선형 모터의 양방향 운동)

  • 이재형;박태곤;정영호
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.16 no.6
    • /
    • pp.484-489
    • /
    • 2003
  • In this paper, a single-phase-driven piezoelectric motor design is presented for linear motion. Two metal/ceramic composite actuators, each a piezoelectric ring bonded to a metal endcap on one side, were used as the active elements of this motor. The motor was composed of a piezoelectric ceramic, a metal ring with four arms, and a guider. Motors with 30 [mm] and 35 [mm] diameters were studied by finite element analysis and experiments. The maximum speed of the motor was obtained at the resonance frequency, and the speed increased as the applied voltage increased. Bidirectional motion was achieved by combining two motors with different resonance frequencies, but the characteristics of the two directions were not identical because of reproducibility problems in fabrication and experiment. If the present motor is used in the auto-zoom device of a camera, it will have a considerable advantage, because direct linear motion can be achieved with a simple motor structure and without a gearbox in the overall system.

Fire Detection using Color and Motion Models

  • Lee, Dae-Hyun;Lee, Sang Hwa;Byun, Taeuk;Cho, Nam Ik
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.4
    • /
    • pp.237-245
    • /
    • 2017
  • This paper presents a fire detection algorithm using color and motion models from video sequences. The proposed method detects changes in the color and motion of whole regions to detect fire, so it can be applied to both fixed and pan/tilt/zoom (PTZ) cameras. The proposed algorithm consists of three parts. The first part exploits color models of flames and smoke: candidate regions in the video frames are extracted with the hue-saturation-value (HSV) color model. The second part models the motion information of flames and smoke: optical flow in the fire candidate region is estimated, and the spatial-temporal distribution of the optical flow vectors is analyzed. The final part accumulates the probability of fire over successive video frames, which reduces false-positive errors when fire-like colored objects appear. Experimental results from 100 fire videos are shown, in which various types of smoke and flames appear in indoor and outdoor environments. According to the experiments and the comparison, the proposed fire detection algorithm works well in various situations and outperforms conventional algorithms.
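
The colour part of such a pipeline can be sketched as an HSV range test followed by a per-pixel accumulator that grows while fire-like colour persists and decays otherwise, which is the role the probability-accumulation stage plays in reducing false positives. The thresholds and gains below are illustrative assumptions, not the paper's tuned model, and the optical-flow stage is omitted.

```python
# Illustrative sketch (hypothetical HSV thresholds and gains, not the paper's model):
# flame-coloured candidate masking plus a simple per-pixel probability accumulator.
import cv2
import numpy as np

# Rough flame-like hue/saturation/value range (assumption for illustration only).
FLAME_LOW = np.array([0, 80, 150], dtype=np.uint8)
FLAME_HIGH = np.array([35, 255, 255], dtype=np.uint8)

def flame_mask(bgr_frame: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose HSV values fall in the flame-like range."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, FLAME_LOW, FLAME_HIGH) > 0

def accumulate_fire_probability(frames, gain=0.1, decay=0.05, threshold=0.7):
    """Return a boolean map of pixels whose accumulated evidence exceeds the threshold."""
    prob = None
    for frame in frames:
        mask = flame_mask(frame)
        if prob is None:
            prob = np.zeros(mask.shape, dtype=np.float32)
        # Grow evidence where the colour test fires, decay it elsewhere.
        prob = np.clip(prob + gain * mask - decay * (~mask), 0.0, 1.0)
    return prob >= threshold
```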