• Title/Summary/Keyword: 3D motion estimation


3D Reenactment System of Soccer Game (3차원 축구 재연 시스템)

  • 이재호;김진우;김희정
    • Journal of Broadcast Engineering
    • /
    • v.8 no.1
    • /
    • pp.54-62
    • /
    • 2003
  • This paper presents a Soccer Game 3D Reenactment System that reenacts important scenes, such as goals, using image processing and computer graphics technologies. KBS Research Institute of Technology has developed the 3D Reenactment System of Soccer Game, called 'VPlay', to provide TV viewers with fresh images in soccer games. VPlay generates reenactments of exciting and important soccer scenes by using computer graphics. VPlay extracts the regions of players from video using color information, and then computes the precise positions of players on the ground by using a global motion estimation model and a playground axis transformation model. The results are applied to a locomotion generation module that automatically generates the locomotion of virtual characters. Using a predefined motion and model library, VPlay reenacts important scenes in a quick and convenient manner. VPlay was developed for live broadcasting of soccer games, which demands rapid production time, and was used effectively during the past World Cup and Asian Games.
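A "playground axis transformation" of the kind described above can be realized as a planar homography from image coordinates to ground-plane coordinates. The sketch below estimates such a homography from four landmark correspondences via the direct linear transform; the corner coordinates are hypothetical, not taken from the paper.

```python
# Sketch: map a player's image position to ground-plane coordinates with
# a homography estimated from four known landmarks (e.g. penalty-box
# corners). The correspondences below are invented for illustration.
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform from 4+ point pairs (x, y) -> (X, Y)."""
    rows = []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                    # normalize so H[2, 2] == 1

def to_ground(H, pt):
    """Project an image point onto ground coordinates (metres)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Hypothetical image corners of a 16.5 m x 40.3 m penalty box.
img = [(100, 400), (540, 400), (600, 80), (40, 80)]
gnd = [(0, 0), (40.3, 0), (40.3, 16.5), (0, 16.5)]
H = estimate_homography(img, gnd)
print("corner:", to_ground(H, (100, 400)))
```

With the homography in hand, a player's foot point detected in the image can be carried to metric field coordinates for the locomotion module.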

3D Facial Model Expression Creation with Head Motion (얼굴 움직임이 결합된 3차원 얼굴 모델의 표정 생성)

  • Kwon, Oh-Ryun;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1012-1018
    • /
    • 2007
  • In this paper, we propose a vision-based automatic facial expression generation system for a 3D face model. Previous studies on 3D facial animation have focused on generating facial expressions while excluding the motion estimation that represents head movement, and research on facial motion estimation and on expression control has been conducted independently. The proposed expression generation system consists of face detection, facial motion estimation, and expression control. Face detection comprises two stages, face candidate region detection and face region detection: the candidate region is detected using an HT color model, and the face region is then detected from the candidate region through PCA transformation and template matching. Facial motion estimation and expression control are performed on the detected face region. Facial motion is estimated by projecting a 3D cylinder model and applying the LK (Lucas-Kanade) algorithm, and the estimated result is applied to the 3D face model; image compensation makes the motion estimation robust. To generate the expression of the face model, a feature-point-based method is applied, producing the expression from 12 facial feature points. Feature points around the eyebrows, eyes, and mouth are detected using the structural information of the face together with template matching, and are tracked with the LK algorithm. Because the positions of the tracked feature points combine facial motion information with expression information, a geometric transformation is used to obtain the animation parameters: the displacements the feature points would have if the face were frontal. The control points of the face model are moved according to the animation parameters, and the surrounding vertices are deformed by RBF interpolation. Facial expressions are generated from the deformed face model, and by applying the motion estimation results to the model, expressions of the 3D face model combined with head motion are produced.
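The RBF-based vertex deformation step can be sketched as follows: control-point displacements are propagated to neighbouring vertices through radial basis function interpolation. The Gaussian kernel and the sample points here are assumptions for illustration, not the paper's exact setup.

```python
# Minimal RBF mesh-deformation sketch: fit RBF weights to control-point
# displacements, then interpolate displacements for other vertices.
# Kernel choice (Gaussian) and coordinates are invented for the example.
import numpy as np

def rbf_deform(controls, displacements, vertices, sigma=1.0):
    """Interpolate per-vertex displacements from control-point motion."""
    phi = lambda r: np.exp(-(r / sigma) ** 2)         # Gaussian kernel
    # Pairwise distances between control points.
    d = np.linalg.norm(controls[:, None] - controls[None, :], axis=-1)
    weights = np.linalg.solve(phi(d), displacements)  # fit RBF weights
    # Distances from every vertex to every control point.
    dv = np.linalg.norm(vertices[:, None] - controls[None, :], axis=-1)
    return phi(dv) @ weights                          # interpolated motion

controls = np.array([[0.0, 0.0], [1.0, 0.0]])         # e.g. mouth corners
disp = np.array([[0.0, 0.1], [0.0, 0.1]])             # both move up by 0.1
verts = np.array([[0.5, 0.0]])                        # a midpoint vertex
print(rbf_deform(controls, disp, verts))
```

By construction the interpolant reproduces the control-point displacements exactly, while in-between vertices follow smoothly.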


Capture of Foot Motion for Real-time Virtual Wearing by Stereo Cameras (스테레오 카메라로부터 실시간 가상 착용을 위한 발동작 검출)

  • Jung, Da-Un;Yun, Yong-In;Choi, Jong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.11
    • /
    • pp.1575-1591
    • /
    • 2008
  • In this paper, we propose a new method of foot motion capture for overlaying a 3D virtual foot model in real time from stereo cameras. To overlay the virtual model at the exact position of the foot, a process of detecting the foot's joints, so that their motion can be tracked continuously, is necessary, and accurately registering the virtual foot model with the user's foot during complicated motion is the most important problem in this technology. We propose a dynamic registration method using two types of marker groups. A plane of the ground relates the virtual foot model to the user's foot and yields the foot's pose and location, while the foot's rotation is predicted from two marker groups attached along the central framework of the instep. Finally, we implemented the proposed system and evaluated the accuracy of the proposed method through various experiments.
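Registering a virtual model to observed markers is classically done by solving for a rigid transform between the two point sets. The sketch below uses the Kabsch algorithm as a generic stand-in; the paper's exact registration scheme is not specified, and the marker coordinates are made up.

```python
# Sketch: rigid registration between a model's marker layout and the
# markers observed on the user's foot, via the Kabsch algorithm.
import numpy as np

def kabsch(P, Q):
    """Find rotation R and translation t with R @ P + t ~= Q (3xN sets)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                  # covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Hypothetical marker positions on the model (columns are 3D points).
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
theta = np.pi / 6                              # rotate 30 deg about z, shift
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = Rz @ P + np.array([[0.2], [0.1], [0.0]])   # observed foot markers
R, t = kabsch(P, Q)
print(np.round(R, 3), t.ravel())
```

The recovered (R, t) is what lets the virtual foot model track the real foot's pose frame to frame.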


Stereo Object Tracking and Multiview Image Reconstruction System Using Disparity Motion Vector (시차 움직임 벡터에 기반한 스테레오 물체추적 및 다시점 영상복원 시스템)

  • Ko Jung-Hwan;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.2C
    • /
    • pp.166-174
    • /
    • 2006
  • In this paper, a new stereo object tracking system using disparity motion vectors is proposed. In the proposed method, the time-sequential disparity motion vector is estimated from the disparity vectors extracted from the sequence of stereo input image pairs; using these disparity motion vectors, the area where the target object is located and its location coordinates are detected in the input stereo image. Based on this location data, the pan/tilt unit embedded in the stereo camera system can be controlled, making stereo tracking of the target object possible. Experiments with 2 frames of stereo image pairs of 256×256 pixels show that the proposed stereo tracking system can adaptively track the target object with a low error rate of about 3.05% on average between the detected and actual location coordinates of the target object.
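The disparity vectors such a tracker consumes are typically found by block matching between the two views. A rough sketch, with arbitrary window size and search range (not the paper's parameters):

```python
# Sketch: extract a horizontal disparity for one block by SAD matching,
# the primitive a disparity-motion-vector tracker builds on.
import numpy as np

def block_disparity(left, right, y, x, block=4, max_disp=8):
    """Return the horizontal disparity of the block at (y, x) in `left`."""
    patch = left[y:y + block, x:x + block].astype(float)
    best, best_d = np.inf, 0
    for d in range(0, min(max_disp, x) + 1):   # search leftwards in `right`
        cand = right[y:y + block, x - d:x - d + block].astype(float)
        sad = np.abs(patch - cand).sum()       # sum of absolute differences
        if sad < best:
            best, best_d = sad, d
    return best_d

# Deterministic test pattern; left view is the right view shifted 5 px.
right = (np.arange(32 * 32).reshape(32, 32) * 7) % 251
left = np.roll(right, 5, axis=1)
print(block_disparity(left, right, 10, 16))
```

Tracking the change of such disparities over time yields the time-sequential disparity motion vectors used to localize the target.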

Crustal Deformation Velocities Estimated from GPS and Comparison of Plate Motion Models (GPS로 추정한 지각변동 속도 및 판 거동 모델과의 비교)

  • Song, Dong Seob;Yun, Hong Sic
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.5D
    • /
    • pp.877-884
    • /
    • 2006
  • GPS is an essential tool for applications that require high positioning precision, such as estimating the velocity field of tectonic plates. Three years of data from eight GPS permanent stations were analyzed to estimate crustal deformation velocities using the GIPSY-OASIS II software. The velocity vectors of the GPS stations were estimated by linear regression on the daily solution time series. The velocities have a standard deviation of less than 0.1 mm/yr, and the magnitudes of the velocities given by the Korean GPS permanent stations were very small, ranging from 25.1 to 31.1 mm/yr. A comparison between the final solution and other sources, such as the IGS velocity result calculated by SOPAC, was carried out, and the results generally show good agreement in both magnitude and direction of crustal motion. To evaluate the accuracy of our results, the velocities obtained from six plate motion models were compared with the final solution based on the GPS observations.
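The regression step above amounts to fitting a line to each station's daily coordinate series and reading the slope off as a velocity. A minimal sketch on a synthetic, noise-free series (the 30 mm/yr rate is invented for the example):

```python
# Sketch: station velocity from linear regression on a daily position
# time series, as in the abstract. Synthetic data, for illustration only.
import numpy as np

days = np.arange(0, 3 * 365)                 # three years of daily solutions
pos_mm = 30.0 * days / 365.25 + 12.0         # position offset series, mm

slope, intercept = np.polyfit(days, pos_mm, 1)
vel_mm_per_yr = slope * 365.25               # convert mm/day -> mm/yr
print(round(vel_mm_per_yr, 3))
```

On real daily solutions the scatter of residuals around this line is what produces the quoted sub-0.1 mm/yr velocity uncertainty.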

Estimation of Internal Motion for Quantitative Improvement of Lung Tumor in Small Animal (소동물 폐종양의 정량적 개선을 위한 내부 움직임 평가)

  • Yu, Jung-Woo;Woo, Sang-Keun;Lee, Yong-Jin;Kim, Kyeong-Min;Kim, Jin-Su;Lee, Kyo-Chul;Park, Sang-Jun;Yu, Ran-Ji;Kang, Joo-Hyun;Ji, Young-Hoon;Chung, Yong-Hyun;Kim, Byung-Il;Lim, Sang-Moo
    • Progress in Medical Physics
    • /
    • v.22 no.3
    • /
    • pp.140-147
    • /
    • 2011
  • The purpose of this study was to estimate internal motion using a molecular sieve for quantitative improvement of lung tumor imaging, and to localize lung tumors in small-animal PET images using the evaluated data. Internal motion was demonstrated in the small-animal lung region by a molecular sieve containing a radioactive substance; the molecular sieve used as the internal lung motion target contained approximately 37 kBq of Cu-64. The small-animal PET images were obtained on a Siemens Inveon scanner using an external trigger system (BioVet). SD-rat PET images were acquired for 20 min starting 60 min after injection of 37 MBq/0.2 mL of FDG via the tail vein. Each line of response in the list-mode data was converted to sinogram gated frames (2-16 bins) by the trigger signal obtained from BioVet. The sinogram data were reconstructed using 2D OSEM with 4 iterations. PET images were evaluated in terms of count, SNR, and FWHM from an ROI drawn in the target region for quantitative tumor analysis. The size of the molecular sieve motion target was 1.59×2.50 mm. The vertical and horizontal FWHM of the reference motion target were 2.91 mm and 1.43 mm, respectively. The vertical FWHM for static, 4-bin, and 8-bin imaging was 3.90 mm, 3.74 mm, and 3.16 mm, respectively; the horizontal FWHM was 2.21 mm, 2.06 mm, and 1.60 mm, respectively. The count for static, 4-bin, 8-bin, 12-bin, and 16-bin imaging was 4.10, 4.83, 5.59, 5.38, and 5.31, respectively, and the SNR was 4.18, 4.05, 4.22, 3.89, and 3.58, respectively. The FWHM improved as the number of gates increased. The count and SNR did not improve proportionally with the number of gates, but showed their highest values at specific bin numbers. We determined the optimal gate number that minimizes SNR loss while gaining improved counts when imaging lung tumors in small animals. The internal motion estimation provides localized tumor images and will be a useful method for organ motion prediction modeling without an external motion monitoring system.
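FWHM figures like those above are read off an intensity profile through the target ROI: find where the profile crosses half its maximum on each side. The Gaussian profile below is synthetic, purely to illustrate the measurement.

```python
# Sketch: FWHM of a 1-D intensity profile by locating the two
# half-maximum crossings with linear interpolation.
import numpy as np

def fwhm(xs, profile):
    """Width of the profile at half its maximum value."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    lo, hi = above[0], above[-1]
    # interpolate the rising and falling half-maximum crossings
    x_lo = np.interp(half, [profile[lo - 1], profile[lo]],
                     [xs[lo - 1], xs[lo]])
    x_hi = np.interp(half, [profile[hi + 1], profile[hi]],
                     [xs[hi + 1], xs[hi]])
    return x_hi - x_lo

xs = np.linspace(-10, 10, 2001)              # position axis, mm
sigma = 3.16 / (2 * np.sqrt(2 * np.log(2)))  # Gaussian with FWHM = 3.16 mm
profile = np.exp(-xs ** 2 / (2 * sigma ** 2))
print(round(fwhm(xs, profile), 2))
```

Sharper gating narrows the measured profile, which is exactly the FWHM improvement the study reports with increasing gate number.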

Head Pose Estimation by using Morphological Property of Disparity Map

  • Jun, Se-Woong;Park, Sung-Kee;Lee, Moon-Key
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.735-739
    • /
    • 2005
  • This paper presents a new system to estimate the head pose of a human in an interactive indoor environment with dynamic illumination changes and a large working space. The main idea of this system is a new morphological feature for estimating the head angle from a stereo disparity map. When a disparity map is obtained from a stereo camera, a matching confidence value can be derived from the correlation of the stereo images. By applying a threshold to the confidence value, we obtain the specific morphology of the disparity map, and through the analysis of this morphological property, the head pose can be estimated. The algorithm is simple and fast in comparison with algorithms that apply facial templates, 2D or 3D models, or optical flow. Our system can automatically segment the head and estimate its pose over a wide range of head motion, without the manual initialization required by optical-flow systems. Experiments yielded reliable head orientation data at real-time performance.
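One simple way to turn a thresholded disparity blob into an angle is via its second-order image moments; this is a generic stand-in for the paper's morphological feature, shown on a synthetic blob.

```python
# Sketch: orientation of a binary blob (e.g. a thresholded disparity
# region) from second-order moments. Blob and method are illustrative,
# not the paper's exact feature.
import numpy as np

def blob_angle(mask):
    """Principal-axis angle (radians) of a binary blob, in array coords."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Synthetic elongated blob along the array diagonal (45 degrees).
mask = np.zeros((64, 64), dtype=bool)
for i in range(20):
    mask[20 + i, 20 + i] = True
print(np.degrees(blob_angle(mask)))
```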


Space-Time Quantization and Motion-Aligned Reconstruction for Block-Based Compressive Video Sensing

  • Li, Ran;Liu, Hongbing;He, Wei;Ma, Xingpo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.321-340
    • /
    • 2016
  • Compressive Video Sensing (CVS) is a useful technology for wireless systems that require simple encoders but can handle more complex decoders. Its rate-distortion performance is strongly affected by the quantization of measurements and the reconstruction of video frames, which motivates us to present Space-Time Quantization (ST-Q) and Motion-Aligned Reconstruction (MA-R) in this paper to improve the performance of the CVS system. ST-Q removes the space-time redundancy in the measurement vector to reduce the number of bits required to encode a video frame, and it also guarantees a low quantization error, because the high frequency of small values close to zero in the predictive residuals limits the intensity of the quantization noise. MA-R constructs the Multi-Hypothesis (MH) matrix by selecting temporal neighbors along the motion trajectory of the current to-be-reconstructed block, improving the accuracy of prediction; in addition, it reduces the computational complexity of motion estimation through the extraction of static areas and 3-D Recursive Search (3DRS). Extensive experiments validate that ST-Q achieves significant rate-distortion improvements over existing quantization methods, and that MA-R improves both the objective and subjective quality of the reconstructed video frames. Combining ST-Q and MA-R, the CVS system obtains a significant rate-distortion performance gain over existing CS-based video codecs.
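The low-error argument above rests on residuals clustering near zero, so a quantizer spends most samples on the zero level. A generic uniform-quantizer sketch (step size and Laplacian data invented for the example, not the paper's ST-Q design):

```python
# Sketch: quantizing predictive residuals that peak at zero. Many
# samples land on the zero index, and the reconstruction error stays
# near the uniform-quantization bound of step^2 / 12.
import numpy as np

def quantize(residual, step):
    """Uniform mid-tread quantizer: index stream and reconstruction."""
    idx = np.round(residual / step).astype(int)
    return idx, idx * step

rng = np.random.default_rng(1)
residual = rng.laplace(scale=0.5, size=10000)   # residuals peaked at zero
idx, rec = quantize(residual, step=0.25)
mse = np.mean((residual - rec) ** 2)
print(np.mean(idx == 0), mse)
```

The large share of zero indices is what an entropy coder exploits to cut the bit budget.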

Real-time Implementation and Application of Pointing Region Estimation System using 3D Geometric Information in Real World (실세계 3차원 기하학 정보를 이용한 실시간 지시영역 추정 시스템의 구현 및 응용)

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Kim, Jin-Tae;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.2
    • /
    • pp.29-36
    • /
    • 2008
  • In this paper we propose a real-time method for estimating a pointing region from two camera images. In general, when a human points at something, the pointing target lies in the direction of the face. Therefore, we regard the pointing direction as the straight line connecting the face position with the fingertip position. First, the method extracts two points, in the face and fingertip regions, by detecting human skin color. Then, 3D geometric information is used to detect the pointing direction and the pointed-at region. To evaluate the performance, we built an ICIGS (Interactive Cinema Information Guiding System) with two cameras and a beam projector.
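Once the face and fingertip are triangulated in 3D, the pointed-at spot is the intersection of the face-to-fingertip ray with a target plane (e.g. a projection screen). A minimal sketch with invented coordinates:

```python
# Sketch: cast the 3D ray through the face and fingertip points and
# intersect it with a plane. All coordinates are made up for the example.
import numpy as np

def pointing_spot(face, fingertip, plane_point, plane_normal):
    """Intersect the ray face->fingertip with a plane; None if parallel."""
    d = fingertip - face                      # pointing direction
    denom = plane_normal @ d
    if abs(denom) < 1e-9:
        return None                           # ray parallel to the plane
    t = plane_normal @ (plane_point - face) / denom
    return face + t * d

face = np.array([0.0, 1.6, 0.0])              # head at 1.6 m height
tip = np.array([0.3, 1.4, 0.5])               # outstretched fingertip
spot = pointing_spot(face, tip,
                     plane_point=np.array([0.0, 0.0, 2.0]),   # screen at z=2
                     plane_normal=np.array([0.0, 0.0, 1.0]))
print(spot)
```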

Correlation Between Knee Muscle Strength and Maximal Cycling Speed Measured Using 3D Depth Camera in Virtual Reality Environment

  • Kim, Ye Jin;Jeon, Hye-seon;Park, Joo-hee;Moon, Gyeong-Ah;Wang, Yixin
    • Physical Therapy Korea
    • /
    • v.29 no.4
    • /
    • pp.262-268
    • /
    • 2022
  • Background: Virtual reality (VR) programs based on motion capture cameras are the most convenient and cost-effective approach to remote rehabilitation. Assessment of physical function is critical for providing optimal VR rehabilitation training; however, direct muscle strength measurement using camera-based kinematic data is impracticable. Therefore, it is necessary to develop a method to indirectly estimate the muscle strength of users from values obtained with a motion capture camera. Objects: The purpose of this study was to determine whether the pedaling speed, converted by the VR engine from captured foot position data in the VR environment, can be used as an indirect way to evaluate knee muscle strength, and to investigate the validity and reliability of a camera-based VR program. Methods: Thirty healthy adults were included in this study. Each subject performed a 15-second maximum pedaling test in the VR and built-in speedometer modes. In the VR speedometer mode, a motion capture camera was used to detect the position of the ankle joints and automatically calculate the pedaling speed. An isokinetic dynamometer was used to assess the isometric and isokinetic peak torques of knee flexion and extension. Results: The pedaling speeds in the VR and built-in speedometer modes showed a significantly high positive correlation (r = 0.922). In addition, the intra-rater reliability of the pedaling speed in the VR speedometer mode was good (ICC [intraclass correlation coefficient] = 0.685). Pearson correlation analysis revealed a significant moderate positive correlation between the pedaling speed in the VR speedometer mode and the peak torque of isokinetic knee flexion (r = 0.639) and extension (r = 0.598). Conclusion: This study suggests the potential benefits of measuring the maximum pedaling speed using a 3D depth camera in a VR environment as an indirect assessment of muscle strength. However, further technological improvements are needed to obtain a more accurate estimation of muscle strength from the VR cycling test.
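The validity checks reported above are Pearson correlations between paired speed measurements. A minimal sketch of the statistic; the paired samples are fabricated to illustrate a strong positive correlation like the reported r = 0.922, not the study's data.

```python
# Sketch: Pearson's r between two speed measurements of the same trials.
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

vr_speed = np.array([52.0, 61.0, 47.0, 70.0, 58.0, 64.0])   # VR mode, rpm
builtin = np.array([54.0, 60.0, 49.0, 68.0, 60.0, 66.0])    # speedometer, rpm
print(round(pearson_r(vr_speed, builtin), 3))
```

A value near 1 says the camera-derived speed tracks the reference speedometer closely, which is the study's validity argument.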