• Title/Summary/Keyword: Motion Technique

Dragging Body Parts in 3D Space to Direct Animated Characters (3차원 공간 상의 신체 부위 드래깅을 통한 캐릭터 애니메이션 제어)

  • Lee, Kang Hoon;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.2
    • /
    • pp.11-20
    • /
    • 2015
  • We present a new interactive technique for directing the motion sequences of an animated character by dragging a specific body part to a desired location in a three-dimensional virtual environment via a hand motion tracking device. The motion sequences of our character are synthesized by reordering subsequences of captured motion data based on a well-known graph representation. For each new input location, our system samples the space of possible future states by unrolling the graph into a spatial search tree, and retrieves one of the states at which the dragged body part of the character comes closest to the input location. We minimize the difference between each pair of successively retrieved states, so that the user can anticipate which states will be found by varying the input location and, as a result, quickly reach the desired states. The usefulness of our method is demonstrated through experiments with breakdance, boxing, and basketball motion data.
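The unrolled-tree lookup described in the abstract can be sketched as follows; the motion graph, edge positions, search depth, and continuity penalty are all hypothetical stand-ins for the paper's captured-motion graph:

```python
# Hypothetical motion graph: node -> list of (next_node, body_part_position)
# edges, where each position is the 3-D location of the dragged body part
# (say, the right hand) at the end of that motion subsequence.
GRAPH = {
    0: [(1, (0.0, 1.0, 0.0)), (2, (0.5, 1.2, 0.1))],
    1: [(0, (0.2, 0.9, 0.0)), (2, (0.6, 1.1, 0.2))],
    2: [(0, (0.1, 1.0, 0.1)), (1, (0.4, 1.3, 0.0))],
}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def search_best_state(start, target, depth=3, prev_choice=None, penalty=0.1):
    """Unroll the graph into a depth-limited search tree and return the
    first edge of the path whose end position lies closest to the target.
    A small penalty for switching away from the previous choice keeps
    successive retrievals similar, as the abstract describes."""
    frontier = [(start, None, None)]          # (node, first_edge, position)
    for _ in range(depth):
        frontier = [(succ, first or (node, succ), pos)
                    for node, first, _ in frontier
                    for succ, pos in GRAPH[node]]
    best_cost, best_edge = None, None
    for _, first, pos in frontier:
        cost = dist2(pos, target)
        if prev_choice is not None and first != prev_choice:
            cost += penalty                   # continuity with the last retrieval
        if best_cost is None or cost < best_cost:
            best_cost, best_edge = cost, first
    return best_edge

print(search_best_state(0, target=(0.6, 1.1, 0.2)))  # → (0, 2)
```

The returned first edge tells the character which transition to take now; re-running the search as the drag target moves yields the interactive behavior.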

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: correspondences between feature points detected in the images are established, and depth is estimated from the motion of those feature points. Approaches that rely on motion vectors suffer from occlusion or missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is described by a model integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then computed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the width of the blur. The shape and motion are estimated from the motion of the feature points by sequential SVD factorization, which decomposes the measurement matrix into orthogonal factors. Experiments on sequences of real and synthetic images compare the presented method with plain depth from lens translation, and the results demonstrate the validity and applicability of the proposed method to depth estimation.
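The factorization step can be illustrated with a batch Tomasi-Kanade-style rank-3 SVD on synthetic orthographic feature tracks; the paper's sequential variant updates the factors frame by frame, and the data sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic measurement matrix: P points seen by F orthographic views,
# stacked as 2 rows (x, y) per view -> W is 2F x P and has rank <= 3.
F, P = 5, 8
shape_true = rng.standard_normal((3, P))
W = np.vstack([rng.standard_normal((2, 3)) @ shape_true for _ in range(F)])

# Tomasi-Kanade-style step: center W and split a rank-3 SVD into a
# motion factor M (cameras) and a shape factor S (3-D points), which
# are determined up to an affine ambiguity.
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])            # 2F x 3 motion matrix
S = np.sqrt(s[:3])[:, None] * Vt[:3]     # 3 x P shape matrix

print(np.allclose(M @ S, W0, atol=1e-8))  # → True: rank-3 model fits exactly
```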


Parametric Imaging with Respiratory Motion Correction for Contrast-Enhanced Ultrasonography (조영증강 초음파 진단에서 호흡에 의한 흔들림을 보정한 파라미터 영상 생성 기법)

  • Kim, Ho-Joon;Cho, Yun-Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.2
    • /
    • pp.69-76
    • /
    • 2020
  • In this paper, we introduce a method to visualize the contrast diffusion patterns and the dynamic vascular patterns in a contrast-enhanced ultrasound (CEUS) image sequence. We present an imaging technique that visualizes parameters such as contrast arrival time, peak intensity time, and contrast decay time in contrast-enhanced ultrasound data. The contrast flow pattern and its velocity are important for characterizing focal liver lesions, and we propose a method for representing the contrast diffusion patterns as an image. In such methods, respiratory motion may degrade the accuracy of the parametric images; we therefore present a respiratory motion tracking technique that uses dynamic weights and a momentum factor with respect to the respiration cycle. Through experiments using 72 CEUS data sets, we show that the proposed method overcomes the limitations of naked-eye analysis and improves the reliability of the parametric images by compensating for respiratory motion in contrast-enhanced ultrasonography.
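A minimal sketch of the per-pixel parametric maps (arrival time and time to peak) on synthetic time-intensity curves; the curve model and threshold rule are assumptions for illustration, and the paper's respiratory compensation is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic time-intensity curves: T frames of an H x W contrast sequence.
# Each pixel's curve ramps up after a hypothetical arrival frame and then
# decays exponentially (a stand-in for real CEUS wash-in/wash-out).
T, H, W = 60, 4, 4
t = np.arange(T, dtype=float)
arrival = rng.integers(5, 20, size=(H, W))     # true arrival frame per pixel
x = np.clip(t[:, None, None] - arrival, 0, None)
curves = x * np.exp(-0.1 * x)                  # rise then decay

# Parametric maps: arrival time is the first frame exceeding a baseline
# threshold; time to peak is the frame of maximum intensity.
baseline = curves[:5].mean(axis=0)
arrival_map = (curves > baseline + 0.5).argmax(axis=0)
peak_map = curves.argmax(axis=0)

print(np.array_equal(arrival_map, arrival + 1))  # → True (one frame after onset)
print(np.array_equal(peak_map, arrival + 10))    # → True (x*e^(-0.1x) peaks at x=10)
```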

Interframe Coding of 3-D Medical Image Using Warping Prediction (Warping을 이용한 움직임 보상을 통한 3차원 의료 영상의 압축)

  • So, Yun-Sung;Cho, Hyun-Duck;Kim, Jong-Hyo;Ra, Jong-Beom
    • Journal of Biomedical Engineering Research
    • /
    • v.18 no.3
    • /
    • pp.223-231
    • /
    • 1997
  • In this paper, an interframe coding method for volumetric medical images is proposed. By treating interslice variations as the motion of bones or tissues, we use motion compensation (MC) to predict the current frame from the previous frame. Instead of a block matching algorithm (BMA), the most common motion estimation (ME) algorithm in video coding, image warping with a bilinear transformation is used to predict the complex interslice object variation in medical images. When an object disappears between slices, however, warping prediction performs poorly. To overcome this drawback, an overlapped block motion compensation (OBMC) technique is combined with warping prediction. The motion-compensated residual images are then encoded using an embedded zerotree wavelet (EZW) coder, slightly modified for consistent quality of the reconstructed images. The experimental results show that interframe coding using warping prediction outperforms intraframe coding, and that the OBMC scheme gives an additional improvement over the warping-only MC method.
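The bilinear warping prediction can be sketched as follows; the corner-mapping convention and the nearest-sample fetch are simplifications of the paper's method, and the frame data is made up:

```python
import numpy as np

def bilinear_warp(prev, corners_src, n):
    """Predict an n x n block by bilinear warping: the block's four corners
    (tl, tr, bl, br) map to arbitrary points in the previous frame, and
    interior sample positions are interpolated from the corners."""
    u = np.linspace(0.0, 1.0, n)
    uu, vv = np.meshgrid(u, u)                  # normalized block coordinates
    c = np.asarray(corners_src, dtype=float)    # 4 x 2 array of (x, y) corners
    w = np.stack([(1 - uu) * (1 - vv), uu * (1 - vv),
                  (1 - uu) * vv, uu * vv])      # bilinear corner weights
    x = np.tensordot(c[:, 0], w, axes=1)        # warped x sample positions
    y = np.tensordot(c[:, 1], w, axes=1)        # warped y sample positions
    xi = np.clip(x.round().astype(int), 0, prev.shape[1] - 1)
    yi = np.clip(y.round().astype(int), 0, prev.shape[0] - 1)
    return prev[yi, xi]                         # nearest-sample fetch for brevity

prev = np.arange(64, dtype=float).reshape(8, 8)
# Identity mapping: corners map onto themselves, so the block is reproduced.
pred = bilinear_warp(prev, [(0, 0), (7, 0), (0, 7), (7, 7)], 8)
print(np.array_equal(pred, prev))  # → True
```

Moving the four corners independently lets one quadrilateral patch model the non-rigid interslice deformation that a rigid block match cannot.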


A Study on the Change of Waist Pattern by Upper Limb Motion -By the Method of Tight Fitting Technique- (상지동작에 따른 길의 변화에 관한 연구 -입체재단법을 중심으로-)

  • 이은정;박정순
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.20 no.1
    • /
    • pp.113-127
    • /
    • 1996
  • In this study, patterns were copied by the tight fitting technique from the body as changed by upper limb motions: front-vertical motion, side-vertical motion, and horizontal motion. The study then analyzed the changes in the pattern and the dimensional changes of the observed items. The results are as follows: 1. Observing the degree of pattern change according to upper limb motion, the change is largest in the range of $135^{\circ}$ to $180^{\circ}$ for front-vertical motion, $45^{\circ}$ to $90^{\circ}$ for side-vertical motion, and $0^{\circ}$ to $45^{\circ}$ for horizontal motion. 2. The significance tests show that the motions are related more to the horizontal width items than to the vertical length items in both the front and back patterns, with a stronger effect on the back pattern than on the front; the upper-limb-surrounding items are more strongly related than any other item. 3. The pattern changes with motion show a decrease of the neck width and the shoulder length, a rise of the shoulder point and armpit point, a decrease of the pattern width, and an increase of the pattern length, and these changes grow as the angle of vertical motion increases. The change of the shoulder length in horizontal motion is smaller than in vertical motion, but as the angle of horizontal motion grows, the width of the front pattern and the length of the pattern tend to decrease, whereas the width of the back pattern increases noticeably.


Efficient Algorithms for Motion Parameter Estimation in Object-Oriented Analysis-Synthesis Coding (객체지향 분석-합성 부호화를 위한 효율적 움직임 파라미터 추정 알고리듬)

  • Lee Chang Bum;Park Rae-Hong
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.653-660
    • /
    • 2004
  • Object-oriented analysis-synthesis coding (OOASC) subdivides each image of a sequence into a number of moving objects and estimates and compensates the motion of each object. It employs a motion parameter technique for estimating the motion information of each object; this technique, which employs gradient operators, requires a high computational load. The main objective of this paper is to present efficient motion parameter estimation techniques using a hierarchical structure in OOASC. To this end, this paper proposes two algorithms: the hybrid motion parameter estimation method (HMPEM) and the adaptive motion parameter estimation method (AMPEM), both using the hierarchical structure. HMPEM uses the proposed hierarchical structure, in which six or eight motion parameters are estimated by a parameter verification process in a low-resolution image whose size is one fourth of that of the original image. AMPEM uses the same hierarchical structure with a motion detection criterion that measures the amount of motion based on temporal co-occurrence matrices, for adaptive estimation of the motion parameters. The method is fast and easily implemented using parallel processing techniques. Theoretical analysis and computer simulation show that the peak signal-to-noise ratio (PSNR) of images reconstructed by the proposed method lies between those of images reconstructed by the conventional 6- and 8-parameter estimation methods, with the computational load reduced by a factor of about four.
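The 6-parameter (affine) motion model underlying such estimators can be fit by linear least squares; this sketch uses made-up point correspondences rather than the paper's gradient-based hierarchical estimation:

```python
import numpy as np

def estimate_affine(src, dst):
    """Fit the 6 affine motion parameters mapping src -> dst points by
    linear least squares:
        x' = a1*x + a2*y + a3,    y' = a4*x + a5*y + a6
    """
    src = np.asarray(src, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3 design matrix
    params, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return params                                  # 3 x 2: columns give x', y'

# Made-up correspondences: a 90-degree rotation plus a (2, 3) translation.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (2, 4), (1, 3), (1, 4)]
P = estimate_affine(src, dst)
print(np.allclose(P[:, 0], [0, -1, 2]) and np.allclose(P[:, 1], [1, 0, 3]))  # → True
```

The 8-parameter case adds the two perspective terms and is solved the same way; running the fit on a quarter-size image, as HMPEM does, shrinks the least-squares system accordingly.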

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper, we introduce a technique for retargeting human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions such as holding an object by hand or avoiding obstacles. In addition, we assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, it is difficult to preserve the context of the interactions shown in the original motion data if we use a retargeting technique that considers only the change of body shape. Our approach separates the problem into two smaller problems and solves them independently: retargeting the motion data to a new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we deform the shape of the environment model to match the humanoid motion so that the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environment model, and restore the environment model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using the Boston Dynamics Atlas robot. We expect that our method can help with the humanoid motion tracking problem in the future.
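The two-step idea (deform the environment to meet the retargeted pose, then restore it while keeping the recorded spatial constraint) can be illustrated with a single hypothetical grasp contact; all positions are invented, and the paper would re-solve the whole pose rather than move one point:

```python
import numpy as np

# Hypothetical single-contact example: the hand grasps a handle in the
# original motion; positions are made-up 3-D points.
orig_hand = np.array([0.3, 1.2, 0.0])
orig_handle = np.array([0.3, 1.1, 0.0])
grasp_offset = orig_hand - orig_handle        # spatial constraint recorded up front

# Hand position after retargeting the skeleton while ignoring the environment.
retarget_hand = np.array([0.25, 1.05, 0.02])

# Step 1: deform the environment so the handle meets the retargeted hand,
# reproducing the original interaction on the new body.
deformed_handle = retarget_hand - grasp_offset

# Step 2: restore the environment to its original shape and pull the hand
# back onto the handle through the recorded constraint.
restored_handle = orig_handle
final_hand = restored_handle + grasp_offset

print(final_hand)
```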

A Content-Based Motion Adaptive DeInterlacing Technique (콘텐츠 기반 움직임 적응형 디인터레이싱 기법)

  • Kim, Min-Hwan;Lee, Chang-Woo;Lee, Seong-Won
    • Journal of Broadcast Engineering
    • /
    • v.15 no.6
    • /
    • pp.791-802
    • /
    • 2010
  • The recent prevalence of progressive scan displays such as LCD TVs demands quality improvements in the deinterlacing techniques that convert interlaced scan images, including HDTV broadcasts, to progressive scan images. In this paper, we propose a motion adaptive deinterlacing technique that selects among spatial, temporal, and spatio-temporal methods based on threshold values calculated from the statistics of motion in the video content. We also propose an improved spatial deinterlacing technique that adaptively uses M-ELA and DOI depending on the slant of edges obtained by the Sobel operation. The improved picture quality of the proposed algorithm is confirmed by objective and subjective quality tests on many test image sequences.
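The spatial branch can be illustrated with plain ELA (edge-based line averaging), a simpler relative of the M-ELA method named above; the field lines here are made up:

```python
import numpy as np

def ela_interpolate(above, below):
    """Edge-based line averaging: for each missing pixel, average the
    neighbor pair (left-diagonal, vertical, or right-diagonal) with the
    smallest absolute difference, i.e. follow the local edge slant."""
    w = len(above)
    out = np.empty(w)
    for x in range(w):
        best = None
        for d in (-1, 0, 1):                     # candidate edge directions
            if 0 <= x + d < w and 0 <= x - d < w:
                diff = abs(above[x + d] - below[x - d])
                if best is None or diff < best[0]:
                    best = (diff, (above[x + d] + below[x - d]) / 2.0)
        out[x] = best[1]
    return out

# A diagonal edge: the interpolated missing line should follow the slant.
above = np.array([0.0, 0.0, 0.0, 90.0, 90.0])
below = np.array([0.0, 90.0, 90.0, 90.0, 90.0])
print(ela_interpolate(above, below))
```

The interpolated line places its edge between those of the lines above and below, which simple vertical averaging would blur instead.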

Representing Human Motions in an Eigenspace Based on Surrounding Cameras

  • Houman, Satoshi;Rahman, M. Masudur;Tan, Joo Kooi;Ishikawa, Seiji
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2004.08a
    • /
    • pp.1808-1813
    • /
    • 2004
  • Recognition of human motions from their 2-D images has various applications. In this paper, an eigenspace method is employed for representing and recognizing human motions. An eigenspace is created from images taken by multiple cameras that surround a human in motion. The image streams obtained from the cameras form the same number of curved lines in the eigenspace, and these are used for recognizing a human motion in a video image. The performance of the proposed technique is shown experimentally.
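The eigenspace construction is essentially PCA on the pooled camera images; a sketch with random stand-in data (real input would be the surrounding cameras' image streams):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: 3 surrounding cameras x 20 frames, each a flattened
# 16x16 image (random here, silhouette images in practice).
frames = rng.standard_normal((3 * 20, 256))

# Build the eigenspace: subtract the mean image and keep the top-k right
# singular vectors (the eigenimages) of the centered data matrix.
mean = frames.mean(axis=0)
U, s, Vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = Vt[:3]                          # 3 eigenimages spanning the eigenspace

# Each camera's image stream traces a curve of points in the eigenspace;
# a new observation is matched by its distance to these curve points.
curves = (frames - mean) @ basis.T      # 60 x 3 eigenspace coordinates
query = frames[7]                       # pretend this frame is the observation
d = np.linalg.norm(curves - (query - mean) @ basis.T, axis=1)
print(int(d.argmin()))                  # → 7 (nearest curve point is itself)
```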


The Rotational Motion Stabilization Using Simple Estimation of the Rotation Center and Angle

  • Seok, Ho-Dong;Kim, Do-Jong;Lyou, Joon
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2003.10a
    • /
    • pp.231-236
    • /
    • 2003
  • This paper presents a simple approach to rotational motion estimation and correction for roll stabilization of a sight system. The algorithm first computes the rotation center from selected local velocity vectors of related pixels by the least squares method. The rotation angle is then found from a special subset of the motion vectors. Finally, motion correction is performed by the nearest neighbor interpolation technique. To show the performance of the algorithm, evaluations on synthetic and real images were performed. The test results show good performance compared with a previous approach.
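The least-squares rotation-center step follows from each velocity being perpendicular to its radius vector; a sketch with a synthetic velocity field (the pixel positions and angular rate are invented):

```python
import numpy as np

def rotation_center(points, velocities):
    """Least-squares rotation center: under pure rotation each velocity v
    is perpendicular to (p - c), so v . (p - c) = 0 gives one linear
    equation  v_x*c_x + v_y*c_y = v . p  per pixel."""
    P = np.asarray(points, float)
    V = np.asarray(velocities, float)
    b = (V * P).sum(axis=1)                 # right-hand sides v . p
    c, *_ = np.linalg.lstsq(V, b, rcond=None)
    return c

# Synthetic check: velocity field of points rotating about (3, 2).
center = np.array([3.0, 2.0])
pts = np.array([[5.0, 2.0], [3.0, 6.0], [0.0, 0.0], [4.0, 4.0]])
r = pts - center
vel = 0.2 * np.column_stack([-r[:, 1], r[:, 0]])   # v = omega x (p - c)
print(np.allclose(rotation_center(pts, vel), center))  # → True
```

With the center known, the rotation angle can be read off any single motion vector, which matches the paper's use of a small subset of vectors for the angle.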
