• Title/Summary/Keyword: Motion segmentation


An efficient method for segmentation of fast motion video (움직임이 큰 비디오에 효율적인 비디오 분할 방법)

  • Park, Min-Ho; Park, Rae-Hong
    • Annual Conference of KIPS / 2005.05a / pp.181-184 / 2005
  • Existing video segmentation methods do not segment accurately sequences with large brightness changes or large motion. This paper proposes a method that uses motion information to segment video more accurately in sequences with large motion. To this end, we propose a way to compute motion similarity from the motion vectors obtained by a block matching algorithm. We also propose a more accurate way to compute pixel difference values between consecutive frames, in which the error caused by motion blur is compensated with the motion magnitude of each block. A discontinuity value is then computed from these two pieces of information. Experiments on three action movies with heavy motion show that the proposed method detects shot boundaries more accurately than the existing method based on motion similarity and pixel difference values.

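The abstract above combines block-matching motion vectors with a motion-similarity measure. As a rough illustration (not the paper's exact formulation), the sketch below estimates per-block motion vectors by exhaustive SAD search and scores the similarity of two vector fields with mean cosine similarity; the `block`/`search` parameters and the cosine measure are assumptions.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching: find (dy, dx) so that each block of `curr`
    best matches prev[y + dy : ..., x + dx : ...] under the SAD criterion."""
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block].astype(float)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if py < 0 or px < 0 or py + block > h or px + block > w:
                        continue
                    cand = prev[py:py + block, px:px + block].astype(float)
                    sad = np.abs(target - cand).sum()  # sum of absolute differences
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def motion_similarity(mvs_a, mvs_b):
    """Mean cosine similarity of two motion-vector fields (zero vectors skipped) --
    one plausible 'motion similarity', not necessarily the paper's definition."""
    a = mvs_a.reshape(-1, 2).astype(float)
    b = mvs_b.reshape(-1, 2).astype(float)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    valid = den > 0
    return float((num[valid] / den[valid]).mean()) if valid.any() else 1.0
```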

A study on the moving image segmentation (Moving image segmentation에 관한 연구)

  • Lee, Won-Hee; Byun, Cha-Eung; Kim, Jae-Young; Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 1996.07b / pp.1347-1349 / 1996
  • Most real image sequences contain multiple moving objects or multiple motions. In this paper, we segment the moving objects using optical flow. Motion estimation by this method can estimate and compress image sequences better than other methods such as block matching. In particular, new image sequences can be made by synthesizing the segmented objects. However, the motion estimation takes too much time, and the method is not easy to implement in hardware.

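Optical-flow-based segmentation of the kind described above can be illustrated with a minimal dense Lucas-Kanade solver followed by a magnitude threshold. This is a generic sketch, not the authors' implementation; the window size and threshold are arbitrary choices.

```python
import numpy as np

def lucas_kanade(prev, curr, win=5):
    """Dense Lucas-Kanade optical flow over a square window (minimal sketch)."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)          # spatial gradients (axis 0 = y, axis 1 = x)
    It = curr - prev                    # temporal derivative
    half = win // 2
    h, w = prev.shape
    flow = np.zeros((h, w, 2))          # per-pixel (u, v) = (x-flow, y-flow)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            if np.linalg.det(ATA) > 1e-6:   # skip flat, ill-conditioned windows
                flow[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return flow

def segment_moving(flow, thresh=0.5):
    """Binary mask of pixels whose flow magnitude exceeds a threshold."""
    return np.linalg.norm(flow, axis=2) > thresh
```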

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
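The SVD factorization step can be illustrated in its classic orthographic (Tomasi-Kanade) form; the paper uses the paraperspective variant, which normalizes the measurement matrix differently, but the rank-3 decomposition is the same idea. A minimal sketch:

```python
import numpy as np

def factorize(tracks):
    """Rank-3 factorization of centered feature tracks into motion and shape.

    tracks: (F, P, 2) array of P feature positions over F frames."""
    F, P, _ = tracks.shape
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)  # (2F, P)
    W = W - W.mean(axis=1, keepdims=True)    # register to per-frame centroids
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # (2F, 3) motion (camera rows)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # (3, P) shape, up to affine ambiguity
    return M, S
```

M and S are determined only up to an invertible 3x3 transform; recovering a metric shape requires the additional orthonormality constraints described in the factorization literature.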

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun; Jo, Gang-Hyeon; Jeon, Hui-Seong; Choe, Won-Ho; Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems / v.7 no.4 / pp.309-318 / 2001
  • In this paper, we describe methods that analyze human gestures. A human interface (HI) system for analyzing gestures extracts the head and hand regions after capturing image sequences of an operator's continuous behavior using CCD cameras. Since gestures are performed with the operator's head and hand motions, we extract the head and hand regions to analyze gestures and calculate geometrical information of the extracted skin regions. The analysis of head motion is possible by obtaining the face direction. We assume that the head is an ellipsoid in 3D coordinates and locate facial features such as the eyes, nose, and mouth on its surface. If we know the center of the feature points, the angle of that center on the ellipsoid is the direction of the face. The hand region obtained from preprocessing may include arms as well as hands. To extract only the hand region, we find the wrist line that divides the hand and arm regions. After distinguishing the hand region by the wrist line, we model it as an ellipse for the analysis of hand data; the finger part is represented as a long and narrow shape. We extract hand information such as size, position, and shape.

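Modeling a segmented region as an ellipse, as the hand analysis above does, is commonly done with second-order image moments. A generic sketch (not the paper's exact procedure):

```python
import numpy as np

def ellipse_from_mask(mask):
    """Approximate a binary region by an ellipse using second-order moments:
    the covariance of the pixel coordinates gives the axes and orientation."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    cov = np.cov(np.stack([xs - cx, ys - cy]))   # 2x2 spatial covariance
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    major, minor = 2 * np.sqrt(evals[1]), 2 * np.sqrt(evals[0])
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis direction
    return (cx, cy), (major, minor), angle
```

An elongated region (such as a finger) shows up as a large major/minor axis ratio, which matches the "long and narrow shape" description above.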

Acoustic Characteristics of 'Short Rushes of Speech' using Alternate Motion Rates in Patients with Parkinson's Disease (파킨슨병 환자의 교대운동속도 과제에서 관찰된 '말 뭉침'의 음향학적 특성)

  • Kim, Sun Woo; Yoon, Ji Hye; Lee, Seung Jin
    • Phonetics and Speech Sciences / v.7 no.2 / pp.55-62 / 2015
  • It is widely accepted that Parkinson's disease (PD) is the most common cause of hypokinetic dysarthria, and its characteristic 'short rushes of speech' becomes more evident as the severity of the motor disorder increases. Speech alternate motion rates (AMRs) are particularly useful for observing not only rate abnormalities but also deviant speech. However, relatively little is known about the characteristics of 'short rushes of speech' in AMRs of PD beyond their perceptual characteristics. The purpose of this study was to examine which acoustic features of 'short rushes of speech' in AMR tasks are robust indicators of Parkinsonian speech. Syllabic repetitions (/pə/, /tə/, /kə/) in AMR tasks from 9 patients with PD were analyzed acoustically by observing spectrograms in the Computerized Speech Lab. Acoustically, we found three characteristics of 'short rushes of speech': 1) vocalized consonants without closure duration (VC), 76.3%; 2) no consonant segmentation (NC), 18.6%; 3) no vowel formant frequency (NV), 5.1%. Based on these results, 'short rushes of speech' may reflect a failure to reach and maintain phonatory targets. To best achieve therapeutic goals and make treatment most efficacious, it is important to incorporate training methods based on both phonation and articulation.

Real-time 3D multi-pedestrian detection and tracking using 3D LiDAR point cloud for mobile robot

  • Ki-In Na; Byungjae Park
    • ETRI Journal / v.45 no.5 / pp.836-846 / 2023
  • Mobile robots are used in modern life; however, object recognition is still insufficient to realize robot navigation in crowded environments. Mobile robots must rapidly and accurately recognize the movements and shapes of pedestrians to navigate safely in pedestrian-rich spaces. This study proposes real-time, accurate, three-dimensional (3D) multi-pedestrian detection and tracking using a 3D light detection and ranging (LiDAR) point cloud in crowded environments. The pedestrian detection quickly segments a sparse 3D point cloud into individual pedestrians using a lightweight convolutional autoencoder and a connected-component algorithm. The multi-pedestrian tracking identifies the same pedestrians across successive frames by considering motion and appearance cues. In addition, it estimates pedestrians' dynamic movements with various patterns by adaptively mixing heterogeneous motion models. We evaluate the computational speed and accuracy of each module using the KITTI dataset. We demonstrate that our integrated system, which rapidly and accurately recognizes pedestrian movement and appearance using a sparse 3D LiDAR, is applicable to robot navigation in crowded spaces.
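The connected-component step can be sketched as a breadth-first search over an occupancy voxel grid. This stand-in omits the convolutional autoencoder the paper uses for filtering, and the voxel size is an assumption:

```python
import numpy as np
from collections import deque

def cluster_points(points, voxel=0.3):
    """Label 3D points by connected components of occupied voxels
    (26-connectivity); points in touching voxels share a cluster label."""
    keys = [tuple(k) for k in np.floor(np.asarray(points) / voxel).astype(int)]
    occupied = {}
    for i, k in enumerate(keys):
        occupied.setdefault(k, []).append(i)
    labels = np.full(len(keys), -1, dtype=int)
    n_clusters = 0
    for seed in occupied:
        if labels[occupied[seed][0]] != -1:
            continue                      # voxel already swept into a cluster
        queue = deque([seed])
        while queue:
            v = queue.popleft()
            if labels[occupied[v][0]] != -1:
                continue
            for i in occupied[v]:
                labels[i] = n_clusters
            # expand to the 26 neighboring voxels
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (v[0] + dx, v[1] + dy, v[2] + dz)
                        if n in occupied and labels[occupied[n][0]] == -1:
                            queue.append(n)
        n_clusters += 1
    return labels
```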

A Mode Selection Algorithm using Scene Segmentation for Multi-view Video Coding (객체 분할 기법을 이용한 다시점 영상 부호화에서의 예측 모드 선택 기법)

  • Lee, Seo-Young; Shin, Kwang-Mu; Chung, Ki-Dong
    • Journal of KIISE:Information Networking / v.36 no.3 / pp.198-203 / 2009
  • With the growing demand for multimedia services and advances in display technology, new applications for 3-D scene communication have emerged. While the multi-view video of these emerging applications may provide users with a more realistic scene experience, the drastic increase in bandwidth is a major problem to solve. In this paper, we propose a fast prediction-mode decision algorithm that can significantly reduce the complexity and time consumption of the encoding process. It is based on object segmentation, which can effectively identify fast-moving foreground objects. Since a foreground object with fast motion is more likely to be encoded in the view-directional prediction mode, we can properly limit motion-compensated coding in such cases. As a result, the proposed algorithm saved up to 45% of encoding time on average without much loss in image-sequence quality.
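The mode pre-selection idea can be caricatured as a per-block rule: fast-moving foreground blocks are steered toward inter-view prediction while the rest keep temporal prediction. The threshold and mode names below are hypothetical, not the paper's:

```python
import numpy as np

def mode_map(foreground, mvs, thresh=8.0):
    """Per-block candidate prediction mode: fast-moving foreground blocks try
    inter-view (view-directional) prediction first, the rest stay temporal."""
    mag = np.linalg.norm(np.asarray(mvs, dtype=float), axis=-1)
    return np.where(foreground & (mag > thresh), "inter-view", "temporal")
```

The point of such a pre-decision is that the encoder can skip the full rate-distortion search over the excluded mode, which is where the reported time saving comes from.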

Frame-rate Up-conversion using Hierarchical Adaptive Search and Bi-directional Motion Estimation (계층적 적응적 탐색과 양방향 움직임 예측을 이용한 프레임율 증가 방법)

  • Min, Kyung-Yeon; Park, Sea-Nae; Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.28-36 / 2009
  • In this paper, we propose a frame-rate up-conversion method for temporal quality enhancement. The proposed method adaptively changes the search range during hierarchical motion estimation and reconstructs hole regions using the proposed bi-directional prediction and linear interpolation. To alleviate errors due to inaccurate motion vector estimation, the search range is adaptively changed based on reliability, and for higher accuracy, motion estimation is performed in descending order of block variance. After segmentation into background and object regions, hole regions are filled: pixel values in background regions are reconstructed using linear interpolation, and those in object regions are compensated by the proposed bi-directional prediction. The proposed algorithm is evaluated in terms of PSNR against the original uncompressed sequences. Experimental results show that the proposed algorithm outperforms conventional methods by around 2 dB, and blocky and blur artifacts are significantly diminished.
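Bi-directional motion-compensated interpolation of a middle frame can be sketched as follows, assuming one motion vector per block pointing from the previous to the next frame; hole filling and the hierarchical search are omitted:

```python
import numpy as np

def interpolate_midframe(prev, nxt, mvs, block=8):
    """Interpolate the frame halfway between `prev` and `nxt`: each mid-frame
    block averages the two half-displaced blocks along its motion trajectory."""
    h, w = prev.shape
    out = np.zeros((h, w))
    for by in range(h // block):
        for bx in range(w // block):
            dy, dx = int(mvs[by, bx, 0]), int(mvs[by, bx, 1])  # motion prev -> nxt
            y, x = by * block, bx * block
            hy, hx = dy // 2, dx // 2
            # sample prev half a step backwards and nxt half a step forwards
            py = np.clip(y - hy, 0, h - block)
            px = np.clip(x - hx, 0, w - block)
            ny = np.clip(y + (dy - hy), 0, h - block)
            nx_ = np.clip(x + (dx - hx), 0, w - block)
            out[y:y + block, x:x + block] = 0.5 * (
                prev[py:py + block, px:px + block]
                + nxt[ny:ny + block, nx_:nx_ + block])
    return out
```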

Hardware Implementation of a Fast Inter Prediction Engine for MPEG-4 AVC (MPEG-4 AVC를 위한 고속 인터 예측기의 하드웨어 구현)

  • Lim Young hun; Lee Dae joon; Jeong Yong jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.102-111 / 2005
  • In this paper, we propose an advanced hardware architecture for the fast inter prediction engine of the video coding standard MPEG-4 AVC. We describe the algorithm and derive the hardware architecture, emphasizing real-time operation of the quarter-pel based motion estimation. The fast inter prediction engine is composed of block segmentation, motion estimation, motion compensation, and a fast quarter-pel calculator. The proposed architecture has been verified on an ARM-interfaced emulation board using Excalibur & Virtex2 FPGAs, and also by synthesis on Samsung 0.18 um CMOS technology. The synthesis result shows that the proposed hardware can operate at 62.5 MHz; in this case, it can process about 88 QCIF video frames per second. The hardware is being used as a core module in implementing a complete MPEG-4 AVC video encoder chip for real-time multimedia applications.
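The reported figures are easy to sanity-check: a QCIF frame (176x144 pixels) holds 11 x 9 = 99 macroblocks of 16x16, so 62.5 MHz at 88 frames/s leaves roughly 7.2k clock cycles per macroblock:

```python
# Back-of-envelope check of the reported throughput.
CLOCK_HZ = 62.5e6                          # synthesized clock frequency
FPS = 88                                   # reported QCIF frame rate
MB_PER_FRAME = (176 // 16) * (144 // 16)   # QCIF: 11 x 9 = 99 macroblocks

cycles_per_frame = CLOCK_HZ / FPS
cycles_per_mb = cycles_per_frame / MB_PER_FRAME   # ~7174 cycles per macroblock
```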

Robust object tracking using projected motion and histogram intersection (투영된 모션과 히스토그램 인터섹션을 이용한 강건한 물체추적)

  • Lee, Bong-Seok; Moon, Young-Shik
    • The KIPS Transactions:PartB / v.9B no.1 / pp.99-104 / 2002
  • Existing methods of object tracking use template matching, re-detection of object boundaries, or motion information. The template matching method requires very long computation time; re-detection of object boundaries may produce false edges; and methods using motion information show poor tracking performance with a moving camera. In this paper, a robust object tracking algorithm is proposed using projected motion and histogram intersection. The initial object image is constructed by selecting regions of interest after image segmentation. From the selected object, the approximate displacement of the object is computed using 1-dimensional intensity projections in the horizontal and vertical directions. Based on the estimated displacement, various template masks are constructed for possible orientations and scales of the object. The best template is selected using a modified histogram intersection method. The robustness of the proposed tracking algorithm has been verified by experimental results.
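The two building blocks above, 1-D projection matching and histogram intersection, can be sketched generically; the exhaustive shift search and its `max_shift` range are assumptions, and the paper's modified intersection measure is replaced by the standard one:

```python
import numpy as np

def projection_shift(prev, curr, max_shift=5):
    """Estimate (dy, dx) displacement by matching the row-sum and column-sum
    intensity projections of two frames over a small shift range."""
    def best_shift(a, b):
        errs = [np.abs(a - np.roll(b, s)).sum()
                for s in range(-max_shift, max_shift + 1)]
        return int(np.argmin(errs)) - max_shift
    rows_p, cols_p = prev.sum(axis=1), prev.sum(axis=0)
    rows_c, cols_c = curr.sum(axis=1), curr.sum(axis=0)
    return best_shift(rows_c, rows_p), best_shift(cols_c, cols_p)

def hist_intersection(h1, h2):
    """Standard normalized histogram intersection similarity in [0, 1]."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())
```

In a tracker along these lines, the projection step gives a cheap displacement estimate, and the intersection score then ranks candidate templates around that displacement.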