• Title/Summary/Keyword: Motion Extraction (모션 추출)

Classification of K-POP Dance Motion Using Multilinear PCA (다선형 PCA를 이용한 K-POP 댄스모션 분류)

  • Lee, Jae-Neung;Kwak, Keun-Chang
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.486-487 / 2018
  • In this paper, we propose a Kinect-sensor-based dance motion classification method using multilinear PCA (Principal Component Analysis). To classify dance motions, the Kinect depth images and binary images are first resized to a common size through interpolation. Next, features are extracted from the sequences of dance motion images using multilinear principal component analysis (MPCA), and the classes are separated with a Euclidean classifier. The database used in the experiments was acquired from four professional dancers with a Kinect sensor: 100 K-POP songs were selected and, with two point-choreography motions per song, a database of 200 point dance motions was constructed. Experimental results show that the proposed method achieves an accuracy of 89.5%.
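The classification pipeline summarized in this abstract (size-normalized Kinect frames → MPCA features → Euclidean classifier) can be approximated as below. This is a minimal sketch, assuming each clip is a NumPy array of shape (frames, height, width) and using a single mode-wise eigendecomposition pass as a stand-in for the full iterative MPCA; the subspace sizes and function names are illustrative, not taken from the paper.

```python
import numpy as np

def mode_unfold(tensor, mode):
    """Unfold a 3-D clip tensor along one mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fit_mode_projections(clips, keep=(20, 16, 16)):
    """clips: array of shape (N, T, H, W); returns one projection matrix per mode."""
    projections = []
    for mode, k in enumerate(keep):
        dim = clips.shape[mode + 1]
        scatter = np.zeros((dim, dim))
        for clip in clips:
            unfolded = mode_unfold(clip, mode)
            scatter += unfolded @ unfolded.T              # mode-wise scatter matrix
        eigvals, eigvecs = np.linalg.eigh(scatter)
        projections.append(eigvecs[:, np.argsort(eigvals)[::-1][:k]])
    return projections

def mpca_features(clip, projections):
    """Project one clip onto the mode subspaces and flatten into a feature vector."""
    out = clip
    for mode, u in enumerate(projections):
        out = np.moveaxis(np.tensordot(u.T, out, axes=([1], [mode])), 0, mode)
    return out.ravel()

def classify(feature, centroids):
    """Euclidean classifier: pick the class whose mean feature vector is closest."""
    labels = list(centroids)
    dists = [np.linalg.norm(feature - centroids[c]) for c in labels]
    return labels[int(np.argmin(dists))]
```

The class centroids would simply be the mean MPCA feature vector of each point-choreography class in the training set.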

Storing and Retrieving Motion Capture Data based on Motion Capture Markup Language and Fuzzy Search (MCML 기반 모션캡처 데이터 저장 및 퍼지 기반 모션 검색 기법)

  • Lee, Sung-Joo;Chung, Hyun-Sook
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.2 / pp.270-275 / 2007
  • Motion capture technology is widely used for producing animation since it yields high-quality character motion similar to the actual motion of the human body. However, motion capture has a significant weakness: there is no industry-wide standard for archiving and retrieving motion capture data. In this paper, we propose a framework to integrate, store, and retrieve heterogeneous motion capture data files effectively. We define a standard format for integrating different motion capture file formats, called MCML (Motion Capture Markup Language), a markup language based on XML (eXtensible Markup Language). The purpose of MCML is not only to facilitate the conversion or integration of different formats, but also to allow greater reusability of motion capture data through the construction of a motion database that stores MCML documents. We also propose a fuzzy string searching method that retrieves MCML documents containing strings approximately matched with keywords; the method can retrieve desired series of frames within MCML documents rather than only entire documents.
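As a rough illustration of the retrieval side, the sketch below runs an approximate keyword match over XML motion-capture documents using only the Python standard library. The `frame` element and its `label`/`annotation` fields are assumed names for illustration; the actual MCML schema and the paper's fuzzy matching rules may differ.

```python
import difflib
import xml.etree.ElementTree as ET

def fuzzy_match(text, keyword, threshold=0.8):
    """Approximate string match using a similarity ratio in [0, 1]."""
    return difflib.SequenceMatcher(None, text.lower(), keyword.lower()).ratio() >= threshold

def search_frames(mcml_path, keyword, threshold=0.8):
    """Return frames whose annotation text approximately matches the keyword."""
    tree = ET.parse(mcml_path)
    hits = []
    for frame in tree.iter("frame"):                 # assumed MCML element name
        label = (frame.get("label") or "") + " " + (frame.findtext("annotation") or "")
        if fuzzy_match(label.strip(), keyword, threshold):
            hits.append(frame)
    return hits
```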

Emotion Communication through MotionTypography Based on Movement Analysis (모션타이포그래피의 움직임을 통한 감성전달)

  • Son, Min-Jeong;Lee, Hyun-Ju
    • Journal of Digital Contents Society / v.12 no.4 / pp.541-550 / 2011
  • Motion typography is crucial to effective emotional communication in the digital society. In this paper, we study movements that represent emotion through motion typography, with two goals: first, to define an emotional measure based on emotion assessments by the public; second, to identify the image characteristics corresponding to movements. We collect emotional words from the literature and from experimental surveys and extract representative emotional words using the KJ method and clustering analysis. As a result, the emotional axes selected for motion typography are 'calm to active' and 'soft to stiff', and viewers feel a specific emotional state from certain movements of motion typography. If the relationship between motion-typographic visual elements and emotional words is investigated as well, the results can serve as a guideline that helps the public easily produce motion typography.

Presentation control of the computer using the motion identification rules (모션 식별 룰을 이용한 컴퓨터의 프레젠테이션 제어)

  • Lee, Sang-yong;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.586-589 / 2015
  • A computer presentation control system based on hand-motion identification rules is proposed. To identify the presenter's hand motions, the face region is first extracted using a Haar classifier. The motion status (pattern) and position of the hands are then determined from the centers of gravity of the user's face and hands after segmenting the hand regions in the YCbCr color space. The detected hand motion is matched against the motion identification rules, and the corresponding presentation control command is executed. The proposed system controls the presentation using only these rules, without additional equipment, and does not depend on the complexity of the background. Experiments confirmed stable control operation at dark indoor illumination levels of 15, 20, and 30 lx.
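A minimal sketch of the pipeline in this abstract, using OpenCV: Haar face detection, skin segmentation in the YCrCb color space, and a toy rule that compares the hand centroid with the face centroid. The skin-color bounds and the NEXT/PREV rule are illustrative assumptions, not the paper's actual identification rules.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def centroid(mask):
    """Center of gravity of a binary mask, or None if the mask is empty."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def classify_gesture(frame_bgr):
    """Return 'NEXT', 'PREV', or None based on hand position relative to the face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_cx = x + w / 2.0

    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb,
                       np.array((0, 133, 77), np.uint8),    # assumed Cr/Cb skin bounds
                       np.array((255, 173, 127), np.uint8))
    skin[y:y + h, x:x + w] = 0                               # drop the face region
    hand_c = centroid(skin)
    if hand_c is None:
        return None
    # Toy identification rule: hand clearly right/left of the face -> slide command.
    if hand_c[0] > face_cx + w:
        return "NEXT"
    if hand_c[0] < face_cx - w:
        return "PREV"
    return None
```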

Implementation of Virtual Reality Immersion System using Motion Vectors (모션벡터를 이용한 가상현실 체험 시스템의 구현)

  • 서정만;정순기
    • Journal of the Korea Society of Computer and Information / v.8 no.3 / pp.87-93 / 2003
  • The purpose of this research is to develop a virtual reality system that enables users to actually experience virtual reality through human vision. The three-step search (TSS) algorithm was applied to trace motion in the video: multiple motion vectors were calculated from the moving pictures, and the camera's motion parameters were then obtained from the relationships among those motion vectors. To create the immersive experience by synchronizing the camera's acceleration with the simulator's movements, the relationship between the camera's acceleration values and the simulator's movements was analyzed and the result was used for neural network training. The proposed virtual reality immersion system is shown to control the motion of the moving pictures dynamically and to drive the simulator in close agreement with the actual motion in the pictures.
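A minimal sketch of three-step search (TSS) block matching between two grayscale frames, which is the kind of motion-vector estimation the abstract refers to. The block size and step schedule are the usual textbook choices, assumed here rather than taken from the paper.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def tss_block(prev, curr, by, bx, block=16, step=4):
    """Three-step search for the block whose top-left corner is (by, bx) in curr."""
    h, w = prev.shape
    ref = curr[by:by + block, bx:bx + block]
    cy, cx = by, bx
    while step >= 1:
        best = (sad(ref, prev[cy:cy + block, cx:cx + block]), cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(ref, prev[y:y + block, x:x + block])
                    if cost < best[0]:
                        best = (cost, y, x)
        cy, cx = best[1], best[2]
        step //= 2                                # 4 -> 2 -> 1, then stop
    return cy - by, cx - bx                       # motion vector (dy, dx)

def motion_field(prev, curr, block=16):
    """Grid of TSS motion vectors over the whole frame."""
    return [[tss_block(prev, curr, y, x, block)
             for x in range(0, curr.shape[1] - block + 1, block)]
            for y in range(0, curr.shape[0] - block + 1, block)]
```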

Improved Extraction of Representative Motion Vector Using Background Information in Digital Cinema Environment (디지털 시네마 환경에서 배경정보를 이용한 대표 움직임 정보 추출)

  • Park, Il-Cheol;Kwon, Goo-Rak
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.731-736 / 2012
  • Digital cinema has been attracting growing interest in recent years. The combination of visually immersive 3D movies with chair movements and other physical effects adds to the enjoyment, but in current digital cinemas the movement of the chair is controlled manually. By analyzing the cinema's video sequences, the chair movement can be controlled automatically. In the proposed method, the motion of the focused object and of the background is identified first, and motion vector information is then extracted using a search range of 9. The representative motion vector is determined only from the background motion while the object is stationary, and the extracted motion information is used to control the movement of the chair. Experimental results show that the proposed method outperforms existing methods in terms of accuracy.
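One simple way to obtain a representative background motion vector from per-block vectors (e.g., the output of a block-matching search such as the TSS sketch above) is a masked median, as sketched below. The foreground masking strategy is an assumption; the paper's object/background separation may differ.

```python
import numpy as np

def representative_vector(vectors, foreground_mask):
    """vectors: (rows, cols, 2) block motion vectors; foreground_mask: (rows, cols) bool."""
    background = vectors[~foreground_mask]        # keep only background blocks
    if background.size == 0:
        return np.zeros(2)
    return np.median(background, axis=0)          # median is robust to outlier blocks
```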

Analysis of causal factors and physical reactions according to visually induced motion sickness (시각적으로 유발되는 어지럼증(VIMS)에 따른 신체적 반응 및 유발 요인 분석)

  • Lee, Chae-Won;Choi, Min-Kook;Kim, Kyu-Sung;Lee, Sang-Chul
    • Journal of the HCI Society of Korea / v.9 no.1 / pp.11-21 / 2014
  • We present an experimental framework for analyzing the physical reactions and causal factors of Visually Induced Motion Sickness (VIMS) using electroencephalography (EEG) signals and vital signs. Eleven subjects voluntarily participated in the experiments, and online and offline surveys were conducted. To simulate videos containing global motions that could cause motion sickness, we extracted global motions with an optical flow estimation method from hand-held video recordings containing intense motion, and applied the extracted global motions to our test videos of action movies and texts. Each video genre includes three levels of motion depending on its intensity. EEG signals and vital signs were measured in real time by a portable EEG device and an electronic manometer while the subjects watched the videos, including those with the extracted motions. We analyze the EEG signals using a Distance Map (DM) calculated from the correlations among the EEG channels, and also analyze the vital signs and survey results to obtain the relationship between VIMS and its causal factors. As a result, we clustered the subjects into three groups based on the DM analysis of the physical reactions and the correlation between vital signs and survey results, which shows a strong relationship between VIMS and the intensity of the motions.
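A minimal sketch of a correlation-based distance map over EEG channels: compute pairwise Pearson correlations and convert them to distances. The channel count and the 1 − |r| distance definition are assumptions for illustration.

```python
import numpy as np

def distance_map(eeg):
    """eeg: (channels, samples) array -> (channels, channels) distance matrix."""
    corr = np.corrcoef(eeg)                  # pairwise Pearson correlations
    return 1.0 - np.abs(corr)                # strongly correlated channels -> small distance

# Example: 8 channels, 10 s recorded at 256 Hz (synthetic data).
dm = distance_map(np.random.randn(8, 2560))
```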

Technical Trend of Motion Capture (모션캡처 기술 동향)

  • Lee, Min-Gi;Park, Seong-Gyu;Park, Geun-Pyo;Yang, Seon-U;Lee, Beom-Ryeol
    • Electronics and Telecommunications Trends / v.22 no.4 s.106 / pp.35-42 / 2007
  • Motion capture technology, used in digital content production since 1990, has been continuously adopted for its advantages of reducing production costs and improving the realism of content. As digital content has recently become higher in quality, realism has grown much more important, and the use of motion capture to animate digital actors realistically has therefore continued to increase. This article classifies motion capture techniques by the way motion data are acquired, explains the strengths and weaknesses of each, describes commercialization trends driven mainly by industry, and suggests directions for future development.

Passing Vehicle Detection using Local Binary Pattern Histogram (국부이진패턴 히스토그램을 이용한 측면 차량 검출)

  • Kang, Hyung-Sub;Cho, Dong-Chan;Ko, Kyung-Woo;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.260-263 / 2010
  • In this paper, we propose a method for detecting vehicles that appear partially at the side of images captured by a forward-facing camera mounted on a moving vehicle. Previous work detects passing vehicles by exploiting the difference between the motion vectors of the surrounding background and those of the observed vehicle; however, motion-vector-based approaches have difficulty detecting stationary vehicles or vehicles approaching from the front. To solve this problem, this paper proposes a method that does not use motion vectors but instead extracts feature information from the side view of a vehicle and detects passing vehicles with an SVM classifier. Local binary patterns, which are robust to changes in image brightness, are used to describe the vehicle's side view, and a histogram of local binary patterns is used so that the side of a vehicle can be found regardless of its position within the ROI. Experimental results show that the proposed method detects passing vehicles, including stationary ones, with an accuracy of 88.5%.
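A minimal sketch of the LBP-histogram + SVM pipeline described above, using scikit-image and scikit-learn. The LBP parameters (8 neighbours, radius 1, uniform patterns) and the linear kernel are common defaults assumed here, not values reported in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                                    # neighbours and radius for LBP

def lbp_histogram(gray_patch):
    """Normalised histogram of uniform LBP codes for one grayscale image patch."""
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_detector(patches, labels):
    """patches: list of grayscale arrays; labels: 1 = vehicle side, 0 = background."""
    features = np.array([lbp_histogram(p) for p in patches])
    return SVC(kernel="linear").fit(features, labels)

def is_vehicle_side(classifier, patch):
    return bool(classifier.predict([lbp_histogram(patch)])[0])
```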

Real-Time Multiple Action Recognition on Video using Motion Gradient Histogram (동영상에서 MGH을 이용한 실시간 다수 동작 인식)

  • Kim Tae-Hyoung;Byun Hye-Ran
    • Proceedings of the Korean Information Science Society Conference / 2006.06b / pp.325-327 / 2006
  • This paper proposes a method for detecting and recognizing the actions of multiple objects in video in real time using the Motion Gradient Histogram (MGH). Detection and recognition are performed by comparing the MGHs of basic template videos of the target actions against the MGH of the input video at regular frame intervals. When multiple actions occur simultaneously, the regions where the actions take place are extracted with the Motion Energy Image (MEI) technique, and the MGH of each region is computed so that multiple actions can be recognized.
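A minimal sketch of a motion gradient histogram computed from a motion history image: accumulate frame differences, take the spatial gradient, and histogram the gradient orientations over moving pixels; templates are then matched by histogram distance. The threshold, decay, and bin count are illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, prev, curr, thresh=30, decay=0.9):
    """Motion history image: recent motion bright, older motion fading out."""
    moving = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    mhi = mhi * decay
    mhi[moving] = 1.0
    return mhi

def motion_gradient_histogram(mhi, bins=12):
    """Histogram of gradient orientations over the moving parts of the MHI."""
    gy, gx = np.gradient(mhi)
    magnitude = np.hypot(gx, gy)
    angles = np.arctan2(gy, gx)[magnitude > 1e-3]      # orientations of moving edges
    if angles.size == 0:
        return np.zeros(bins)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), density=True)
    return hist

def recognize(hist, templates):
    """Nearest template MGH by Euclidean distance; templates: {action: histogram}."""
    names = list(templates)
    return names[int(np.argmin([np.linalg.norm(hist - templates[n]) for n in names]))]
```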
