• Title/Summary/Keyword: 3-dimensional pose

Search results: 63

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society, v.21 no.5, pp.41-48, 2021
  • This paper introduces a mobile augmented visualization technology that augments a three-dimensional virtual human body onto a mannequin model using two pose (position and rotation) tracking sensors. Conventional camera-tracking technology for augmented visualization relies on the camera image and therefore fails to compute the camera pose when the camera shakes or moves quickly; a pose tracking sensor overcomes this disadvantage. Even if the mannequin is moved or rotated, augmented visualization remains possible using the data from the pose tracking sensor attached to it, and above all, there is no computational load for camera tracking.

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology, v.18 no.3, pp.226-233, 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method for estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: extrinsic parameters from known and unknown 3-D (three-dimensional) feature points, together with an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose, are combined into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to solve the resulting optimization problem neatly. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
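The abstract's core numerical step, stacking constraints into one linear inhomogeneous system and solving it by SVD, can be sketched as follows. This is a minimal illustration with made-up data, not the authors' implementation; the function name and matrix sizes are assumptions:

```python
import numpy as np

def solve_linear_system_svd(A, b):
    """Least-squares solution of the inhomogeneous system A x = b via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only well-conditioned singular values for numerical stability.
    s_inv = np.where(s > 1e-10 * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Toy example: recover a known 6-vector (e.g. a small pose/6-DOF update).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))                     # stacked constraint rows
x_true = np.array([0.1, -0.2, 0.05, 1.0, 2.0, -0.5])
b = A @ x_true
x = solve_linear_system_svd(A, b)
print(np.allclose(x, x_true))
```

Filtering small singular values is what makes the SVD route "neat": rank-deficient or noisy constraint sets still yield a stable minimum-norm solution.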

Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan;Song, Dae Hyun;Lee, Chil Woo
    • Smart Media Journal, v.1 no.2, pp.40-46, 2012
  • In this paper, we describe interactive content that uses vision-based body gesture recognition as its input interface. Because the content takes as its subject the imp, a figure common to Asian folk culture, players can enjoy it with cultural familiarity, and since they use their own gestures to fight the imp in the game, they become naturally absorbed in it. Users can also choose among multiple endings at the end of the scenario. For gesture recognition, KINECT is used to obtain the three-dimensional coordinates of each limb joint, capturing the static pose of each action. Vision-based 3D human pose recognition is used to convey human gestures in HCI (Human-Computer Interaction). Recognition based on a 2D pose model handles only simple 2D human poses in particular environments; a 3D pose model, which describes the 3D human skeletal structure, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because a gesture can be represented as a sequence of static poses, we recognize gestures composed of such poses using an HMM. With this gesture recognition result as the input interface, the content can be controlled naturally using only the user's gestures, and immersion and interest are improved through real-time interaction with the imp.
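The pose-sequence-to-gesture step described above can be illustrated with a discrete HMM: quantize each frame's pose into a symbol, then score the symbol sequence against per-gesture models and pick the best. The transition/emission tables and gesture names below are invented for illustration, not the paper's trained models:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward-algorithm log-likelihood of a pose-symbol sequence under an HMM.

    pi: (S,) initial state probabilities
    A:  (S, S) state transition matrix
    B:  (S, O) emission probabilities over quantized pose symbols
    """
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    log_p = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()              # scaling avoids numerical underflow
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

# Two toy gesture models over 3 pose symbols (0=arms down, 1=arm raised, 2=arm out).
raise_hmm = (np.array([1.0, 0.0]),                     # start with arms down
             np.array([[0.7, 0.3], [0.0, 1.0]]),       # drift into the raised state
             np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1]]))
wave_hmm  = (np.array([0.5, 0.5]),
             np.array([[0.4, 0.6], [0.6, 0.4]]),       # alternating states
             np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]))

obs = [0, 0, 1, 1, 1]                # pose symbols from successive frames
scores = {"raise": log_likelihood(obs, *raise_hmm),
          "wave":  log_likelihood(obs, *wave_hmm)}
print(max(scores, key=scores.get))
```

The per-step rescaling is the standard trick for long sequences; without it, `alpha` underflows to zero after a few dozen frames.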


A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae;Han, Kyu-Phil;Park, Yang-Woo
    • Journal of Korea Multimedia Society, v.24 no.3, pp.345-354, 2021
  • Recently, deep-learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-frame object recognition methods, which are insufficient for long-term temporal analysis and high-level situation management. We therefore propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep-learning-based object analysis techniques. The proposed method uses deep-learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', enabling notifications not only in security applications but also in emergency-response settings.
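A 'falling down' check of this kind is often a simple rule over tracked joint positions, e.g. how far the head-hip axis tilts from vertical. The heuristic, threshold, and coordinates below are hypothetical, for illustration only, and are not the paper's classifier:

```python
import math

def is_fallen(head, hip, min_tilt_deg=60.0):
    """Flag a fall when the head-hip (torso) axis tilts far from vertical.

    head, hip: (x, y) image coordinates with y increasing downward.
    """
    dx, dy = head[0] - hip[0], head[1] - hip[1]
    # Angle between the torso axis and the vertical image axis.
    tilt = math.degrees(math.atan2(abs(dx), abs(dy)))
    return tilt >= min_tilt_deg

print(is_fallen(head=(100, 50), hip=(105, 200)))   # upright person
print(is_fallen(head=(250, 195), hip=(105, 200)))  # lying roughly horizontal
```

In practice such a rule would be combined with temporal smoothing over the tracked sequence, since a single-frame tilt can be a false positive (e.g. bending over).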

Depth Image Poselets via Body Part-based Pose and Gesture Recognition (신체 부분 포즈를 이용한 깊이 영상 포즈렛과 제스처 인식)

  • Park, Jae Wan;Lee, Chil Woo
    • Smart Media Journal, v.5 no.2, pp.15-23, 2016
  • In this paper, we propose depth poselets based on body-part poses, together with a gesture recognition method that uses them. Since a gesture is composed of sequential poses, recognizing a gesture requires capturing the time series of poses; because of distortion and the high degree of freedom of the body, recognizing a full-body pose correctly is difficult. We therefore use partial poses to obtain pose features reliably without a full-body pose. We define 16 gestures and generate depth training images based on them. A depth poselet, as proposed here, consists of the depth image of a body part and its principal three-dimensional coordinates. In the training process, the defined gestures are captured with a depth camera, 3D joint coordinates are obtained to generate the depth poselets, and part-gesture HMMs are constructed from them. In the testing process, a test image is captured with a depth camera, the foreground is extracted, body parts are located by matching against the depth poselets, and the resulting part gestures are checked with the HMMs to recognize the gesture. Using HMMs, gestures can be recognized efficiently, with a confirmed recognition rate of about 89%.

Skeleton-based 3D Pointcloud Registration Method (스켈레톤 기반의 3D 포인트 클라우드 정합 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2021.06a, pp.89-90, 2021
  • This paper proposes a new technique for calibrating multi-view RGB-D cameras using a 3D (dimensional) skeleton. Calibrating multi-view cameras requires consistent feature points, and we use the human skeleton as the feature for calibrating the multi-view setup. Human skeletons can now be obtained easily using state-of-the-art pose estimation algorithms. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through a pose estimation algorithm as its feature points.
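Aligning two cameras from corresponding 3D joint coordinates is, at its core, a rigid-registration problem. One standard solution is the Kabsch algorithm, shown here as an illustrative sketch with synthetic joints rather than the authors' exact procedure:

```python
import numpy as np

def rigid_transform(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i|| over matched 3D points (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: joints seen by camera B are a rotated + shifted copy of camera A's.
rng = np.random.default_rng(1)
joints_a = rng.standard_normal((17, 3))     # e.g. 17 skeleton joints
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
joints_b = joints_a @ R_true.T + t_true
R, t = rigid_transform(joints_a, joints_b)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

Using skeleton joints as the matched point sets is attractive precisely because the correspondence is free: joint k in one view matches joint k in the other, with no marker board needed.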


3D Pose Estimation of a Circular Feature With a Coplanar Point (공면 점을 포함한 원형 특징의 3차원 자세 및 위치 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC, v.48 no.5, pp.13-24, 2011
  • This paper deals with the 3D-pose (orientation and position) estimation problem of a circular object in 3D space. Circular features can be found on many objects in the real world and provide crucial cues in vision-based object recognition and localization. In general, since a circular feature in 3D space is perspectively projected when imaged by a camera, it is difficult to fully recover the three-dimensional orientation and position parameters from the projected curve alone. This paper therefore proposes a 3D pose estimation method for a circular feature using a coplanar point. We first interpret a circular feature with a coplanar point in both the projective space and 3D space. A procedure for estimating the 3D orientation/position parameters is then described. The proposed method is verified with a numerical example and evaluated by a series of experiments analyzing its accuracy and sensitivity.

A Study on Hand-signal Recognition System in 3-dimensional Space (3차원 공간상의 수신호 인식 시스템에 대한 연구)

  • Jang, Hyo-Young;Kim, Dae-Jin;Kim, Jung-Bae;Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.3, pp.103-114, 2004
  • This paper deals with a system capable of recognizing hand signals in 3-dimensional space, using two color cameras as input devices. Vision-based gesture recognition is known to be user-friendly because of its contact-free characteristic, but as with other applications using a camera as an input device, there are difficulties with complex backgrounds and varying illumination. To detect the hand region robustly from an input image under various conditions without special gloves or markers, the paper uses previous position information and an adaptive hand color model. A hand signal is defined as a combination of two basic elements, 'hand pose' and 'hand trajectory'. As an extensive classification method for hand poses, the paper proposes a 2-stage classification method using a 'small group concept', and it also suggests a complementary feature selection method from the images of the two color cameras. We verified the method with a hand-signal application in our driving simulator.

A Moving Camera Localization using Perspective Transform and Klt Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions: Part B, v.14B no.3 s.113, pp.163-170, 2007
  • In autonomous navigation of a mobile vehicle or mobile robot, localization computed by recognizing the environment is the most important factor. Generally, the position and pose of a camera-equipped mobile vehicle or robot can be determined using INS and GPS, but in this case enough known ground landmarks must be available for accurate localization. In contrast with the homography method, which calculates the camera's position and pose using only the relation of two-dimensional feature points between two frames, this paper proposes a method that calculates them from the relation between the locations predicted by perspective transform of 3D feature points, obtained by overlaying a 3D model on the previous frame using GPS and INS input, and the locations of the corresponding feature points computed in the current frame using the KLT tracking method. For performance evaluation, we used a wireless-controlled vehicle mounting a CCD camera, GPS, and INS, and tested the calculation of the camera's location and rotation angle on video sequences captured at a 15 Hz frame rate.
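The perspective-transform step above, predicting where a known 3D point should appear for a given camera pose, is the standard pinhole projection. The intrinsics below are invented for illustration, not the paper's calibration:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project world points into pixel coordinates for a camera with pose (R, t)."""
    cam = points_3d @ R.T + t           # world -> camera frame
    uvw = cam @ K.T                     # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 4.0],       # straight ahead -> image center
                [1.0, 0.0, 4.0]])      # 1 m to the right at 4 m depth
print(project(pts, K, R, t))
```

The method's residual is then the pixel gap between these predicted locations and the positions the KLT tracker actually finds in the current frame.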

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering, v.10 no.7, pp.279-286, 2021
  • Recently, a policy of physical distancing of at least 1 m in public places has been in effect to prevent the spread of COVID-19. In this paper, we propose a method for measuring distances between people in real time, together with an automation system that, from stereo images acquired by drones or CCTVs, recognizes objects that are within 1 m of each other according to the estimated distance. A problem with existing methods is that they cannot obtain three-dimensional information about objects from a single CCTV; three-dimensional information is necessary to measure the distance between people when they stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods use only bounding-box information to locate a person. In this paper, therefore, to obtain the exact two-dimensional coordinates at which a person is located, we extract the person's keypoints, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distance between people. In an experiment estimating the accuracy of the 3D coordinates and the distances between objects (persons), the method showed an average error within 0.098 m when estimating distances between multiple people within 1 m.
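Recovering a keypoint's 3D position from a rectified stereo pair and then measuring person-to-person distance can be sketched as follows. This assumes generic rectified-stereo geometry with an invented focal length and baseline, not the paper's calibration:

```python
import math

def to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a rectified-stereo keypoint: depth Z = fx * baseline / disparity."""
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.dist(p, q)

# Hypothetical calibration: fx = fy = 700 px, 12 cm baseline, 640x480 image.
fx = fy = 700.0
cx, cy = 320.0, 240.0
B = 0.12

# Hip keypoints of two detected people, with their measured disparities.
hip_a = to_3d(300, 250, disparity=28.0, fx=fx, fy=fy, cx=cx, cy=cy, baseline=B)
hip_b = to_3d(420, 255, disparity=30.0, fx=fx, fy=fy, cx=cx, cy=cy, baseline=B)
print(round(distance(hip_a, hip_b), 3))  # metres; compare against the 1 m threshold
```

Working from a keypoint (e.g. the hip) rather than a bounding-box center is exactly the refinement the abstract argues for: the box center can drift off the body when people overlap, while a joint keypoint stays on it.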