• Title/Summary/Keyword: Pose matching

A Study on the Gesture Matching Method for the Development of Gesture Contents (체감형 콘텐츠 개발을 위한 연속동작 매칭 방법에 관한 연구)

  • Lee, HyoungGu
    • Journal of Korea Game Society / v.13 no.6 / pp.75-84 / 2013
  • This paper introduces a method for recording and matching poses and gestures on the Windows PC platform. The method uses the Xtion gesture-detection camera for Windows PCs. To develop the method, an API is first built that processes and compares the depth data, RGB image data, and skeleton data obtained from the camera. A pose matching method that selectively compares only valid joints is developed. For gesture matching, a recognition method that can detect an incorrect pose between poses is developed. A tool that records and tests sample data to extract specified poses and gestures is also developed. Six different poses and gestures were captured and tested; poses were recognized with 100% accuracy and gestures with 99%, validating the proposed method.
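
The selective comparison of valid joints described in this abstract can be pictured with a minimal Python/NumPy sketch, assuming skeleton data is available as arrays of 3D joint positions with per-joint validity flags; the distance tolerance and the centering step are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pose_match_score(ref_joints, obs_joints, ref_valid, obs_valid, tol=0.10):
    """Compare two skeletons joint by joint, using only joints that are
    valid (reliably tracked) in both skeletons.

    ref_joints, obs_joints : (N, 3) arrays of joint positions in meters.
    ref_valid, obs_valid   : (N,) boolean arrays marking usable joints.
    tol                    : per-joint distance tolerance (assumed value).
    Returns the fraction of commonly valid joints within tolerance.
    """
    both_valid = ref_valid & obs_valid
    if not both_valid.any():
        return 0.0

    # Remove the overall body position so that only the pose itself is
    # compared (here simply by subtracting the mean of the valid joints).
    ref = ref_joints[both_valid] - ref_joints[both_valid].mean(axis=0)
    obs = obs_joints[both_valid] - obs_joints[both_valid].mean(axis=0)

    # A joint "matches" when its position error is below the tolerance.
    errors = np.linalg.norm(ref - obs, axis=1)
    return float((errors < tol).mean())
```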

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensor conditions. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers in the feature-point matching are eliminated across the image sequence, and the pose of the sensor can be obtained from the remaining matches. The IMU helps to eliminate erroneous point matches at an early stage in images of dynamic scenes. After the outliers are removed, the remaining feature-point correspondences are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented, tested, and compared with existing methods, and the experimental results show the effectiveness of the proposed technique.
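
The visual core of such a pipeline, recovering relative pose from matched feature points via the essential matrix, can be sketched with OpenCV as below. This is a generic monocular sketch: it relies on RANSAC rather than the paper's IMU-aided outlier elimination, and the intrinsic matrix K holds placeholder values.

```python
import cv2
import numpy as np

# Assumed camera intrinsics (placeholder values, not from the paper).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def relative_pose(img1, img2):
    """Estimate relative rotation/translation between two grayscale frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching for ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC outlier rejection (the paper instead
    # pre-filters outliers using the IMU orientation before this step).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # Recover R, t (translation is up to scale for a monocular camera).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```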

A Method of Pose Matching Rate Acquisition Using The Angle of Rotation of Joint (관절의 회전각을 이용한 자세 매칭률 획득 방법)

  • Hyeon, Hun-Beom;Song, Su-Ho;Lee, Hyun
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.3 / pp.183-191 / 2016
  • Recently, in rehabilitation treatment, the need to measure the accuracy of joint poses and movements has been increasing because of the habits, lifestyle, and environment of modern people. In particular, there is a need for an automated system that can determine the pose matching rate on its own. Typically, the pose matching rate is measured either by extracting an image with a Kinect or by extracting a silhouette with an imaging device. However, with silhouette extraction it is difficult to set up the comparison, and the Kinect sensor has the disadvantage of a high accumulated error rate as the subject moves. Therefore, this paper proposes a method that reduces the accumulated error in the pose matching rate by obtaining joint rotation angles from real-time changes measured with a 9-axis sensor. In particular, because the method compares the current rotation angle of each joint, measurements can be made under the same conditions regardless of body type and are unaffected by back-and-forth movement. Finally, we show favorable results compared with the traditional silhouette-extraction method and a Kinect-based method.
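
A simple way to picture an angle-based matching rate is to compare per-joint rotation angles directly, as in the NumPy sketch below; the linear scoring, the tolerance, and the example joint values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def angle_matching_rate(ref_angles, obs_angles, max_err_deg=30.0):
    """Matching rate from joint rotation angles (degrees).

    Each joint contributes a score that decays linearly with its angular
    error and reaches zero at max_err_deg (an assumed tolerance).
    ref_angles, obs_angles : (N,) arrays of joint angles in degrees,
                             e.g. elbow/knee flexion from a 9-axis IMU.
    """
    err = np.abs(np.asarray(ref_angles, float) - np.asarray(obs_angles, float))
    # Wrap angular differences into [0, 180] so 350 deg vs 10 deg is 20 deg.
    err = np.minimum(err % 360.0, 360.0 - (err % 360.0))
    per_joint = np.clip(1.0 - err / max_err_deg, 0.0, 1.0)
    return float(per_joint.mean())

# Example: a reference pose vs. a measured pose (hypothetical values).
print(angle_matching_rate([90, 90, 45, 45], [85, 95, 50, 40]))
```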

An Algorithm for a pose estimation of a robot using Scale-Invariant feature Transform

  • Lee, Jae-Kwang;Huh, Uk-Youl;Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2004.11c / pp.517-519 / 2004
  • This paper describes an approach to estimating a robot pose from an image. The pose estimation algorithm can be broken down into three stages: extracting scale-invariant features, matching these features, and calculating an affine invariant. In the first stage, a monocular camera mounted on the robot captures an image of the environment, features are extracted from the captured image, and the extracted features are recorded in a database. In the matching stage, a Random Sample Consensus (RANSAC) method is employed to match the features. After matching, the robot pose is estimated from the feature positions by calculating the affine invariant. The algorithm is implemented and demonstrated in a Matlab program.
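
The first two stages, SIFT extraction and RANSAC-filtered matching, translate naturally to OpenCV; the sketch below uses an affine model fitted with RANSAC as a stand-in for the paper's affine-invariant calculation, which is not reproduced here.

```python
import cv2
import numpy as np

def sift_ransac_match(db_img, query_img):
    """Extract SIFT features in both images, match them, and keep only
    matches consistent with an affine transform estimated by RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(db_img, None)
    kp2, des2 = sift.detectAndCompute(query_img, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC rejects matches inconsistent with a single affine transform
    # (a stand-in here for the paper's affine-invariant step).
    A, inliers = cv2.estimateAffine2D(pts1, pts2, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    keep = inliers.ravel() == 1
    return A, pts1[keep], pts2[keep]
```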

Pose Estimation and Image Matching for Tidy-up Task using a Robot Arm (로봇 팔을 활용한 정리작업을 위한 물체 자세추정 및 이미지 매칭)

  • Piao, Jinglan;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.16 no.4 / pp.299-305 / 2021
  • In this study, the robotic tidy-up task is to arrange the current environment exactly like a target image. To perform a tidy-up task with a robot, it is necessary to estimate the poses of various objects and to classify them. Pose estimation normally requires a CAD model of the object, but such models are not available for most objects in daily life. Therefore, this study proposes an algorithm that uses point clouds and PCA to estimate object poses in cluttered environments without CAD models. In addition, objects are usually detected with deep learning-based object detection, but that approach can recognize only the objects it was trained on and may take a long time to train. This study therefore proposes an image matching method based on few-shot learning and a Siamese network. Experiments showed that the proposed method can be applied effectively to the robotic tidy-up system, achieving an 85% success rate in the tidy-up task.
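
The model-free pose estimation idea, fitting principal axes to a segmented object point cloud, can be illustrated with a small NumPy sketch under the assumption that the orientation is taken from the PCA eigenvectors and the position from the centroid; this simplification ignores the symmetry ambiguities a full system would have to resolve.

```python
import numpy as np

def pca_pose(points):
    """Estimate an object pose from its (N, 3) point cloud.

    Simplified assumption: position = centroid of the points,
    orientation = principal axes from PCA (eigenvectors of the
    covariance), arranged as a right-handed rotation matrix.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    axes = eigvecs[:, ::-1]                  # major axis first
    # Enforce a right-handed frame (det = +1) so it is a valid rotation.
    if np.linalg.det(axes) < 0:
        axes[:, 2] *= -1
    return centroid, axes                    # t (3,), R (3, 3)

# Example with a synthetic elongated, box-like cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * [0.10, 0.03, 0.01] + [0.4, 0.0, 0.2]
t, R = pca_pose(cloud)
```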

Human Pose Matching Using Skeleton-type Active Shape Models (뼈대-구조 능동형태모델을 이용한 사람의 자세 정합)

  • Jang, Chang-Hyuk
    • Journal of KIISE:Software and Applications / v.36 no.12 / pp.996-1008 / 2009
  • This paper proposes a novel approach to model-based pose matching of a human body using Active Shape Models. To improve the processing time of model creation and registration, a skeleton-type model is used instead of the conventional silhouette-based models. The skeleton model defines the feature information used to match the human pose. The model is built from images of 600 human bodies and has 17 landmarks indicating the body joints and key features of a human pose. When a standard Active Shape Model is applied to the skeleton-type model during matching, problems can occur at the proximal joints of the arms and legs because of color variations on the human body and insufficient information about the fore-rear directions of the profile normals. This problem is solved by using background-subtraction information for the body region in the input image and by adding a 4-direction feature of the profile normal at the proximal parts of the arms and legs. The matching process requires fewer than 30 iterations, so execution is fast; it was observed to take less than 0.03 s in an experiment.
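
The shape-model side of an Active Shape Model can be sketched as follows: candidate landmark positions found in the image are projected onto a PCA shape model and the mode coefficients are clipped, which keeps the matched skeleton plausible. This is a generic ASM constraint step in NumPy, not the paper's specific 17-landmark model; the 3-sigma clipping is a conventional assumption.

```python
import numpy as np

def constrain_shape(x, mean_shape, modes, variances, n_sigma=3.0):
    """Project a candidate landmark vector onto the ASM shape space.

    x          : (2K,) flattened landmark coordinates found in the image.
    mean_shape : (2K,) mean of the aligned training shapes.
    modes      : (2K, M) principal modes of shape variation (PCA basis).
    variances  : (M,) eigenvalues of the shape PCA.
    The mode coefficients b are clipped to +/- n_sigma * sqrt(variance)
    (a conventional ASM choice, assumed here), so the result stays within
    the range of plausible training poses.
    """
    b = modes.T @ (x - mean_shape)
    limit = n_sigma * np.sqrt(variances)
    b = np.clip(b, -limit, limit)
    return mean_shape + modes @ b

# In the matching loop, each iteration moves landmarks along profile
# normals toward strong image evidence and then calls constrain_shape()
# so the skeleton remains a valid human pose.
```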

A Fast Correspondence Matching for Iterative Closest Point Algorithm (ICP 계산속도 향상을 위한 빠른 Correspondence 매칭 방법)

  • Shin, Gunhee;Choi, Jaehee;Kim, Kwangki
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.373-380 / 2022
  • This paper considers a method of fast correspondence matching for the iterative closest point (ICP) algorithm. In robotics, the ICP algorithm and its variants have been widely used for pose estimation by finding the translation and rotation that best align two point clouds. From a computational perspective, the main difficulty is finding the corresponding point on the reference point cloud for each observed point. Jump-table-based correspondence matching is one way to reduce computation time. This paper proposes a method that corrects errors in an existing jump-table-based correspondence matching algorithm. The criterion that activates the jump table is modified so that correspondence matching can be applied in situations, such as point-cloud registration on highly curved surfaces, where the existing method is not applicable. For demonstration, both hardware and simulation experiments were performed. In a hardware experiment using a Hokuyo-10LX LiDAR sensor, the new algorithm shows 100% correspondence matching accuracy and an 88% decrease in computation time. In the F1TENTH simulator, the proposed algorithm was tested in an autonomous driving scenario with 2D range-bearing point cloud data and also shows 100% correspondence matching accuracy.
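
For orientation, a plain nearest-neighbour correspondence step and one rigid-alignment update of ICP look like the SciPy/NumPy sketch below; it uses a KD-tree search rather than the paper's jump table, and the jump-table correction itself is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(reference, observed):
    """One ICP iteration: match each observed point to its nearest
    reference point, then compute the rigid transform (R, t) that best
    aligns the observed points to their correspondences (SVD / Kabsch).
    Here a KD-tree replaces the paper's jump-table correspondence search."""
    tree = cKDTree(reference)
    _, idx = tree.query(observed)          # nearest-neighbour correspondences
    matched = reference[idx]

    # Kabsch: subtract centroids, take the SVD of the cross-covariance.
    mu_obs, mu_ref = observed.mean(axis=0), matched.mean(axis=0)
    H = (observed - mu_obs).T @ (matched - mu_ref)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_ref - R @ mu_obs
    return R, t

# Full ICP repeats icp_step on the transformed points until convergence.
```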

Hausdorff Distance Matching for Elevation Map-based Global Localization of an Outdoor Mobile Robot (실외 이동로봇의 고도지도 기반의 전역 위치추정을 위한 Hausdorff 거리 정합 기법)

  • Ji, Yong-Hoon;Song, Jea-Bok;Baek, Joo-Hyun;Ryu, Jae-Kwan
    • Journal of Institute of Control, Robotics and Systems / v.17 no.9 / pp.916-921 / 2011
  • Mobile robot localization is the task of estimating the robot pose in a given environment. This research deals with outdoor localization based on an elevation map. Since outdoor environments are large and contain many complex objects, it is difficult to estimate the robot pose robustly. This paper proposes a Hausdorff distance-based map matching method in which the Hausdorff distance measures the similarity between features extracted from the robot's sensor data and from the elevation map. Experiments and simulations show that the proposed Hausdorff distance-based map matching is useful for robust outdoor localization with an elevation map, and that it can easily be applied within other probabilistic approaches such as Markov localization.
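
The similarity measure itself is compact; a sketch of a symmetric Hausdorff distance between two 2D feature point sets, using SciPy, is shown below. How the features are extracted from the robot data and the elevation map is paper-specific and not reproduced.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_similarity(robot_features, map_features):
    """Symmetric Hausdorff distance between two (N, 2) feature point sets.
    Smaller values mean the robot's local features fit the elevation-map
    features better, so the candidate pose is more likely correct."""
    d_ab = directed_hausdorff(robot_features, map_features)[0]
    d_ba = directed_hausdorff(map_features, robot_features)[0]
    return max(d_ab, d_ba)

# Global localization can then score candidate poses by transforming the
# robot's features into the map frame and keeping the pose with the
# smallest Hausdorff distance.
```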

The Estimation of the Transform Parameters Using the Pattern Matching with 2D Images (2차원 영상에서 패턴매칭을 이용한 3차원 물체의 변환정보 추정)

  • 조택동;이호영;양상민
    • Journal of the Korean Society for Precision Engineering / v.21 no.7 / pp.83-91 / 2004
  • The determination of camera position and orientation from known correspondences between 3D reference points and their images is known as pose estimation in computer vision and as space resection in photogrammetry. This paper discusses the estimation of transform parameters using a pattern matching method with 2D images only. In general, 3D reference points or lines are needed to find the 3D transform parameters, but this method is applied without them: it uses only two images to find the transform parameters between them. The algorithm is simulated using Visual C++ on Windows 98.
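
As a rough illustration of pattern matching between two 2D images, the OpenCV sketch below recovers only an image translation by normalized cross-correlation template matching; the paper's full transform-parameter estimation is not reproduced, and the patch size is an assumed value.

```python
import cv2
import numpy as np

def estimate_translation(img1, img2, patch_size=101):
    """Estimate the 2D translation between two grayscale images by taking
    a patch around the center of img1 as a template and locating it in
    img2 with normalized cross-correlation (patch_size is an assumption)."""
    h, w = img1.shape
    cy, cx = h // 2, w // 2
    half = patch_size // 2
    template = img1[cy - half:cy + half + 1, cx - half:cx + half + 1]

    result = cv2.matchTemplate(img2, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)   # best match location (x, y)

    # Translation of the patch from its location in img1 to its match in img2.
    dx = max_loc[0] + half - cx
    dy = max_loc[1] + half - cy
    return dx, dy
```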

3D Object Recognition and Accurate Pose Calculation Using a Neural Network (인공신경망을 이용한 삼차원 물체의 인식과 정확한 자세계산)

  • Park, Gang
    • Transactions of the Korean Society of Mechanical Engineers A / v.23 no.11 s.170 / pp.1929-1939 / 1999
  • This paper presents a neural network approach, which was named PRONET, to 3D object recognition and pose calculation. 3D objects are represented using a set of centroidal profile patterns that describe the boundary of the 2D views taken from evenly distributed view points. PRONET consists of the training stage and the execution stage. In the training stage, a three-layer feed-forward neural network is trained with the centroidal profile patterns using an error back-propagation method. In the execution stage, by matching a centroidal profile pattern of the given image with the best fitting centroidal profile pattern using the neural network, the identity and approximate orientation of the real object, such as a workpiece in arbitrary pose, are obtained. In the matching procedure, line-to-line correspondence between image features and 3D CAD features are also obtained. An iterative model posing method then calculates the more exact pose of the object based on initial orientation and correspondence.