• Title/Summary/Keyword: Pose matching

Search Results: 100

Dynamic Human Pose Tracking using Motion-based Search (모션 기반의 검색을 사용한 동적인 사람 자세 추적)

  • Jung, Do-Joon; Yoon, Jeong-Oh
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.7 / pp.2579-2585 / 2010
  • This paper proposes a dynamic human pose tracking method that uses a motion-based search strategy on an image sequence obtained from a monocular camera. The method compares image features between 3D human model projections and the real input images, repeating the process until predefined criteria are met and then estimating the 3D human pose that produces the best match. When searching for the configuration that best matches the input image, the search region is determined from the estimated 2D image motion, and a random search over body configurations is then conducted within that region. Because the 2D image motion is highly constrained, this significantly reduces the dimensionality of the feasible space. The strategy has two advantages: the motion estimation leads to an efficient allocation of the search space, and the pose estimation method adapts to various kinds of motion.
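
The search strategy described above can be sketched minimally: predict a search center from the estimated image motion, then sample candidate configurations at random inside that region and keep the best-scoring one. The function name, the cost callback, and the single scalar `radius` are illustrative assumptions, not the paper's actual interface.

```python
import random

def motion_search(cost, prev_pose, motion, radius, n_samples=200, seed=0):
    """Randomly sample pose candidates inside a motion-predicted search
    region and return the best-matching configuration.

    cost      : function pose -> matching error (lower is better); a stand-in
                for comparing model projections against image features
    prev_pose : joint parameters from the previous frame
    motion    : estimated 2D-motion-derived displacement per parameter
    radius    : half-width of the search region around the prediction
    """
    rng = random.Random(seed)
    # Motion estimation constrains where the new pose can be.
    center = [p + m for p, m in zip(prev_pose, motion)]
    best, best_cost = center, cost(center)
    for _ in range(n_samples):
        cand = [c + rng.uniform(-radius, radius) for c in center]
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

Because sampling is restricted to the motion-predicted region, the effective search space is far smaller than the full pose space.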

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim; Heejae Ahn; Sebeen Yoon; Taehoon Kim; Thomas H.-K. Kang; Young K. Ju; Minju Kim; Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how coordinates in the 3D real world are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction, since it supports indoor navigation of construction robots and field monitoring by restoring depth information. Traditional camera pose estimation methods rely on target objects such as markers or patterns, but these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that performs camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D coordinates on the image plane through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. The framework streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV applications and data collection at construction sites, and holds promise for expediting various construction-related tasks by automating and simplifying the calibration procedure.
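
To make the intrinsic/extrinsic distinction concrete, here is the forward pinhole projection that PnP inverts: PnP searches for the rotation and translation that minimize the reprojection error between known 3D points and their detected 2D keypoints. This is a simplified sketch (rotation about the Z axis only); the function name and parameters are illustrative, not the paper's implementation.

```python
import math

def project(point_w, rot_z, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates with a pinhole
    camera whose extrinsics are a rotation about the Z axis (rot_z,
    radians) plus a translation t = (tx, ty, tz).

    fx, fy, cx, cy are the intrinsic parameters (focal lengths and
    principal point); rot_z and t are the extrinsic parameters.
    """
    x, y, z = point_w
    c, s = math.cos(rot_z), math.sin(rot_z)
    # world -> camera frame (extrinsic parameters)
    xc = c * x - s * y + t[0]
    yc = s * x + c * y + t[1]
    zc = z + t[2]
    # camera frame -> pixels (intrinsic parameters)
    return (fx * xc / zc + cx, fy * yc / zc + cy)
```

Given 3D points on a material of known specification and their detected image keypoints, a PnP solver finds the `rot_z`/`t` (in general, a full 3D rotation) under which this projection best reproduces the observations.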

Data Association of Robot Localization and Mapping Using Partial Compatibility Test (Partial Compatibility Test 를 이용한 로봇의 위치 추정 및 매핑의 Data Association)

  • Yan, Rui Jun; Choi, Youn Sung; Wu, Jing; Han, Chang Soo
    • Journal of the Korean Society for Precision Engineering / v.33 no.2 / pp.129-138 / 2016
  • This paper presents natural-corner-based SLAM (Simultaneous Localization and Mapping) with a robust data association algorithm in a real unknown environment. Corners are extracted from raw laser sensor data and chosen as landmarks for correcting the pose of the mobile robot and building the map. In the proposed data association method, the corners extracted at each step are separated into several groups, each containing a small number of corners. Within each group, the local best matching vector between new corners and stored ones is found by the joint compatibility test, while the nearest feature for every new corner is checked by the individual compatibility test. All groups, with their local best matching vectors and the nearest-feature candidate of each new corner, are then combined by the partial compatibility test in linear matching time. Finally, SLAM experiments in an indoor environment based on the extracted corners show the good robustness and low computational complexity of the proposed algorithms in comparison with existing methods.
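
The two gating tests can be sketched in simplified form: individual compatibility gates each new corner against a stored one by distance, while joint compatibility additionally requires that the inter-corner geometry of a group be preserved (a rigid-map consistency check). The thresholds and function names below are illustrative assumptions; the paper's tests are formulated with measurement covariances.

```python
def individually_compatible(new_c, stored_c, gate=0.5):
    """Individual compatibility: a new corner may match a stored one
    only if their positions agree within a gate distance (metres)."""
    return ((new_c[0] - stored_c[0]) ** 2 +
            (new_c[1] - stored_c[1]) ** 2) ** 0.5 <= gate

def jointly_compatible(pairs, tol=0.2):
    """Joint compatibility for a small group of (new, stored) pairs:
    the distances among the new corners must agree with the distances
    among their candidate stored matches."""
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            (n1, s1), (n2, s2) = pairs[i], pairs[j]
            dn = ((n1[0] - n2[0]) ** 2 + (n1[1] - n2[1]) ** 2) ** 0.5
            ds = ((s1[0] - s2[0]) ** 2 + (s1[1] - s2[1]) ** 2) ** 0.5
            if abs(dn - ds) > tol:
                return False  # group geometry is inconsistent
    return True
```

Splitting the corners into small groups keeps each joint test cheap; combining the groups' locally best vectors afterwards is what gives the partial compatibility test its linear matching time.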

Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure (평면 구조물의 단일점 일치를 이용한 2차원 레이저 거리감지센서의 자동 캘리브레이션)

  • Joung, Ji Hoon; Kang, Tae-Sun; Shin, Hyeon-Ho; Kim, SooJong
    • Journal of Institute of Control, Robotics and Systems / v.20 no.2 / pp.218-222 / 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g. a laser vision sensor or laser range finder) that matches a single point on a flat structure. Many arc welding robots carry a 2D laser displacement sensor to broaden their application by recognizing their environment (e.g. base metal and seam). In such systems, the sensing data must be transformed into the robot's coordinate frame, which requires knowing the geometric relation (i.e. rotation and translation) between the robot and sensor coordinate frames. Calibration is the process of inferring this geometric relation. Generally, at least 3 matched points are required to infer it; however, we introduce a novel method that calibrates with only a single point match, using a specific flat structure (i.e. a circular hole) that makes this possible. By moving the robot to a specific pose, the rotation component of the calibration is held constant, so a single point match suffices. The flat structure is easy to install at a manufacturing site because it has almost no volume (i.e. it is nearly a 2D structure). The calibration process is fully autonomous and needs no manual operation: the robot carrying the sensor moves to the specific pose by sensing features of the circular hole such as the length of a chord and the position of its center. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot and report the deviation of the robot's TCP (Tool Center Point) trajectory; this experiment confirms the method's precision.
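
The core idea, that a fixed rotation leaves only the translation to be estimated, can be shown in two dimensions: with the sensor-to-robot rotation theta known, one matched point gives the translation in closed form as t = p_robot - R(theta) * p_sensor. This is a minimal sketch of that relation, not the paper's full procedure.

```python
import math

def calibrate_translation(p_sensor, p_robot, theta):
    """Single-point calibration with the rotation held constant.

    p_sensor : the matched point in sensor coordinates (x, y)
    p_robot  : the same physical point in robot coordinates (x, y)
    theta    : the known, fixed sensor-to-robot rotation (radians)

    Returns the translation t such that p_robot = R(theta) p_sensor + t.
    """
    c, s = math.cos(theta), math.sin(theta)
    rx = c * p_sensor[0] - s * p_sensor[1]  # rotate the sensor point
    ry = s * p_sensor[0] + c * p_sensor[1]
    return (p_robot[0] - rx, p_robot[1] - ry)
```

With three or more general point matches the rotation would have to be solved for as well; fixing it via the robot's pose is what reduces the requirement to one point.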

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi; Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and major facial features, such as both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points from frame to frame using optical flow and determines the face direction from the tracked points. To avoid locking onto false feature positions when coordinates are lost during optical-flow tracking, the method verifies the tracked locations in real time by template matching against the detected facial features. Depending on the correlation score of this template-matching check, the process either re-detects the facial features or continues tracking while estimating the face direction. The template-matching step initially stores the locations of four facial features (the left and right eyes, the nose tip, and the mouth) during the feature detection phase, and re-evaluates this information by detecting the features anew from the input image whenever the similarity between the stored information and the features traced by optical flow crosses a threshold. The proposed approach automatically alternates between the feature detection and feature tracking phases and estimates face pose stably in real time. Experiments confirm that the proposed method estimates face direction efficiently.
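
The validation step, comparing a tracked patch against its stored template and falling back to re-detection when the correlation drops, can be sketched with normalized cross-correlation. The patch representation (lists of grayscale rows), threshold, and function names are illustrative assumptions.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-sized patches
    (lists of pixel rows); values near 1 mean the tracked feature
    still looks like the stored template."""
    a = [v for row in patch_a for v in row]
    b = [v for row in patch_b for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def validate_or_redetect(template, tracked, thresh=0.8):
    """Keep tracking while correlation stays high; otherwise signal
    that the facial features should be re-detected from the image."""
    return "track" if ncc(template, tracked) >= thresh else "redetect"
```

This is the reciprocal combination the abstract describes: optical flow runs every frame, and the NCC check decides when the detector must take over again.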

Sensor Model Design of Range Sensor Based Probabilistic Localization for the Autonomous Mobile Robot (자율 주행 로봇의 확률론적 자기 위치 추정기법을 위해 거리 센서를 이용한 센서 모델 설계)

  • Kim, Kyung-Rock; Chung, Woo-Jin; Kim, Mun-Sang
    • Proceedings of the KIEE Conference / 2004.11c / pp.27-29 / 2004
  • This paper presents a sensor model design based on the Monte Carlo Localization method. First, we define the measurement error of each sample using a map-matching method with 2D laser scanners and a pre-constructed grid map of the environment. Second, each sample is assigned a probability according to its matching error, using a Gaussian probability density function chosen with the samples' convergence in mind. Simulation using real environment data shows good localization results with the designed sensor model.
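
The weighting step can be sketched directly: each particle's map-matching error is pushed through a zero-mean Gaussian density and the results are normalized into particle weights. The function name and the sigma value are illustrative assumptions.

```python
import math

def sample_weights(match_errors, sigma=0.1):
    """Assign each particle a weight from its map-matching error using
    a Gaussian density, then normalize so the weights sum to 1.

    match_errors : one scalar matching error per particle (0 = perfect)
    sigma        : spread of the Gaussian sensor model; a smaller sigma
                   makes the filter converge faster but less robustly
    """
    w = [math.exp(-e * e / (2.0 * sigma * sigma)) for e in match_errors]
    total = sum(w)
    return [x / total for x in w]
```

Particles whose predicted scan matches the grid map well receive most of the probability mass, which is what drives the Monte Carlo filter's convergence.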


A Study on the Relative Localization Algorithm for Mobile Robots using a Structured Light Technique (Structured Light 기법을 이용한 이동 로봇의 상대 위치 추정 알고리즘 연구)

  • Noh Dong-Ki; Kim Gon-Woo; Lee Beom-Hee
    • Journal of Institute of Control, Robotics and Systems / v.11 no.8 / pp.678-687 / 2005
  • This paper describes a relative localization algorithm that uses odometry data and consecutive local maps. The purpose is to correct odometry error through area matching of two consecutive local maps. Each local map is built using a sensor module with dual laser beams and a USB camera; the range data from the module are measured with the structured lighting technique (active stereo method). The advantage of this sensor module is that a local map can be obtained at once within the camera's view angle. Exploiting this, we propose the AVS (Aligned View Sector) matching algorithm for correcting the pose error (translational and rotational). The proposed algorithm is evaluated through experiments in a real environment.
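
Once map matching yields a pose correction, applying it to the odometry estimate is an SE(2) composition. This sketch shows that composition step only; the AVS matching itself (which produces `delta`) is not reproduced here, and the function name is an assumption.

```python
import math

def compose(pose, delta):
    """Compose an SE(2) odometry pose (x, y, theta) with a correction
    (dx, dy, dtheta) expressed in the robot's local frame, e.g. one
    estimated by matching two consecutive local maps."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)
```

Rotating the correction into the world frame before adding it is what keeps translational and rotational errors from mixing incorrectly.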

Visual Positioning System based on Voxel Labeling using Object Simultaneous Localization And Mapping

  • Jung, Tae-Won; Kim, In-Seon; Jung, Kye-Dong
    • International Journal of Advanced Culture Technology / v.9 no.4 / pp.302-306 / 2021
  • Indoor localization is a basic element of location-based services such as indoor navigation, location-based precision marketing, spatial recognition for robotics, augmented reality, and mixed reality. We propose a voxel-labeling-based visual positioning system using object simultaneous localization and mapping (SLAM). The method determines location through single-image 3D cuboid object detection and object SLAM: it builds an indoor map, addresses the map with voxels, and matches against a defined space. First, high-quality cuboids are created by sampling 2D bounding boxes and vanishing points for single-image object detection. Then, after jointly optimizing the poses of cameras, objects, and points, the Visual Positioning System (VPS) localizes by matching against the object pose information stored in the voxel database. The method provides users with the required spatial information, with improved location accuracy and direction estimation.
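
The voxel addressing mentioned above amounts to discretizing 3D space into a regular grid, so that a mapped object's position can be looked up by an integer cell index. This is a minimal sketch of that addressing scheme; the voxel size and function name are illustrative assumptions.

```python
def voxel_index(point, voxel_size=0.5):
    """Address a 3D point with the integer index of the voxel cell
    (axis-aligned cube of side voxel_size) that contains it, so mapped
    objects can be stored in and matched against a voxel database."""
    # Floor division gives a consistent cell index for negative
    # coordinates too (e.g. -0.1 falls in cell -1, not 0).
    return tuple(int(c // voxel_size) for c in point)
```

Two observations of the same object map to the same index as long as its estimated position stays within one cell, which is what makes the database lookup robust to small pose errors.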

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae; Sung, Chang-Hun; Roh, Hyun-Chul; Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • For a service robot to perform its tasks, autonomous navigation techniques such as localization, mapping, and path planning are required, and localization (estimating the robot's pose) is the fundamental capability for autonomous navigation. In this paper, we propose a new point-pattern-matching-based system for visual global localization that uses spot lights on the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition. A single camera looking upward (a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lights are detected and tracked through the image sequence. To detect more spot lights, we choose a wide-FOV lens, which inevitably introduces serious image distortion; however, by applying distortion correction only to the spot-light positions rather than to every image pixel, we reduce the processing time. Then, using point pattern matching and least-squares estimation, we obtain the precise position and orientation of the mobile robot. Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.
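
Once the detected ceiling lights have been put in correspondence with the map pattern, the least-squares step recovers the robot's 2D pose in closed form (rotation from the cross-covariance of the centered point sets, translation from the centroids). This sketch shows that standard rigid fit; the correspondence search itself is omitted and the function name is an assumption.

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rigid transform (rotation theta + translation t)
    mapping matched point pairs src[i] -> dst[i], e.g. map lights to
    their observed, distortion-corrected positions."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # 2D cross-covariance terms of the centered point sets
    sxx = sum((s[0] - csx) * (d[0] - cdx) + (s[1] - csy) * (d[1] - cdy)
              for s, d in zip(src, dst))
    sxy = sum((s[0] - csx) * (d[1] - cdy) - (s[1] - csy) * (d[0] - cdx)
              for s, d in zip(src, dst))
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the destination
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))
```

Because only the few light positions enter this fit (not the whole image), the pose update stays cheap enough for a fast update rate.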


A Flexible Feature Matching for Automatic Facial Feature Points Detection (얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • Hwang, Suen-Ki; Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.2 / pp.12-17 / 2010
  • An automatic facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe the points' spatial relations. An innovative flexible feature matching is proposed to establish feature correspondences between the models and the input image. This matching model works like a random diffusion process in image space, employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach through its implementation in the system.
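
Elastic graph matching of this kind typically balances two terms: node (feature) similarity at the candidate positions, and a penalty for distorting the graph's edge lengths. This sketch shows such a combined cost under assumed inputs; the similarity values, edge list, and weighting are illustrative, not the paper's formulation.

```python
import math

def match_cost(node_sim, edges, model_pts, image_pts, lam=0.5):
    """Elastic matching cost: reward node feature similarity, penalize
    distortion of edge lengths between the model graph and the placed
    image nodes. Lower cost = better correspondence.

    node_sim : per-node similarity in [0, 1] (e.g. a Gabor-jet score)
    edges    : (i, j) index pairs forming the face graph's edges
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    feature_term = -sum(node_sim)  # higher similarity lowers the cost
    deform_term = sum(abs(dist(model_pts[i], model_pts[j]) -
                          dist(image_pts[i], image_pts[j]))
                      for i, j in edges)
    return feature_term + lam * deform_term
```

A diffusion-style search perturbs the image-node positions and accepts moves that lower this cost, so nodes compete locally for good features while the edge term keeps the graph globally coherent.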
