• Title/Summary/Keyword: pose variation


POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam;Kim, Hyung-O;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International journal of advanced smart convergence / v.4 no.2 / pp.20-28 / 2015
  • In this paper, we propose an effective tracking algorithm with an appearance model built from features extracted from video frames, adapting to posture variation and camera viewpoint changes by employing non-adaptive random projections that preserve the structure of the object's image feature space. Existing online tracking algorithms update their models with features from recent video frames, yet numerous issues remain to be addressed despite the resulting improvements in tracking. In particular, data-dependent adaptive appearance models often suffer from drift because the online algorithms do not receive the amount of data required for online learning; the proposed appearance model is designed to avoid this limitation.
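A minimal sketch of the "non-adaptive random projections" idea mentioned above: a fixed sparse random matrix (in the style of sparse Johnson-Lindenstrauss / compressive-tracking measurement matrices) compresses a high-dimensional image feature while approximately preserving the structure of the feature space. The entry distribution, dimensions, and function names are illustrative assumptions, not taken from the paper.

```python
import random

def random_projection_matrix(d_out, d_in, seed=0):
    """Fixed (non-adaptive) sparse random matrix: entries are
    +sqrt(3), -sqrt(3), or 0 with probabilities 1/6, 1/6, 2/3,
    a common sparse Johnson-Lindenstrauss construction."""
    rng = random.Random(seed)  # fixed seed: the matrix never adapts
    scale = 3 ** 0.5
    matrix = []
    for _ in range(d_out):
        row = []
        for _ in range(d_in):
            u = rng.random()
            row.append(scale if u < 1 / 6 else -scale if u < 1 / 3 else 0.0)
        matrix.append(row)
    return matrix

def project(matrix, feature):
    """Compress a high-dimensional image feature into a low-dimensional
    vector; pairwise distances are approximately preserved."""
    return [sum(m * f for m, f in zip(row, feature)) for row in matrix]
```

Because the matrix is fixed once at initialization, the compressed representation needs no training data, which is exactly what makes it attractive when online learning data is scarce.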

A Study on the Assistive System for Body Correction (신체 교정을 위한 보조 시스템에 관한 연구)

  • Kim, Ho-Joon;Chung, Jae-Pil
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.4 / pp.231-235 / 2011
  • These days, the number of people with abnormal posture caused by bad habits is increasing, and as a result people suffer from various diseases and symptoms. Correcting posture requires continuous monitoring and an expenditure of time and money. In this study, we develop a posture-correcting aid system, attached to the neck and the waist, that monitors posture continuously, guides the user toward a correct pose, and records postural variation. The devised system showed good potential for guiding correct posture and treating postural defects.

Krein Space Robust Extended Kalman filter Design for Pose Estimation of Mobile Robots with Wheelbase Uncertainties (휠베이스에 불확실성을 갖는 이동로봇의 자세 추정을 위한 크라인 스페이스 강인 확장 칼만 필터의 설계)

  • Jin, Seung-Hee;Yoon, Tae-Sung;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2003.11c / pp.433-436 / 2003
  • The estimation of position and orientation constitutes an important problem in mobile robot navigation. Although odometry can be used to describe the motions of mobile robots, gaps inherently exist between the real robot and its mathematical model, caused by a number of error sources contaminating the encoder outputs. Hence, applying the standard extended Kalman filter to the nominal model cannot be expected to give satisfactory performance. As a solution to this problem, a new robust extended Kalman filter is proposed based on the Krein space approach. We consider an uncertain discrete-time nonlinear model of the mobile robot whose uncertainties are represented as sum quadratic constraints. The proposed robust filter has the merit of sharing the same recursive structure as the standard extended Kalman filter and can therefore be easily designed to account effectively for the uncertainties. Simulations are given to verify the robustness against parameter variation as well as the reliable performance of the proposed robust filter.
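Since the abstract emphasizes that the robust filter keeps the recursive structure of the standard EKF, the prediction step of that shared structure can be sketched for an odometry-driven pose estimate. The unicycle-style motion model below is an assumption for illustration, not the paper's exact model; the wheelbase uncertainty the paper treats would enter through the odometry increments.

```python
import math

def motion_model(pose, d, dtheta):
    """Odometry motion model: advance the robot pose (x, y, theta)
    by travelled distance d and heading change dtheta."""
    x, y, th = pose
    return (x + d * math.cos(th), y + d * math.sin(th), th + dtheta)

def motion_jacobian(pose, d):
    """Jacobian F of the motion model w.r.t. the state, used in the
    EKF covariance prediction P' = F P F^T + Q."""
    _, _, th = pose
    return [[1.0, 0.0, -d * math.sin(th)],
            [0.0, 1.0,  d * math.cos(th)],
            [0.0, 0.0,  1.0]]

def predict_covariance(P, F, Q):
    """Covariance prediction P' = F P F^T + Q with plain 3x3 lists."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    Ft = [[F[j][i] for j in range(3)] for i in range(3)]
    FPFt = matmul(matmul(F, P), Ft)
    return [[FPFt[i][j] + Q[i][j] for j in range(3)] for i in range(3)]
```

The Krein-space robust filter replaces the gain computation but reuses exactly this predict/update loop, which is why it is as cheap to implement as the standard EKF.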


A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen;Liang, Mengti;Song, Wei;Xiao, Ke
    • Journal of Information Processing Systems / v.14 no.6 / pp.1494-1507 / 2018
  • Intelligent human identification using face information has been a research hotspot, with applications ranging from the Internet of Things (IoT), intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Since 2D face images are usually captured from a long distance in an unconstrained environment, fully exploiting this advantage and making human recognition appropriate for wider intelligent applications with higher security and convenience requires overcoming several key difficulties: gray-scale change caused by illumination variance, occlusion caused by glasses, hair, or a scarf, and self-occlusion and deformation caused by pose or expression variation. Many solutions have been proposed to conquer these difficulties, but most improve recognition performance under only one influencing factor, which still cannot meet real face recognition scenarios. In this paper we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Abundant experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.

A Multimodal Fusion Method Based on a Rotation Invariant Hierarchical Model for Finger-based Recognition

  • Zhong, Zhen;Gao, Wanlin;Wang, Minjuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.131-146 / 2021
  • Multimodal biometric recognition has been an active topic in recent years because of its higher convenience. Owing to the high user convenience of the finger, finger-based personal identification has been widely used in practice. Hence, taking Finger-Print (FP), Finger-Vein (FV), and Finger-Knuckle-Print (FKP) as the ingredient characteristics, their feature representations are helpful for improving universality and reliability in identification. To fuse the multimodal finger features together effectively, a new robust representation algorithm is proposed based on a hierarchical model. Firstly, to obtain more robust features, feature maps are obtained by Gabor magnitude feature coding and then described by the Local Binary Pattern (LBP). Secondly, the LGBP-based feature maps are processed hierarchically in a bottom-up mode by variable rectangle and circle granules, respectively. Finally, the intensity of each granule is represented by Local-invariant Gray Features (LGFs), together forming Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). Experimental results reveal that the proposed algorithm is capable of handling rotation variation of finger pose and achieves a lower Equal Error Rate (EER) on our homemade database.
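The LBP step named above can be sketched with the basic 8-neighbour operator: each neighbour of a pixel contributes one bit depending on whether it is at least as bright as the centre, yielding an 8-bit texture code. This is the textbook LBP; the paper's LGBP variant applies it to Gabor magnitude maps and may differ in sampling details.

```python
def lbp_code(patch):
    """Basic 8-neighbour Local Binary Pattern for a 3x3 patch
    (rows of gray levels): each neighbour >= centre sets one bit,
    walking clockwise from the top-left corner."""
    centre = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code
```

Because the code depends only on sign comparisons against the centre, it is invariant to monotonic gray-level changes, which is what makes LBP a robust descriptor of local micro-texture.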

Method for simultaneous analysis of bisphenols and phthalates in corn oil via liquid chromatography-tandem mass spectrometry

  • Min-Chul Shin;Hee-Jin Jeong;Seoung-Min Lee;Jong-Su Seo;Jong-Hwan Kim
    • Analytical Science and Technology / v.37 no.5 / pp.271-279 / 2024
  • Bisphenols and phthalates are endocrine-disrupting chemicals that are commonly used in packaging and as plasticizers. However, they pose health risks through ingestion, inhalation, and dermal contact. Accurate analysis of these pollutants is challenging owing to their low concentration and their presence in complex oil matrices. Therefore, they require efficient extraction and detection methods. In this study, an analytical method for the simultaneous quantification of bisphenols and phthalates in corn oil is developed. The dynamic multiple reaction monitoring mode of liquid chromatography-tandem mass spectrometry is used according to the different polarities of bisphenols and phthalates. The method is validated by assessing system suitability, linearity, accuracy, precision, homogeneity, and stability. The determination coefficients are higher than 0.99, which is acceptable. The percentage recovery and coefficient of variation of the accuracy and precision confirm that this analytical method is capable of simultaneously quantifying bisphenols and phthalates in corn oil. The bisphenols and phthalates in the formulations and pretreatment samples are stable for 7 d at room temperature and 24 h in an auto-sampler. Therefore, this validated analytical method is effective for the simultaneous quantification of bisphenols and phthalates in oils.
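The validation metrics named in the abstract (percentage recovery for accuracy, coefficient of variation for precision) are standard and can be computed directly; the functions and example values below are illustrative, not data from the study.

```python
def percent_recovery(measured, spiked):
    """Recovery (%) = measured concentration / known spiked
    concentration x 100 — the accuracy metric in method validation."""
    return 100.0 * measured / spiked

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean x 100 — the
    precision metric reported alongside recovery."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * var ** 0.5 / mean
```

For example, replicate measurements of a 10 ng/mL spike that average 9.5 ng/mL give 95% recovery, and guideline acceptance limits are then checked against both numbers.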

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from the video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of a Radial Basis Function (RBF). From the experiment, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video image.
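The RBF step at the end of the pipeline above can be sketched as scattered-data interpolation: fit one weight per control point so the displacement field reproduces each control point's displacement exactly, then evaluate the field at a non-feature vertex. A Gaussian kernel and a single displacement component are assumptions for illustration; the paper's kernel choice is not stated here.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_deform(controls, displacements, point, sigma=1.0):
    """Interpolate one displacement component at `point` from the
    control-point displacements using Gaussian RBFs: solve for
    weights that reproduce each control displacement, then blend."""
    def phi(a, b):
        return math.exp(-((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
                        / (2 * sigma ** 2))
    A = [[phi(ci, cj) for cj in controls] for ci in controls]
    w = solve(A, displacements)
    return sum(wi * phi(point, ci) for wi, ci in zip(w, controls))
```

Evaluated at a control point, the field returns that point's own displacement, and it decays smoothly in between, which is why the surrounding non-feature vertices follow the animated control points without tearing.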

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.117-124
    • /
    • 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, the nose, and the mouth) using Haar-like features, which are relatively insensitive to light variation. It then tracks the feature points in every frame using optical flow in real time and determines the direction of the face from the tracked feature points. Further, to prevent falsely recognizing feature positions when their coordinates are lost during optical flow tracking, the proposed method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation rate from this template-matching check, the face direction estimation process either re-detects the facial features or continues tracking them while determining the direction of the face. The template matching initially saves the locations of four facial features (the left and right eye, the tip of the nose, and the mouth) in the feature detection phase, and triggers re-detection of the features from the input image when the similarity between the stored information and the features traced by optical flow falls below a certain threshold. The proposed approach automatically alternates between the feature detection and feature tracking phases and enables stable, real-time face pose estimation. From the experiment, we show that the proposed method efficiently estimates face direction.
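The correlation check that switches between tracking and re-detection can be sketched with normalized cross-correlation between the stored feature template and the currently tracked patch. Patches are flattened gray-level lists and the 0.8 threshold is an illustrative assumption, not the paper's value.

```python
def normalized_correlation(template, patch):
    """Normalized cross-correlation between a stored feature template
    and a candidate patch (flat gray-level lists of equal length);
    1.0 is a perfect match, values near 0 mean no correlation."""
    n = len(template)
    mt = sum(template) / n
    mp = sum(patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    return num / (dt * dp) if dt and dp else 0.0

def keep_tracking(template, patch, threshold=0.8):
    """Hybrid-loop decision: keep following optical flow only while
    the tracked patch still resembles the stored template; otherwise
    fall back to full feature detection."""
    return normalized_correlation(template, patch) >= threshold
```

Mean subtraction and normalization make the score tolerant to the light variation the abstract highlights, since uniform brightness and contrast changes cancel out.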

Face Detection Algorithm Using Pulse-Coupled Neural Network in Color Images (컬러영상에서 Pulse-Coupled Neural Network를 이용한 얼굴 추출 알고리즘)

  • Lim, Young-Wan;Na, Jin-Hee;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.617-622
    • /
    • 2004
  • In this work, we suggest a method that improves the efficiency of a face detection algorithm using a Pulse-Coupled Neural Network. A face detection algorithm that uses color information is independent of the pose, size, and occlusion of a face. However, the use of color information encounters problems arising from skin-tone colors in the background, intensity variation within faces, the presence of random noise, and so on. Under these conditions, we obtained the mean and variance of the skin-tone colors by experiment. We then introduce a preprocessing step in which a pixel with the mean skin-tone color is assigned the highest level value (255), while the other pixels in the skin-tone region take values between 0 and 255 according to a normal distribution with that variance. This preprocessing makes it easy to decide the linking coefficient of the Pulse-Coupled Neural Network.
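The preprocessing mapping described above can be sketched directly: the mean skin-tone value maps to 255 and other values fall off with a Gaussian profile set by the measured variance. A single color channel is assumed here for simplicity; the paper works in a color space that is not specified in the abstract.

```python
import math

def skin_tone_level(pixel, mean, std):
    """Map a pixel value to 0-255 with a Gaussian profile: the mean
    skin-tone value maps to 255, and values further from the mean
    decay according to the normal distribution with std deviation."""
    return int(round(255 * math.exp(-((pixel - mean) ** 2)
                                    / (2 * std ** 2))))
```

The resulting image concentrates high levels on likely skin pixels, so a single linking coefficient in the PCNN can group the face region while suppressing skin-like background clutter.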

An Improved Face Detection Method Using a Hybrid of Hausdorff and LBP Distance (Hausdorff와 LBP 거리의 융합을 이용한 개선된 얼굴검출)

  • Park, Seong-Chun;Koo, Ja-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.11
    • /
    • pp.67-73
    • /
    • 2010
  • In this paper, a new face detection method that is more accurate than conventional methods is proposed. The method utilizes a hybrid of the Hausdorff distance, based on the geometric similarity between two sets of points, and the LBP distance, based on the distribution of the local micro-texture of an image. The normalization parameters and the optimal blending factor of the two different metrics were calculated from training sample images. A popularly used face database was used to show that the proposed method is more effective and robust to variation in pose, illumination, and background than methods based on the Hausdorff distance or the LBP distance alone. In a particular case, the average error distance between the detected and the true face location was reduced to 47.9% of the result of the LBP method and 22.8% of the result of the Hausdorff method.
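The two ingredients of the hybrid above can be sketched as follows: the symmetric Hausdorff distance between 2-D point sets, and a blended score combining it with an LBP distance after normalization. The normalization constants and blending factor `alpha` are learned from training images in the paper; here they are illustrative parameters.

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets: the
    largest distance from any point in one set to its nearest
    neighbour in the other set."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(X, Y):
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

def hybrid_score(h_dist, lbp_dist, h_max, lbp_max, alpha=0.5):
    """Blend the two normalized distances; the face location with
    the smallest hybrid score is chosen as the detection."""
    return alpha * (h_dist / h_max) + (1 - alpha) * (lbp_dist / lbp_max)
```

The geometric term penalizes shape misalignment while the LBP term penalizes texture mismatch, so the blend stays reliable when either cue alone is degraded by pose, illumination, or background variation.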