• Title/Summary/Keyword: Object Feature Extraction

Background Segmentation in Color Image Using Self-Organizing Feature Selection (자기 조직화 기법을 활용한 컬러 영상 배경 영역 추출)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartB
    • /
    • v.15B no.5
    • /
    • pp.407-412
    • /
    • 2008
  • Color segmentation is one of the most challenging problems in image processing, especially when handling images with cluttered backgrounds. A great number of color segmentation methods have been developed and applied to real problems. In this paper, we suggest a new methodology. Our approach focuses on background extraction, as a complementary operation to standard foreground object segmentation, using the self-organizing feature-selection property of an unsupervised self-learning paradigm based on a competitive algorithm. The results of our studies show that background segmentation can be achieved in an efficient manner.
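
For reference, a minimal sketch of the idea: winner-take-all competitive learning clusters pixel colors, and the most populous clusters are treated as background. This is an illustrative stand-in, not the paper's implementation; the coverage heuristic and all parameter values are assumptions.

```python
import numpy as np

def background_mask(image, k=8, lr=0.05, steps=20000, coverage=0.6, seed=0):
    """image: (H, W, 3) uint8 RGB array; returns a boolean background mask."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64) / 255.0
    rng = np.random.default_rng(seed)
    proto = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    for i in rng.choice(len(pixels), steps):            # online training
        x = pixels[i]
        j = np.argmin(((proto - x) ** 2).sum(axis=1))   # winner-take-all
        proto[j] += lr * (x - proto[j])                 # move winner toward x
    labels = np.argmin(((pixels[:, None] - proto[None]) ** 2).sum(-1), axis=1)
    counts = np.bincount(labels, minlength=k)
    order = np.argsort(counts)[::-1]                    # largest clusters first
    # Mark clusters as background until they cover `coverage` of the pixels.
    n_bg = np.searchsorted(np.cumsum(counts[order]), coverage * len(pixels)) + 1
    return np.isin(labels, order[:n_bg]).reshape(h, w)
```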

Face Feature Extraction for Face Recognition (얼굴 인식을 위한 얼굴 특징점 추출)

  • Yang, Ryong;Chae, Duk-Jae;Lee, Sang-Bum
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.12
    • /
    • pp.1765-1774
    • /
    • 2002
  • Face recognition is currently a field of active research, but several problems remain to be solved. In particular, a face must be recognized while taking into account variations in location, lighting, and camera conditions. In this paper, we propose a new method to find facial features quickly and accurately from images scanned from a PC camera and an ID card picture. The method converts the RGB color space to YUV and extracts the facial skin color by equalizing the histogram of the Y (luminance) component. It then uses a V' component, transformed from the V component of YUV, to find the facial features. The experimental results show that correct input face images are obtained from both ID card pictures and PC cameras.
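
A minimal sketch of the color-space step described above, using OpenCV; this is not the authors' implementation, and the skin-tone threshold values on the chrominance channels are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def skin_candidate_mask(bgr_image):
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    y_eq = cv2.equalizeHist(y)            # equalize luminance to reduce lighting dependence
    yuv_eq = cv2.merge([y_eq, u, v])
    # Hypothetical skin-tone ranges on the U and V channels.
    mask = cv2.inRange(yuv_eq, (0, 85, 135), (255, 135, 180))
    # Small morphological opening removes isolated false positives.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return mask

# Usage: mask = skin_candidate_mask(cv2.imread("face.jpg"))
```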

Modified YOLOv4S based on Deep learning with Feature Fusion and Spatial Attention (특징 융합과 공간 강조를 적용한 딥러닝 기반의 개선된 YOLOv4S)

  • Hwang, Beom-Yeon;Lee, Sang-Hun;Lee, Seung-Hyun
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.31-37
    • /
    • 2021
  • This paper proposes a modified YOLOv4S based on feature fusion and spatial attention for detecting small and occluded objects. Conventional YOLOv4S is a lightweight network and lacks feature extraction capability compared to deeper networks. The proposed method first combines feature maps of different scales through feature fusion to enhance semantic and low-level information, and expands the receptive field with dilated convolution, improving detection accuracy for small and occluded objects. Second, by refining the conventional spatial information with spatial attention, it improves detection accuracy for objects that are misclassified or occluded by one another. The PASCAL VOC and COCO datasets were used for quantitative evaluation. The proposed method improved mAP by 2.7% on the PASCAL VOC dataset and by 1.8% on the COCO dataset compared to conventional YOLOv4S.
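
For orientation, a minimal sketch of a CBAM-style spatial attention block and a dilated convolution in PyTorch; the abstract does not give the paper's exact module design, so the structure below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, then learn a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn  # emphasize informative spatial locations

# Dilated 3x3 convolution: enlarges the receptive field while keeping
# the spatial size (padding = dilation for a 3x3 kernel).
dilated = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)
```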

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2124-2148
    • /
    • 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach to FER that is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improve the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely, the gray-value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor based on a combination of the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of the eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method. Meanwhile, the recognition rate of the method is not significantly affected under Gaussian noise or salt-and-pepper noise conditions.
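
A minimal sketch of the baseline the paper builds on: standard Perona-Malik anisotropic diffusion in NumPy. The paper modifies the diffusion coefficient using the object/background gray-value difference and the object's gradient magnitude; the standard exp(-(|grad|/K)^2) edge-stopping function below is an illustrative baseline only.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, K=15.0, lam=0.2):
    """Perona-Malik diffusion; smooths noise while preserving edges.
    Borders wrap around (np.roll) for brevity."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Finite differences toward the four neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function: diffuse less across strong edges.
        g = lambda d: np.exp(-(d / K) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```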

Splitting Rules using Intervals for Object Classification in Image Databases (이미지 데이터베이스에서 인터벌을 이용한 객체분류를 위한 분리 방법)

  • Cho, June-Suh;Choi, Joon-Soo
    • The KIPS Transactions:PartD
    • /
    • v.12D no.6 s.102
    • /
    • pp.829-836
    • /
    • 2005
  • Assigning a splitting criterion for correct object classification is the main issue in all decision trees. This paper describes new splitting rules for classification in order to find an optimal split point. Unlike current splitting rules, which are derived by searching all threshold values, this paper proposes splitting rules based on the probabilities of pre-assigned intervals. Our methodology lets the user control the accuracy of the tree by adjusting the number of intervals. In addition, we applied the proposed splitting rules to a set of image data retrieved by parameterized feature extraction in order to recognize image objects.
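
A minimal sketch of the interval idea: instead of testing every observed threshold, candidate splits are evaluated only at the boundaries of a fixed number of equal-width intervals, so increasing the interval count trades speed for accuracy. The Gini impurity criterion is an assumption; the abstract does not name the impurity measure.

```python
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def best_interval_split(x, y, n_intervals=10):
    """x: (N,) feature values; y: (N,) class labels."""
    # Candidate thresholds: interior boundaries of equal-width intervals.
    edges = np.linspace(x.min(), x.max(), n_intervals + 1)[1:-1]
    best = (None, np.inf)
    for t in edges:                       # far fewer than all N thresholds
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best  # (threshold, weighted impurity)
```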

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots using an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, based on an object extraction method that applies Lucas-Kanade optical flow motion detection to images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of all the individual robots. Global mapping takes a long time to process because map data from individual robots must be exchanged while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all the information around a robot simultaneously. The computational cost of the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for each individual robot. Second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps built with the proposed algorithm against real maps.
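
A minimal sketch of the object-extraction step using Lucas-Kanade optical flow in OpenCV; the SLAM and map-merging stages are beyond a short example, and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def moving_feature_points(prev_gray, curr_gray, motion_thresh=1.0):
    # Detect corners in the previous frame, then track them into the next.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    good_new = nxt[status.ravel() == 1].reshape(-1, 2)
    # Keep points whose displacement exceeds a threshold: moving-object candidates.
    disp = np.linalg.norm(good_new - good_old, axis=1)
    return good_new[disp > motion_thresh]
```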

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam;Kim, Hyung-O;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International journal of advanced smart convergence
    • /
    • v.4 no.2
    • /
    • pp.20-28
    • /
    • 2015
  • In this paper, we propose an effective tracking algorithm with an appearance model based on features extracted from video frames, adapting to posture variation and camera viewpoint changes by employing non-adaptive random projections that preserve the structure of the image feature space of objects. Existing online tracking algorithms update their models with features from recent video frames, yet numerous issues remain to be addressed despite the improvements in tracking. Data-dependent adaptive appearance models often encounter drift problems because online algorithms do not receive the amount of data required for online learning; the proposed non-adaptive projections avoid this dependence on the data.
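
A minimal sketch of a non-adaptive (data-independent) random projection for compressing image-patch features, in the spirit of compressive tracking; the full tracker is not reproduced here, and the sparse Achlioptas-style construction is an illustrative choice rather than the paper's exact matrix.

```python
import numpy as np

def make_projection(d_in, d_out=50, seed=0):
    """Sparse random matrix, fixed once and never updated from data."""
    rng = np.random.default_rng(seed)
    # Achlioptas-style entries {+1, 0, -1} with probabilities {1/6, 2/3, 1/6};
    # sqrt(3) scaling preserves expected squared norms.
    R = rng.choice([1.0, 0.0, -1.0], size=(d_out, d_in), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0) * R

def compress_patch(patch, R):
    """patch: 2-D gray image patch; returns a low-dimensional feature vector."""
    return R @ patch.astype(np.float64).ravel()

# Usage (hypothetical patch size): R = make_projection(32 * 32)
#                                  feature = compress_patch(patch, R)
```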

3D surface Reconstruction of Moving Object Using Multi-Laser Stripes Irradiation (멀티 레이저 라인 조사를 이용한 비등속 이동물체의 3차원 형상 복원)

  • Yi, Young-Youl;Ye, Soo-Young;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.144-152
    • /
    • 2007
  • We propose a 3D modeling method for surface inspection of an object moving at non-constant velocity. The laser lines reflect the surface curvature, so 3D surface information can be acquired by analyzing the laser lines projected onto the object. In this paper, we use a multi-line laser to combine the robustness of the single-stripe method with the high speed of the single-frame method. Binarization and a channel edge extraction method are used for robust laser line extraction, and a new labeling method is used for laser line labeling. We acquired inter-frame correspondence information between the reconstructed 3D frames by feature-point matching and registered the frames into one whole image. We verified the superiority of the proposed method by applying it to a container damage inspection system.
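
A minimal sketch of laser-stripe extraction in NumPy/OpenCV: binarize the frame, label connected components, and take the per-column mean row of each component as the stripe center. The paper's channel-edge extraction and labeling scheme are more elaborate; this illustrates only the basic idea, with an assumed brightness threshold.

```python
import cv2
import numpy as np

def extract_stripe_centers(gray, thresh=200):
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Label connected components: ideally one label per laser stripe.
    n_labels, labels = cv2.connectedComponents(binary)
    stripes = []
    for lbl in range(1, n_labels):            # label 0 is the background
        ys, xs = np.nonzero(labels == lbl)
        # Per-column mean row approximates the stripe's center line.
        centers = [(x, ys[xs == x].mean()) for x in np.unique(xs)]
        stripes.append(np.array(centers))
    return stripes  # list of (N_i, 2) arrays of (column, row) centers
```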

A Study on a 3D Modeling for surface Inspection of a Moving Object (비등속 이동물체의 표면 검사를 위한 3D 모델링 기술에 관한 연구)

  • Ye, Soo-Young;Yi, Young-Youl;Nam, Ki-Gon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.1
    • /
    • pp.15-21
    • /
    • 2007
  • We propose a 3D modeling method for surface inspection of an object moving at non-constant velocity. The laser lines reflect the surface curvature, and 3D surface information can be acquired by analyzing the laser lines projected onto the object. In this paper, we use a multi-line laser to improve on the single-stripe method while keeping the high speed of the single-frame method. Binarization and edge extraction of the frame image are proposed for robust extraction of each laser line, and a new labeling method is used for laser line labeling. We acquired feature points for image matching from the frame data and registered the frames to obtain a 3D shape image. We verified the superiority of the proposed method by applying it to the inspection of container damage.
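
Complementing the stripe-extraction sketch above, a minimal sketch of the feature-based frame registration step using OpenCV ORB features and RANSAC; the paper registers reconstructed 3D profile frames, which the abstract does not detail, so this 2-D rigid-transform estimate is an illustrative stand-in.

```python
import cv2
import numpy as np

def register_frames(frame_a, frame_b):
    """Estimate a rigid 2-D transform mapping frame_a into frame_b."""
    orb = cv2.ORB_create(500)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched feature pairs.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # 2x3 transform (rotation, uniform scale, translation)
```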

3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.135-142
    • /
    • 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent depth-recovery error due to the paraperspective camera model is removed by using stereo image analysis. A 3D synthetic object with 21 feature points at various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
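
A minimal sketch of the core factorization step: rank-3 SVD factorization of centered feature tracks in the Tomasi-Kanade style. The paper's sequential, stereo, paraperspective formulation adds steps (including the metric upgrade resolving the 3x3 ambiguity) that are not shown here.

```python
import numpy as np

def factorize_measurements(W):
    """W: (2F, P) measurement matrix of feature tracks
    (x and y rows for F frames, P feature points)."""
    W = W - W.mean(axis=1, keepdims=True)     # center on the point centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the rank-3 subspace: motion (2F x 3) and shape (3 x P).
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S  # affine motion and shape, up to a 3x3 ambiguity
```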