• Title/Summary/Keyword: Feature Region


Facial Region Extraction in an Infrared Image (적외선 영상에서의 얼굴 영역 자동 추적)

  • Shin, S.W.;Kim, K.S.;Yoon, T.H.;Han, M.H.;Kim, I.Y.
    • Proceedings of the KIEE Conference / 2005.05a / pp.57-59 / 2005
  • In this study, an automatic tracking algorithm for the human face is proposed that utilizes the thermal properties and second-moment geometric features of an infrared image. First, facial candidates are estimated by restricting thermal values to a certain range, and a spurious-blob cleaning algorithm is then applied to track the refined facial region in the infrared image.
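The two-stage pipeline this abstract outlines, thermal-range thresholding followed by spurious-blob cleaning, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the temperature band, the 4-connectivity, and the keep-largest-blob cleaning rule are all assumptions.

```python
import numpy as np
from collections import deque

def face_candidates(thermal, lo=30.0, hi=38.0):
    """Binary mask of pixels inside an assumed facial temperature band."""
    return (thermal >= lo) & (thermal <= hi)

def largest_blob(mask):
    """Spurious-blob cleaning: keep only the largest 4-connected component."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    label = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        label += 1
        q = deque([(sy, sx)])
        labels[sy, sx] = label
        n = 0
        while q:                       # BFS over the connected component
            y, x = q.popleft()
            n += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = label
                    q.append((ny, nx))
        sizes[label] = n
    if not sizes:
        return np.zeros_like(mask)
    best = max(sizes, key=sizes.get)
    return labels == best
```

On a toy thermal image, a large warm region survives the cleaning while an isolated warm pixel is discarded.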


Discriminative Power Feature Selection Method for Motor Imagery EEG Classification in Brain Computer Interface Systems

  • Yu, XinYang;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.1 / pp.12-18 / 2013
  • Motor imagery classification in electroencephalography (EEG)-based brain-computer interface (BCI) systems is an important research area. To reduce the complexity of the classification, selected power bands and electrode channels have been widely used to extract and select features from raw EEG signals, but state-of-the-art approaches still lose classification accuracy. To solve this problem, we propose a discriminative feature extraction algorithm based on power bands with principal component analysis (PCA). First, the raw EEG signals from the motor cortex area were filtered with a bandpass filter over the μ and β bands. This research considered the power bands within a 0.4-second epoch to select the optimal feature space region. Next, the total feature dimensions were reduced by PCA and transformed into a final feature vector set. The selected features were classified by applying a support vector machine (SVM). The proposed method was compared with a state-of-the-art power band feature and shown to improve classification accuracy.
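The feature pipeline this abstract describes, band power in the μ (8-13 Hz) and β (13-30 Hz) bands followed by PCA reduction, can be sketched with numpy alone. The FFT periodogram stands in for the paper's bandpass filtering, the sampling rate and band edges are assumptions, and the final SVM step is omitted here.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power of `signal` in the [lo, hi] Hz band via the FFT periodogram."""
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

def extract_features(trials, fs=250):
    """One (mu, beta) power pair per channel for each trial."""
    feats = []
    for trial in trials:                          # trial: (channels, samples)
        f = [band_power(ch, fs, lo, hi)
             for ch in trial for lo, hi in ((8, 13), (13, 30))]
        feats.append(f)
    return np.asarray(feats)

def pca_reduce(X, k=2):
    """Project the feature matrix onto its top-k principal components (SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

A 10 Hz test tone, for instance, should place nearly all of its power in the μ feature rather than the β feature.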

Image Retrieval Based on the Weighted and Regional Integration of CNN Features

  • Liao, Kaiyang;Fan, Bing;Zheng, Yuanlin;Lin, Guangfeng;Cao, Congjun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.894-907 / 2022
  • The features extracted by convolutional neural networks are more descriptive of images than traditional features, and their convolutional layers are more suitable for image retrieval than fully connected layers. However, convolutional layer features consume considerable time and memory if used directly to match an image. Therefore, this paper proposes a feature weighting and region integration method for convolutional layer features to form global feature vectors and subsequently use them for image matching. First, the 3D feature map of the last convolutional layer is extracted, and the convolutional features are then weighted to highlight the edge information and position information of the image. Next, we integrate several regional feature vectors, produced by sliding windows, into a global feature vector. Finally, the initial retrieval ranking is obtained by measuring the similarity of the query image and the test image with the cosine distance, and the final mean Average Precision (mAP) is obtained by using query expansion for re-ranking. We conduct experiments on the Oxford5k and Paris6k datasets and their extended versions, Oxford105k and Paris106k. These experimental results indicate that the global feature extracted by the new method describes an image better.
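The aggregation steps the abstract walks through, spatial weighting of a conv feature map, sliding-window regional pooling, integration into one global vector, and cosine ranking, can be sketched roughly as below. The paper's actual weighting scheme differs; the channel-sum weighting, window size, and sum aggregation here are illustrative assumptions.

```python
import numpy as np

def spatial_weights(fmap):
    """Per-location weight: channel-summed activation, sum-normalized."""
    w = fmap.sum(axis=0)
    return w / (w.sum() + 1e-12)

def regional_vectors(fmap, win=2, stride=1):
    """Sum-pool a sliding window over the weighted map into region vectors."""
    c, h, wd = fmap.shape
    weighted = fmap * spatial_weights(fmap)
    regions = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, wd - win + 1, stride):
            regions.append(weighted[:, y:y+win, x:x+win].sum(axis=(1, 2)))
    return np.stack(regions)

def global_descriptor(fmap, win=2):
    """Integrate regional vectors into one L2-normalized global vector."""
    g = regional_vectors(fmap, win).sum(axis=0)
    return g / (np.linalg.norm(g) + 1e-12)

def rank(query, gallery):
    """Gallery indices sorted by decreasing cosine similarity to the query."""
    return np.argsort(-(gallery @ query))
```

Because descriptors are unit-normalized, the dot product in `rank` is exactly the cosine similarity, so an identical image always ranks first.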

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls 3D facial expressions in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as a Gaussian, lack robustness to varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.


Classification based Knee Bone Detection using Context Information (문맥 정보를 이용한 분류 기반 무릎 뼈 검출 기법)

  • Shin, Seungyeon;Park, Sanghyun;Yun, Il Dong;Lee, Sang Uk
    • Journal of Broadcast Engineering / v.18 no.3 / pp.401-408 / 2013
  • In this paper, we propose a method that automatically detects organs with similar appearances in medical images by learning both context and appearance features. Since most existing detection methods use only the appearance feature to learn the classifier, detection errors occur when the medical images contain multiple organs with similar appearances. In the proposed method, based on the probabilities produced by the appearance-based classifier, a new classifier incorporating the context feature is created by iteratively learning the characteristics of the probability distribution around the voxel of interest. Furthermore, both efficiency and accuracy are improved through a 'region-based voting scheme' in the test stage. To evaluate the performance of the proposed method, we detect the femur and tibia, which have similar appearances, in the SKI10 knee joint dataset. The proposed method outperformed detection using the appearance feature alone in terms of overall detection performance.

Feature Extraction by Line-clustering Segmentation Method (선군집분할방법에 의한 특징 추출)

  • Hwang Jae-Ho
    • The KIPS Transactions:PartB / v.13B no.4 s.107 / pp.401-408 / 2006
  • In this paper, we propose a new class of segmentation technique for feature extraction based on statistical, region-wise classification along each vertical or horizontal line of digital image data. The data are processed and clustered line by line, unlike point- or space-based processing. The technique is designed to segment gray-scale sectional images using horizontal and vertical line processes, exploiting their statistical and property differences, and to extract features from the result. It yields efficient results for images whose gray levels overlap and that have no clear threshold; such images are also hard to segment with global or local threshold methods. The pixels along a line reveal the sectionable data, and clusters can be set according to cluster quality based on differences in the histogram and other statistics. The total segmentation over the line clusters is obtained by adaptive extension along the horizontal axis. Each processed region receives its own pixel value, yielding the feature extraction. The advantage and effectiveness of the line-cluster approach are shown both theoretically and through region-segmented carotid artery medical image processing.
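The line-wise idea can be illustrated with a toy stand-in: cluster each image row against that row's own statistics rather than one global threshold. The simple mean split below is only a placeholder for the paper's clustering rule, chosen to show why per-line statistics handle gray-level overlap that defeats a single global threshold.

```python
import numpy as np

def line_cluster(img):
    """Label each pixel 0/1 by comparing it to its own row's mean
    (a toy stand-in for per-line statistical clustering)."""
    row_means = img.mean(axis=1, keepdims=True)
    return (img >= row_means).astype(int)
```

In the example below, the two rows occupy disjoint gray ranges, so no single global threshold splits both rows into their two segments, while the per-line rule splits each row correctly.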

A Background Segmentation and Feature Point Extraction Method of Human Motion Recognition (동작인식을 위한 배경 분할 및 특징점 추출 방법)

  • You, Hwi-Jong;Kim, Tae-Young
    • Journal of Korea Game Society / v.11 no.2 / pp.161-166 / 2011
  • In this paper, we propose a novel background segmentation and feature point extraction method for human motion in an augmented reality game. First, our method transforms the input image from RGB color space to HSV color space, then segments the skin-colored area using double thresholds on the H and S values. It also segments the moving area using time-difference images and then removes noise from that area using the Hessian affine region detector. The skin-colored area that overlaps the moving area is segmented as the human motion. Next, feature points for the human motion are extracted by calculating the center point of each block in the previously obtained image. Experiments on various input images show that our method performs correct background segmentation and feature point extraction at 12 frames per second.
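The first step, converting to HSV and double-thresholding H and S, can be sketched as below. The threshold values are illustrative assumptions, not the paper's, and the HSV conversion is the standard hexcone formula written out in numpy.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> (H, S, V) for float images in [0, 1]; H in [0, 1)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    d = mx - mn
    h = np.zeros_like(mx)
    mask = d > 0
    idx = mask & (mx == r)
    h[idx] = ((g - b)[idx] / d[idx]) % 6
    idx = mask & (mx == g) & (mx != r)
    h[idx] = (b - r)[idx] / d[idx] + 2
    idx = mask & (mx == b) & (mx != r) & (mx != g)
    h[idx] = (r - g)[idx] / d[idx] + 4
    h /= 6
    s = np.where(mx > 0, d / np.where(mx > 0, mx, 1), 0)
    return h, s, mx

def skin_mask(img, h_lo=0.0, h_hi=0.14, s_lo=0.15, s_hi=0.75):
    """Double threshold on hue and saturation (assumed skin ranges)."""
    h, s, _ = rgb_to_hsv(img)
    return (h >= h_lo) & (h <= h_hi) & (s >= s_lo) & (s <= s_hi)
```

Both the hue band and the saturation band must match, which is what rejects strongly colored non-skin pixels that a single hue threshold would pass.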

A Computer Vision-based Method for Detecting Rear Vehicles at Night (컴퓨터비전 기반의 야간 후방 차량 탐지 방법)

  • 노광현;문순환;한민홍
    • Journal of the Institute of Convergence Signal Processing / v.5 no.3 / pp.181-189 / 2004
  • This paper describes a method for detecting vehicles to the rear and rear-side at night by using headlight features. A headlight is a salient feature that can discriminate a vehicle from a dark background. In the segmentation process, a night image is transformed into a binary image of black background and white regions by gray-level thresholding, and noise in the binary image is eliminated by a morphological operation. In the feature extraction process, geometric features and moment invariant features of a headlight are defined and measured in each segmented region. Regions that are not appropriate for a headlight are filtered out using the geometric feature measurements. In region classification, a pair of headlights is detected using relational features based on the symmetry of a headlight pair. Experimental results show that this method is well suited to an approaching-vehicle detection system at nighttime.
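The final symmetry-based pairing step can be sketched in isolation: given the centroids and areas of bright regions that survived thresholding and geometric filtering, pair regions that are roughly horizontally aligned and similar in size. The tolerances and the pairing rule itself are illustrative assumptions, not the paper's relational features.

```python
import numpy as np

def pair_headlights(centroids, areas, y_tol=5.0, area_ratio=1.5):
    """Return index pairs (i, j) of blobs that plausibly form a headlight
    pair: similar vertical position and similar area (symmetry cue)."""
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            (yi, xi), (yj, xj) = centroids[i], centroids[j]
            same_row = abs(yi - yj) <= y_tol          # horizontal alignment
            a, b = sorted((areas[i], areas[j]))
            similar = b <= area_ratio * a             # comparable blob size
            if same_row and similar and xi != xj:
                pairs.append((i, j))
    return pairs
```

A street lamp or reflection that is bright but vertically offset from the headlights fails the alignment test and is never paired.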


Robust Traffic Monitoring System by Spatio-Temporal Image Analysis (시공간 영상 분석에 의한 강건한 교통 모니터링 시스템)

  • 이대호;박영태
    • Journal of KIISE:Software and Applications / v.31 no.11 / pp.1534-1542 / 2004
  • A novel vision-based scheme for extracting real-time traffic information parameters is presented. The method is based on region classification followed by spatio-temporal image analysis. The detection region image for each traffic lane is classified into one of three categories: road, vehicle, or shadow, using statistical and structural features. Misclassifications in a frame are corrected by using temporally correlated features of vehicles in the spatio-temporal image. Since only local images of the detection regions are processed, real-time operation at more than 30 frames per second is achieved without dedicated parallel processors, while ensuring detection performance robust to variations in weather conditions, shadows, and traffic load.

A study on Robust Feature Image for Texture Classification and Detection (텍스쳐 분류 및 검출을 위한 강인한 특징이미지에 관한 연구)

  • Kim, Young-Sub;Ahn, Jong-Young;Kim, Sang-Bum;Hur, Kang-In
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.5 / pp.133-138 / 2010
  • In this paper, we construct a feature image that includes both the spatial and statistical properties of an image, and form covariance matrices using region variance magnitudes. Using these for texture classification, this paper proposes a texture classification method that is robust to illumination, noise, and rotation. We also offer a way to minimize the running time of texture classification by using an integral image, an intermediate image that allows fast calculation of region sums. To evaluate the proposed method, we use Brodatz texture images, to which we add noise, apply histogram specification, and create rotated versions. The experiments achieve classification performance above 96%.
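The integral-image trick the abstract relies on is standard: after one cumulative-sum pass over the image, the sum of any rectangular region costs only four table lookups. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded first row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

The zero padding makes the four-lookup formula valid even for regions touching the top or left border, which is why covariance-style region statistics can be computed at every window position without re-summing pixels.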