• Title/Summary/Keyword: 눈 영역 추출 (eye region extraction)

Design of RBFNNs Pattern Classifier Realized with the Aid of Face Features Detection (얼굴 특징 검출에 의한 RBFNNs 패턴분류기의 설계)

  • Park, Chan-Jun;Kim, Sun-Hwan;Oh, Sung-Kwun;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.2 / pp.120-126 / 2016
  • In this study, we propose a method for effectively detecting and recognizing faces in images using an RBFNNs pattern classifier and an HCbCr-based skin color feature. Skin color detection is computationally fast and robust to pattern variation in face detection; however, objects with similar colors can be mistakenly detected as faces. To enhance the accuracy of skin detection, we therefore combine the H component obtained from the HSI color space with the Cb and Cr components obtained from the YCbCr color space. The exact location of the face is then found within the skin-color candidate region by detecting the eyes through Haar-like features. Finally, face recognition is performed using the proposed FCM-based RBFNNs pattern classifier. We present the results of computer simulation experiments carried out on the Cambridge ICPR image database.
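The H-plus-CbCr skin test described in this abstract can be sketched as follows. The conversions are the standard HSV hue formula and the ITU-R BT.601 YCbCr formula, but the threshold ranges are illustrative assumptions, not the paper's values.

```python
def rgb_to_h(r, g, b):
    # Hue in degrees from RGB normalized to [0, 1] (standard HSV formula).
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    if d == 0:
        return 0.0
    if mx == r:
        return (60 * ((g - b) / d)) % 360
    if mx == g:
        return 60 * ((b - r) / d) + 120
    return 60 * ((r - g) / d) + 240

def rgb_to_cbcr(r, g, b):
    # ITU-R BT.601 full-range conversion; inputs in [0, 255].
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin(r, g, b):
    # Hypothetical thresholds for illustration only; the paper's exact
    # H and CbCr ranges are not stated in the abstract.
    h = rgb_to_h(r / 255, g / 255, b / 255)
    cb, cr = rgb_to_cbcr(r, g, b)
    return (h < 50 or h > 340) and 77 <= cb <= 127 and 133 <= cr <= 173
```

Requiring all three components to agree is what suppresses skin-colored background objects that a single color space would accept.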

3D Face Recognition using Longitudinal Section and Transection (종단면과 횡단면을 이용한 3차원 얼굴 인식)

  • 이영학;박건우;이태홍
    • Journal of KIISE:Software and Applications / v.30 no.9 / pp.885-893 / 2003
  • In this paper, a practical person-verification system is proposed that uses longitudinal-section and transection features extracted from a rotation-compensated 3D face image. The approach works by finding the nose tip, which protrudes from the face surface. To extract features from a 3D face image, the pose must first be normalized to an upright frontal orientation. Next, special points in regions such as the nose, eyes, and mouth are detected. The depth, area, and volume of the nose are calculated from three longitudinal sections and one transection, and the eye interval and mouth width are also computed, yielding 12 facial features in total. The L1 measure is used to compare two feature vectors because it is simple and robust. In the experiments, the proposed method achieves a recognition rate of 95.5% using the longitudinal sections and transection.
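The L1 comparison of the 12-element feature vectors might look like this minimal sketch; the gallery layout, the `verify` helper, and the threshold are hypothetical, not taken from the paper.

```python
def l1_distance(a, b):
    # L1 (city-block) measure used to compare feature vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def verify(probe, gallery, threshold):
    # Accept the closest enrolled vector if it lies within the threshold.
    # 'gallery' is a list of (name, feature_vector) pairs; in the paper
    # each vector would hold the 12 nose/eye/mouth features.
    name, feats = min(gallery, key=lambda g: l1_distance(probe, g[1]))
    return name if l1_distance(probe, feats) <= threshold else None
```

The L1 measure avoids squaring, so a single noisy feature distorts the total distance less than it would under L2.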

A Study on Facial Wrinkle Detection using Active Appearance Models (AAM을 이용한 얼굴 주름 검출에 관한 연구)

  • Lee, Sang-Bum;Kim, Tae-Mook
    • Journal of Digital Convergence / v.12 no.7 / pp.239-245 / 2014
  • In this paper, a weighted-value wrinkle detection method is suggested, based on an analysis of the overall facial features such as the face contour, face size, eyes, and ears. Firstly, the main facial elements are detected from the input images with the AAM method, which combines a shape model with an appearance model; these models are learned from training faces and then used to match the face in new images. Secondly, the face and background are separated in the image. Four points with the highest likelihood of wrinkling are selected on the face and assigned high wrinkle weights. Finally, wrinkles are detected by applying the Canny edge algorithm at these weighted points of interest. The suggested algorithm was tested on a variety of images, and the experiments show good face and wrinkle detection results for most of them.
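A rough sketch of the weighted edge check at the selected interest points, assuming grayscale patches given as nested lists. A simple finite-difference gradient stands in here for the Canny detector, and the weight and threshold values are illustrative assumptions.

```python
import math

def wrinkle_score(patch, weight, threshold=50.0):
    # Mean gradient magnitude over the patch, scaled by the wrinkle
    # weight assigned to this region. The four high-probability wrinkle
    # points from the paper would each get a larger 'weight'.
    h, w = len(patch), len(patch[0])
    total = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = patch[y][x + 1] - patch[y][x]   # horizontal difference
            gy = patch[y + 1][x] - patch[y][x]   # vertical difference
            total += math.hypot(gx, gy)
    return total / ((h - 1) * (w - 1)) * weight > threshold
```

The weighting means the same edge response clears the threshold in a likely wrinkle region but not elsewhere, which is the effect the abstract describes.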

Skew correction of face image using eye components extraction (눈 영역 추출에 의한 얼굴 기울기 교정)

  • Yoon, Ho-Sub;Wang, Min;Min, Byung-Woo
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.12 / pp.71-83 / 1996
  • This paper describes a facial component detection and skew correction algorithm for face recognition. We use a priori knowledge and models about isolated regions to detect eye locations in face images captured in natural office environments. The relations between human facial components are represented by several rules. We adopt an edge detection algorithm using the Sobel mask and an 8-connected labelling algorithm using array pointers. A labeled image contains many isolated components. Initially, eye size rules are applied; these are not much affected by irregular input image conditions. The eye size rules constrain the size of a component and the ratio between its horizontal and vertical dimensions, and yield 2 to 16 candidate eye components. Next, candidate eye pairs are verified using location and shape information, and one eye pair location is decided using face models of the eyes and eyebrows. Once we extract the eye regions, we connect the center points of the two eyes and calculate the angle between them, then rotate the face to compensate for that angle so that the two eyes lie on a horizontal line. We tested 120 input images from 40 people and achieved a 91.7% success rate using the eye size rules and face model. The main causes of the 8.3% of failures are components adjacent to the eyes, such as eyebrows. To detect facial components in the failed images, we are developing a mouth region processing module.
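The skew-correction step, connecting the two eye centers and rotating by the measured angle, can be sketched as pure geometry (the function names are ours, not the paper's):

```python
import math

def skew_angle(left_eye, right_eye):
    # Angle of the line joining the two eye centers, in degrees.
    (x1, y1), (x2, y2) = left_eye, right_eye
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def rotate_point(p, center, deg):
    # Rotate a point about 'center'; applying -skew_angle to every
    # pixel coordinate puts the eyes on a horizontal line.
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return (center[0] + x * c - y * s, center[1] + x * s + y * c)
```

Rotating by the negative of the measured angle maps the second eye center onto the same horizontal line as the first, which is exactly the compensation the abstract describes.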

Flavor Characteristics of Volatile Compounds from Shrimp by GC Olfactometry (GCO) (GC Olfactometry를 이용한 새우의 휘발성성분 특성평가)

  • 이미정;이신조;조지은;정은주;김명찬;김경환;이양봉
    • Journal of the Korean Society of Food Science and Nutrition / v.31 no.6 / pp.953-957 / 2002
  • Volatile compounds from shrimp whole body (SWB) and shrimp shell waste (SSW) were isolated and identified by a combination of SDE (simultaneous steam distillation and solvent extraction), GC (gas chromatography, HP-5890 Plus), and MSD (mass selective detector) or olfactometry. The numbers of peaks isolated from SWB and SSW were 20 and 46, respectively, and the amounts of volatile compounds isolated from SSW were higher than those from SWB. SWB produced more low-boiling compounds (below 70°C), while SSW produced more high-boiling compounds (above 100°C). The volatile compounds identified from SSW comprised 9 pyrazines, 5 acids, 4 aldehydes, and 4 alcohols. These volatile compounds were evaluated by aroma extract dilution analysis and gas chromatography olfactometry (GCO). Some compounds that were not detected by GC-FID or GC-MSD were found by GCO to carry a strong shrimp flavor with a log3 FD value of 3. Strong shrimp odors were detected at low temperature, while nutty aromatic odors and unpleasant oily smells were found at high temperature.

Error Recovery by the Classification of Candidate Motion Vectors for H.263 Video Communications (후보벡터 분류에 의한 영상 에러 복원)

  • Son, Nam-Rye;Lee, Guee-Sang
    • The KIPS Transactions:PartB / v.10B no.2 / pp.163-168 / 2003
  • In transmitting a compressed video bit-stream over the Internet, packet loss causes error propagation in both the spatial and temporal domains, which in turn leads to severe degradation in image quality. In this paper, a new approach is proposed for recovering lost or erroneous motion vectors (MVs) by classifying the movements of neighboring blocks according to their homogeneity. The MVs of neighboring blocks are classified by direction, and a representative value for each class is determined to obtain the candidate MV set. The distortion of each candidate is then computed, and the MV with the minimum distortion is selected. Experimental results show that the proposed algorithm performs better than existing methods in many cases.
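A minimal sketch of the candidate-MV construction described above, assuming 2-D motion vectors and a caller-supplied distortion function; the eight direction bins and the use of per-class means as representatives are illustrative choices, not the paper's exact parameters.

```python
import math

def candidate_mvs(neighbor_mvs, n_bins=8):
    # Group neighboring motion vectors into direction classes and take
    # the mean of each class as its representative candidate.
    bins = {}
    for dx, dy in neighbor_mvs:
        ang = math.atan2(dy, dx) % (2 * math.pi)
        bins.setdefault(int(ang / (2 * math.pi) * n_bins), []).append((dx, dy))
    return [(sum(v[0] for v in g) / len(g), sum(v[1] for v in g) / len(g))
            for g in bins.values()]

def recover_mv(neighbor_mvs, distortion):
    # Select the candidate whose (e.g. boundary-matching) distortion
    # is minimal; 'distortion' maps a candidate MV to a cost.
    return min(candidate_mvs(neighbor_mvs), key=distortion)
```

Classifying before matching keeps the candidate set small (one per direction class), so the distortion computation runs over a handful of vectors instead of every neighbor MV.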

Recovering Corrupted Motion Vectors using Discontinuity Features of an Image (영상의 불연속 특성을 이용한 손상된 움직임 벡터 복원 기법)

  • 손남례;이귀상
    • Journal of KIISE:Information Networking / v.31 no.3 / pp.298-304 / 2004
  • In transmitting a compressed video bit-stream over the Internet, packet loss causes error propagation in both the spatial and temporal domains, which in turn leads to severe degradation in image quality. In this paper, a new error concealment algorithm is proposed to repair damaged portions of video frames at the receiver. The conventional BMA (Boundary Matching Algorithm) assumes that the pixels on the boundary of the missing block and its neighboring blocks are very similar, but it takes no account of edges or discontinuities across the boundary. In our approach, edges are detected across the boundary of the lost or erroneous block. Once the edges and their orientations are found, only the pixel differences along the expected edges across the boundary are measured, instead of computing differences between all adjacent pixels on the boundary. The proposed approach therefore needs very few computations, and the experiments show an improvement over the conventional BMA in both the subjective and objective quality of the video sequences.
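The cost reduction over conventional BMA can be illustrated with a sketch that sums absolute differences only at the columns where a detected edge crosses the block boundary (the edge detection itself is assumed to have been done elsewhere; all names here are ours):

```python
def edge_directed_cost(outer_row, inner_row, edge_cols):
    # Conventional BMA would sum |outer - inner| over every boundary
    # pixel; the edge-directed variant measures the difference only at
    # columns where an edge crosses the boundary, so far fewer terms
    # are computed per candidate motion vector.
    return sum(abs(outer_row[c] - inner_row[c]) for c in edge_cols)
```

With a handful of edge crossings per boundary, each candidate is scored with a few subtractions rather than one per boundary pixel, which matches the "very few computations" claim.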

Real-time Vital Signs Measurement System using Facial Image Data (안면 이미지 데이터를 이용한 실시간 생체징후 측정시스템)

  • Kim, DaeYeol;Kim, JinSoo;Lee, KwangKee
    • Journal of Broadcast Engineering / v.26 no.2 / pp.132-142 / 2021
  • The purpose of this study is to present an effective methodology for measuring heart rate, heart rate variability, oxygen saturation, respiration rate, mental stress level, and blood pressure using the mobile front camera, the most readily accessible device in daily life. Face recognition was performed in real time using BlazeFace to acquire facial image data, and the forehead was designated as the ROI (region of interest) using feature points of the eyes, nose, mouth, and ears. Representative values for each channel of the ROI were generated and aligned on the time axis to measure vital signs. The measurement method was based on the Fourier transform, with noise removed and the signal filtered according to the desired vital sign to increase measurement accuracy. To verify the results, the vital signs measured from facial image data were compared with a contact pulse-oximeter sensor and a TI non-contact sensor. This work confirmed that a total of six vital signs (heart rate, heart rate variability, oxygen saturation, respiration rate, stress, and blood pressure) can be extracted from facial images.
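The Fourier-based heart-rate step might be sketched as below: a plain DFT over a plausible heart-rate band picks the dominant frequency of the per-frame ROI signal. The 0.7-4 Hz band (42-240 bpm) and the omission of detrending and filtering are our simplifications, not the paper's pipeline.

```python
import math, cmath

def heart_rate_bpm(signal, fps):
    # 'signal' is the per-frame mean of one ROI colour channel sampled
    # at 'fps' Hz. Find the DFT bin with the largest magnitude inside
    # the heart-rate band and convert its frequency to beats/minute.
    n = len(signal)
    mean = sum(signal) / n
    xs = [v - mean for v in signal]          # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n                      # frequency of bin k in Hz
        if 0.7 <= f <= 4.0:
            p = abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                        for i, x in enumerate(xs)))
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60
```

With a 10-second window at 30 fps the frequency resolution is 0.1 Hz, i.e. 6 bpm, which is why longer windows (or interpolation around the peak) are typically needed for finer estimates.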

f-MRI with Three-Dimensional Visual Stimulation (삼차원 시각 자극을 이용한 f-MRI 연구)

  • Kim C.Y.;Park H.J.;Oh S.J.;Ahn C.B.
    • Investigative Magnetic Resonance Imaging / v.9 no.1 / pp.24-29 / 2005
  • Purpose: Instead of conventional two-dimensional (2-D) visual stimuli, three-dimensional (3-D) visual stimuli with stereoscopic vision were employed for functional magnetic resonance imaging (f-MRI), and f-MRI with 3-D visual stimuli is investigated in comparison with f-MRI using 2-D visual stimuli. Materials and Methods: The anaglyph, which produces stereoscopic vision when color-coded images are viewed through red-blue glasses, is used for the 3-D visual stimuli; 2-D visual stimuli are used for comparison. For healthy volunteers, f-MRI experiments were performed with 2-D and 3-D visual stimuli on a 3.0 Tesla MRI system. Results: The occipital lobes were activated by the 3-D visual stimuli, similarly to f-MRI with conventional 2-D visual stimuli, but the regions activated by the 3-D stimuli were about 18% larger than those activated by the 2-D stimuli. Conclusion: Stereoscopic vision is the basis of three-dimensional human perception, and the larger activation under 3-D stimuli is attributed to the greater complexity of 3-D human vision compared with 2-D vision. f-MRI with 3-D visual stimuli may be useful in fields that rely on 3-D human vision, such as virtual reality, 3-D displays, and 3-D multimedia content.

  • PDF

A Study on Face Awareness with Free size using Multi-layer Neural Network (다층신경망을 이용한 임의의 크기를 가진 얼굴인식에 관한 연구)

  • Song, Hong-Bok;Seol, Ji-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.149-162 / 2005
  • This paper suggests a way to detect specific wanted persons in public places such as subway stations and banks by comparing color face images extracted from real-time CCTV with the face images of the designated persons. Assuming that, given the characteristics of surveillance cameras, the face information on screen changes arbitrarily and may contain many faces, the work focuses on accurate detection of the face area. To solve this problem, arbitrary face images are normalized by subsampling to 20×20 pixels, based on the perceptron neural network model suggested by F. Rosenblatt, which has the effect of recognizing the whole face. An optimal linear filter and a histogram-shaping technique were employed to minimize external interference such as lighting. An addition operation with egg-shaped masks was included in the preprocessing stage to remove unnecessary regions. The preprocessed images were divided into three receptive fields, and the specific locations of the eyes, nose, and mouth were determined through the neural network. Furthermore, the precision of the results was improved by combining three single-set networks initialized with different values.
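The 20×20 normalization by subsampling can be sketched with a nearest-neighbour reduction; the paper does not specify the interpolation scheme, so nearest-neighbour is an assumption here.

```python
def subsample(img, out_h=20, out_w=20):
    # Nearest-neighbour subsampling: normalize an arbitrary-size face
    # window (nested lists of pixel values) to the fixed 20x20 input
    # that the perceptron network expects.
    h, w = len(img), len(img[0])
    return [[img[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

Fixing the input size this way is what lets one trained network handle faces of any size in the frame: the detector slides windows of varying scale and normalizes each window before classification.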