• Title/Summary/Keyword: Image feature extraction


Improved Skin Color Extraction Based on Flood Fill for Face Detection (얼굴 검출을 위한 Flood Fill 기반의 개선된 피부색 추출기법)

  • Lee, Dong Woo;Lee, Sang Hun;Han, Hyun Ho;Chae, Gyoo Soo
    • Journal of the Korea Convergence Society / v.10 no.6 / pp.7-14 / 2019
  • In this paper, we propose a face detection method based on a Cascade Classifier with Haar-like features, in which skin color extraction in the YCbCr color space is complemented by the Flood Fill algorithm to recover areas lost to illumination and shadow. Because conventional skin color extraction in the YCbCr color space relies only on fixed threshold values, the Cascade Classifier can suffer from noise and loss regions caused by lighting and shadow. To address this, noise is first removed by erosion and dilation operations, and the loss regions are then estimated using the Flood Fill algorithm. Within the estimated regions, an additional YCbCr threshold range is allowed, and any remaining loss area is filled with the average color of the additionally accepted areas. Faces are then extracted with the Haar-like Cascade Classifier. Compared with a Haar-like Cascade Classifier using only the YCbCr color space, the proposed method improves accuracy by about 4% and detection rate by about 2%.
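
The pipeline this abstract describes (YCbCr thresholding, erosion/dilation, flood-fill recovery of loss regions, then a Haar cascade) can be sketched roughly as below. The threshold range, structuring element, seed point, and cascade file are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def extract_skin_mask(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used YCbCr skin range (illustrative only, not the paper's values)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Remove noise with erosion followed by dilation (opening)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)
    return mask

def fill_loss_regions(mask, seed=(0, 0)):
    # Flood-fill from an assumed background seed, then invert to locate
    # enclosed holes (candidate loss regions caused by shadow/illumination).
    h, w = mask.shape
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)
    flooded = mask.copy()
    cv2.floodFill(flooded, ff_mask, seed, 255)
    holes = cv2.bitwise_not(flooded)
    return cv2.bitwise_or(mask, holes)

def detect_faces(bgr):
    skin = fill_loss_regions(extract_skin_mask(bgr))
    roi = cv2.bitwise_and(bgr, bgr, mask=skin)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```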

Development of On-line Quality Sorting System for Dried Oak Mushroom - 3rd Prototype-

  • 김철수;김기동;조기현;이정택;김진현
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.8-15 / 2003
  • In Korea, quality evaluation of dried oak mushrooms is done by first classifying them into more than 10 categories based on the state of cap opening, surface pattern, and color. Mushrooms in each category are further classified into 3 or 4 groups by shape and size, resulting in a total of 30 to 40 grades. Quality evaluation and sorting based on these external visual features are usually done manually. Since the visual features that determine the grade are distributed over the entire surface of the mushroom, both the front (cap) and back (stem and gill) surfaces must be inspected thoroughly; in practice it is almost impossible for a human to inspect every mushroom, especially when they are fed continuously on a conveyor. In this paper, considering real-time on-line implementation, image processing algorithms using an artificial neural network were developed for grading mushrooms. The neural network operates directly on the raw gray-value image of the fed mushroom captured by the camera, without any complex processing such as feature enhancement or extraction, to identify the feeding state and to grade the mushroom. The developed algorithms were implemented in a prototype on-line grading and sorting system, designed to simplify the system requirements and the overall mechanism. The system consists of automatic mushroom feeding and handling devices, a computer vision system with a lighting chamber, a one-chip microprocessor-based controller, and pneumatic actuators. The proposed grading scheme was tested on the prototype. Network training for feeding-state recognition and grading was done with static images: 200 samples (20 grade levels, 10 per grade) were used for training, and 300 samples (20 grade levels, 15 per grade) were used to validate the trained network. By changing the orientation of each sample, 600 data sets were generated for testing, and the trained network achieved about 91% grading accuracy. Although image processing itself took less than about 0.3 second per mushroom, the actuating devices and control response raised the total to 0.6 to 0.7 second per mushroom, giving a processing capacity of 5,000 to 6,000 mushrooms per hour.

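A minimal sketch of the core idea above, grading directly from raw gray-level pixels with a small neural network, is shown below. The image size, network shape, and placeholder data are assumptions; the paper's actual architecture and training setup are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

IMG_H, IMG_W = 32, 32          # assumed down-sampled input resolution
NUM_GRADES = 20

def to_feature_vector(gray_image):
    # Raw pixel intensities only -- no feature enhancement or extraction,
    # matching the abstract's description. Real images would pass through here.
    return gray_image.astype(np.float32).reshape(-1) / 255.0

# Placeholder training data: raw-pixel vectors and grade labels 0..19
X_train = np.random.rand(200, IMG_H * IMG_W)
y_train = np.random.randint(0, NUM_GRADES, 200)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Placeholder validation set, standing in for the 300 static test images
X_test = np.random.rand(300, IMG_H * IMG_W)
grades = clf.predict(X_test)
```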

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok;So, Hyeong-Yoon;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX / v.53 no.2 / pp.5-18 / 2023
  • As drones and sensors develop rapidly, new services and value are created by fusing data acquired from the various sensors mounted on a drone. However, spatial information built through data fusion depends mainly on imagery, and data quality is determined by the specification and performance of the hardware; expensive equipment is needed to build high-quality spatial information, which makes field use difficult. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired with RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were carried out on the generated high-resolution images. The experiments showed that the high-resolution images generated by super-resolution preserved the characteristics of the original images, and that more feature points could be extracted as the resolution improved. Therefore, applying a low-resolution image to a super-resolution deep learning model is judged to be a new way to construct high-quality spatial information without being constrained by hardware.
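
The quantitative check described here, counting feature points before and after super-resolution, can be illustrated as below. The `super_resolve` function is only a placeholder (bicubic upscaling) standing in for whatever deep-learning SR model is used, and the input file name is assumed.

```python
import cv2

def super_resolve(img, scale=4):
    # Placeholder for a deep-learning super-resolution model.
    h, w = img.shape[:2]
    return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

def count_orb_features(img):
    # Feature point extraction; ORB is used here purely for illustration.
    orb = cv2.ORB_create(nfeatures=10000)
    keypoints = orb.detect(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
    return len(keypoints)

low_res = cv2.imread("drone_rgb_frame.png")          # assumed input file
high_res = super_resolve(low_res)

print("features (original):      ", count_orb_features(low_res))
print("features (super-resolved):", count_orb_features(high_res))
```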

Research on Classification of Sitting Posture with a IMU (하나의 IMU를 이용한 앉은 자세 분류 연구)

  • Kim, Yeon-Wook;Cho, Woo-Hyeong;Jeon, Yu-Yong;Lee, Sangmin
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.3 / pp.261-270 / 2017
  • Bad sitting postures are known to cause a variety of diseases and physical deformations, yet it is not easy to maintain a correct sitting posture for long periods, so methods of distinguishing and encouraging good sitting posture have been proposed continually. Previous approaches include image processing, pressure sensors attached to the chair, and IMUs (Inertial Measurement Units). Using an IMU has the advantages of a simple hardware configuration and freedom from many measurement constraints. In this paper, we studied how to distinguish sitting postures with a small amount of data using just one IMU. A feature extraction method was used to find the data that contribute least to classification, and machine learning algorithms were used to find the best sensor position and the best-performing model. The feature extraction method was PCA (Principal Component Analysis), and five machine learning models were compared: SVM (Support Vector Machine), KNN (K-Nearest Neighbor), K-means, GMM (Gaussian Mixture Model), and HMM (Hidden Markov Model). The results show that the back of the neck is the most suitable sensor position, since its classification rate was the highest for every model. Using PCA, the yaw component of the IMU data was confirmed to contribute least to classification, and removing it did not change the classification rate. SVM and KNN are the most suitable classifiers because their classification rates were higher than those of the other models.
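
A minimal sketch of the PCA-plus-classifier comparison described above, using scikit-learn. The feature layout (roll/pitch/yaw from one IMU), the number of posture classes, and the placeholder data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 3)            # placeholder roll, pitch, yaw samples
y = np.random.randint(0, 4, 500)      # placeholder posture labels

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    # PCA keeps the two most informative components, analogous to dropping
    # the axis (yaw in the paper) that contributes least to classification.
    model = make_pipeline(PCA(n_components=2), clf)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```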

Improve the Performance of People Detection using Fisher Linear Discriminant Analysis in Surveillance (서베일런스에서 피셔의 선형 판별 분석을 이용한 사람 검출의 성능 향상)

  • Kang, Sung-Kwan;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.11 no.12 / pp.295-302 / 2013
  • Many reported methods assume that the people in an image or image sequence have already been identified and localized. People detection is an important factor in overall system performance, since it is a basis technology for detecting other objects, for human-computer interaction, and for motion recognition. In this paper, we present an efficient linear discriminant approach for multi-view people detection. Our approach is based on Fisher's linear discriminant, which provides an efficient way to learn from the training data. People detection is considerably difficult because it is affected by people's poses and by changes in illumination. The proposed idea handles the multi-view, multi-scale people detection problem quickly and efficiently, making it suitable for detecting people automatically. We extract people using a Fisher linear discriminant with hierarchical models that are invariant to pose and background, and we estimate the pose of the detected people. The purpose of this paper is to classify people and non-people using the Fisher linear discriminant.
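
A minimal sketch of a people/non-people classifier based on Fisher's linear discriminant, as outlined above. The detection window size, flattened gray-level features, and placeholder data are assumptions, not the paper's hierarchical multi-view models.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

WINDOW = (64, 32)   # assumed detection window (height, width)

def window_to_feature(gray_window):
    # Flattened, normalized gray values of a candidate window.
    return gray_window.astype(np.float32).reshape(-1) / 255.0

# Placeholder training data: 1 = person window, 0 = non-person window
X = np.random.rand(1000, WINDOW[0] * WINDOW[1])
y = np.random.randint(0, 2, 1000)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

candidate = np.random.rand(1, WINDOW[0] * WINDOW[1])   # placeholder test window
print("person" if lda.predict(candidate)[0] == 1 else "non-person")
```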

Prostate Object Extraction in Ultrasound Volume Using Wavelet Transform (초음파 볼륨에서 웨이브렛 변환을 이용한 전립선 객체 추출)

  • Oh Jong-Hwan;Kim Sang-Hyun;Kim Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.67-77 / 2006
  • This thesis proposes an efficient method for extracting the prostate volume from 3D ultrasound images using the wavelet transform and SVM classification. In the proposed method, a modulus image for each 2D slice is generated by averaging the detail images of horizontal and vertical orientation over several scales, which yields the sharpest local maxima and the lowest noise power compared with any single scale. Prostate contour vertices are then determined accurately using an SVM classifier whose feature vectors are composed of intensity and texture moments measured along radial lines. Experimental results show that the proposed method yields an absolute mean distance of 1.89 pixels on average when contours drawn manually by an expert are used as reference data.
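
A rough sketch of the multi-scale modulus image described above, using the stationary wavelet transform so that the detail images of all scales keep the same size. The wavelet family and number of levels are assumptions (and the image sides are assumed divisible by 2**LEVELS); the radial-line moment features and the SVM stage are only indicated in the comments.

```python
import numpy as np
import pywt

LEVELS = 3

def modulus_image(gray):
    # Horizontal (cH) and vertical (cV) details at each scale, combined into a
    # gradient-like modulus and averaged over scales.
    coeffs = pywt.swt2(gray.astype(np.float32), "haar", level=LEVELS)
    acc = np.zeros_like(gray, dtype=np.float32)
    for _cA, (cH, cV, _cD) in coeffs:
        acc += np.sqrt(cH ** 2 + cV ** 2)
    return acc / LEVELS

# Intensity/texture moments sampled along radial lines from a seed point would
# then form feature vectors for an SVM (e.g. sklearn.svm.SVC) that accepts or
# rejects candidate contour vertices.
```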

Advanced Seam Finding Algorithm for Stitching of 360 VR Images (개선된 Seam Finder를 이용한 360 VR 이미지 스티칭 기술)

  • Son, Hui-Jeong;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.23 no.5 / pp.656-668 / 2018
  • VR (Virtual Reality) is one of the important research topics in the field of multimedia application systems. The quality of visual data composed from multiple pictures depends on the performance of the stitching technique. A stitching module consists of feature extraction, feature matching, warping, seam finding, and blending. In this paper, we propose a preprocessing scheme that provides an efficient mask for the seam finder. Incorporating the proposed mask removes distortions such as ghosting and blurring in the stitched image. Simulation results show that the proposed algorithm outperforms conventional techniques in both subjective quality and computational complexity.
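
The baseline stitching flow named in this abstract (feature extraction, matching, warping, seam finding, blending) is available end-to-end in OpenCV, as sketched below. The paper's contribution, a preprocessing mask supplied to the seam finder, lives inside this pipeline and is not exposed by the high-level API, so it is not reproduced here; the input file names are assumed.

```python
import cv2

# Assumed input frames from a multi-camera 360 rig
images = [cv2.imread(f) for f in ("cam0.png", "cam1.png", "cam2.png")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", panorama)
else:
    print("stitching failed, status code:", status)
```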

A Method of Detecting Character Data through a Adaboost Learning Method (에이다부스트 학습을 이용한 문자 데이터 검출 방법)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.655-661 / 2017
  • Extracting the character regions contained in input color images is an important task, because characters carry significant information about the content of an image. In this paper, we propose a new method for extracting character regions from input images using MCT features and an AdaBoost algorithm, and then filtering out non-character regions from the candidate regions using geometric features to obtain the actual character regions. Experimental results show that the suggested algorithm accurately extracts character regions from input images. We expect it to be useful in multimedia and image processing applications such as store signboard detection and car license plate recognition.
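
A rough sketch of the MCT-feature plus AdaBoost stage described above, assuming MCT refers to the Modified Census Transform over 3x3 neighborhoods. The window size, placeholder data, and classifier settings are illustrative, and the geometric post-filtering step is omitted.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def mct(gray):
    # Modified Census Transform: each pixel becomes a 9-bit code describing
    # which of its 3x3 neighbours exceed the local neighbourhood mean.
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    patches = np.lib.stride_tricks.sliding_window_view(gray, (3, 3))
    means = patches.mean(axis=(2, 3))
    bits = (patches > means[..., None, None]).reshape(h - 2, w - 2, 9)
    for i in range(9):
        out |= bits[..., i].astype(np.uint16) << i
    return out

# Each training sample would be the flattened MCT code map of a fixed-size
# window; labels: 1 = character region, 0 = background (placeholders below).
X = np.random.randint(0, 512, (400, 14 * 14))
y = np.random.randint(0, 2, 400)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
```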

Robust Detection of Body Areas Using an Adaboost Algorithm (에이다부스트 알고리즘을 이용한 인체 영역의 강인한 검출)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.403-409 / 2016
  • Recently, harmful content such as nude images and photographs has been widely distributed, and there have been various studies on detecting and filtering such content. In this paper, we propose a new method using Haar-like features and an AdaBoost algorithm for robustly extracting navel areas in color images. The suggested algorithm first detects the nipples using color information, then obtains candidate navel areas from the positional information of the extracted nipple areas, and finally selects the real navel regions by filtering with Haar-like features and an AdaBoost classifier. Experimental results show that the suggested algorithm detects navel areas in color images about 1.6 percent more robustly than an existing method. We expect the navel detection algorithm to be useful in application areas related to detecting and filtering 2D or 3D harmful content.
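
A rough sketch of the candidate-generation stage described above: color-based detection followed by a positional guess for the navel region. The color range, offsets, and box size are illustrative assumptions, and the Haar-like plus AdaBoost verification stage is not shown.

```python
import cv2

def color_candidates(bgr):
    # Threshold in YCbCr and return the centers of the detected blobs
    # (the color range below is an assumption, not the paper's values).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 140, 100), (255, 180, 130))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centers

def navel_candidate_box(point_a, point_b, scale=1.2):
    # Assume the navel lies roughly below the midpoint of the two detected
    # points; the verification stage would then accept or reject this box.
    mid_x = (point_a[0] + point_b[0]) // 2
    base_y = max(point_a[1], point_b[1])
    dist = abs(point_a[0] - point_b[0])
    size = int(dist * 0.4)
    return (mid_x - size // 2, base_y + int(dist * scale), size, size)
```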

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems / v.13 no.4 / pp.145-154 / 2008
  • In this paper, we propose a pet robot control system based on hand gesture recognition in image sequences acquired from a camera mounted on the pet robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected in the input images using a skin color model in the HSI color space and connected component analysis. Next, hand shape and motion features are extracted from the image sequences, with the hand shape used to classify meaningful gestures. The hand gesture is then recognized using HMMs (Hidden Markov Models) whose input is the quantized symbol sequence derived from the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. We defined four commands for the pet robot: sit down, stand up, lie flat, and shake hands. Experiments show that a user can control the pet robot through the proposed system.

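A minimal sketch of the recognition step described in the last abstract: one discrete HMM per gesture command, with the command whose model assigns the highest likelihood to the quantized motion-symbol sequence selected. The model parameters here are random placeholders; in practice each gesture's HMM would be trained (e.g., with Baum-Welch) on observed symbol sequences.

```python
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    # Standard HMM forward algorithm over a discrete observation sequence.
    alpha = start_p * emit_p[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
    return np.log(alpha.sum() + 1e-300)

# Placeholder 3-state, 8-symbol models for the four commands.
rng = np.random.default_rng(0)
def random_model():
    start = rng.dirichlet(np.ones(3))
    trans = rng.dirichlet(np.ones(3), size=3)
    emit = rng.dirichlet(np.ones(8), size=3)
    return start, trans, emit

models = {cmd: random_model()
          for cmd in ("sit down", "stand up", "lie flat", "shake hands")}

observed = [0, 1, 1, 3, 7, 2]   # quantized hand-motion symbols (placeholder)
best = max(models, key=lambda cmd: forward_log_likelihood(observed, *models[cmd]))
print("recognised command:", best)
```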