• Title/Summary/Keyword: Histogram Modeling (히스토그램 모델링)

Search Results: 34

Virtual dress up and tuck in Top on Smart Mirror (스마트 거울기반 의상 가상착의와 상의 내어입기)

  • Cho, Jae-Hyeon;Moon, Nam-Mee
    • Annual Conference of KIPS / 2017.11a / pp.1189-1191 / 2017
  • As the word 'smart' has become popular, it is now attached not only to electronic devices but also to mirrors. However, as various functions are added to smart mirrors, the mirror's original purpose of helping the user dress up often becomes a secondary feature, or the mirror shows garments already modeled for an in-store catalog while giving little support for viewing the clothes the user actually owns at home. In this paper, to reduce the inconvenience of repeatedly changing clothes while putting an outfit together, we use OpenCV: when the user is captured in front of the mirror, the user's clothing is extracted from the frame with a foreground extraction algorithm, and a correction step is added to improve the accuracy of the extracted clothing. Morphology filtering is used to reduce contour noise, and CLAHE histogram equalization is applied to sharpen the clothing. In addition, we implement a function that overlays the clothing virtually and, by exploiting the properties of the HSV color model, extracts color regardless of changes in saturation or brightness, so that the top and bottom can be separated and the user can also choose to wear the top untucked.
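
A minimal OpenCV sketch of the correction pipeline described in this abstract: rough foreground extraction, Morphology filtering, CLAHE equalization, and HSV-based color extraction of the top. GrabCut stands in for the paper's foreground-extraction step, and the file name, ROI, kernel size, and hue range are illustrative assumptions rather than the authors' parameters.

```python
import cv2
import numpy as np

# Illustrative input; the paper captures frames from a mirror-mounted camera.
frame = cv2.imread("user_frame.jpg")

# Rough foreground (clothing) mask via GrabCut as a stand-in for the
# paper's foreground-extraction step (assumed ROI around the user).
mask = np.zeros(frame.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
rect = (50, 50, frame.shape[1] - 100, frame.shape[0] - 100)
cv2.grabCut(frame, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

# Morphology filtering to reduce contour noise.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)

# CLAHE equalization on the luminance channel to sharpen the clothing.
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab[:, :, 0] = clahe.apply(lab[:, :, 0])
enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# HSV thresholding on hue only: saturation/value ranges are kept wide so the
# color is picked up regardless of brightness changes (assumed hue range).
hsv = cv2.cvtColor(enhanced, cv2.COLOR_BGR2HSV)
top_mask = cv2.inRange(hsv, (100, 40, 40), (130, 255, 255))  # e.g. a blue top
top = cv2.bitwise_and(enhanced, enhanced, mask=cv2.bitwise_and(top_mask, fg))
```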

Musical Instrument Recognition for the Categorization of UCC Music Source (UCC 음원분류를 위한 연주악기 분류에 대한 연구)

  • Kwon, Soon-Il;Park, Wan-Joo
    • The KIPS Transactions:PartB / v.17B no.2 / pp.107-114 / 2010
  • A guitar, a piano, and a violin are popular musical instruments in User Created Contents (UCC). However, the audio signal patterns generated by a guitar and a piano are too similar to differentiate easily. The difference between the two instruments can be found by analyzing the frequency variation per band near signal peaks. A probability distribution of the existence of signal peaks, based on a cumulative histogram, was applied to musical instrument recognition. Experiments with statistical models of the per-band frequency variation near signal peaks showed a 14% improvement in musical instrument recognition.
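
A hedged sketch of the kind of per-band peak statistic the abstract describes: count, frame by frame, in which frequency bands spectral peaks occur and accumulate a histogram of those occurrences. The frame length, hop size, band count, and peak-height threshold are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.fft import rfft

def peak_band_histogram(audio, frame_len=2048, hop=1024, n_bands=32):
    """Histogram of spectral-peak occurrences per frequency band.

    A rough sketch of the peak statistics the paper builds on; all
    parameters here are illustrative assumptions.
    """
    hist = np.zeros(n_bands)
    n_frames = 0
    for start in range(0, len(audio) - frame_len, hop):
        spectrum = np.abs(rfft(audio[start:start + frame_len]))
        peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1)
        # Map each peak bin to one of n_bands equal-width bands and count it.
        bands = (peaks * n_bands) // len(spectrum)
        np.add.at(hist, bands, 1)
        n_frames += 1
    # Normalise to an empirical probability of a peak appearing in each band.
    return hist / max(n_frames, 1)
```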

TFT-LCD Defect Detection based on Histogram Distribution Modeling (히스토그램 분포 모델링 기반 TFT-LCD 결함 검출)

  • Gu, Eunhye;Park, Kil-Houm;Lee, Jong-Hak;Ryu, Gang-Soo;Kim, Jungjoon
    • Journal of Korea Multimedia Society / v.18 no.12 / pp.1519-1527 / 2015
  • An automatic TFT-LCD defect inspection system, which replaces inspection by a human visual tester, performs pre-processing, candidate defect pixel detection, and recognition and classification through blob analysis. Over-detection of defects places an undue burden on the blob analysis used for recognition and classification. In this paper, we propose a defect detection method based on histogram distribution modeling of the TFT-LCD image that minimizes over-detection of candidate defective pixels. Primary defect candidate pixels are detected by estimating the skewness of the luminance histogram of the background pixels. Based on these detected pixels, defective pixels other than noise pixels are then detected using a histogram distribution model of the local area. Experimental results confirm that the proposed method detects defects well on images containing various types of defects while also reducing the degree of over-detection.
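
A minimal sketch of the skewness idea behind the primary candidate detection: estimate the skewness of the background luminance histogram and look for defect pixels on the correspondingly long tail. The factor k and the one-sided thresholding rule are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import skew

def candidate_defect_mask(gray, k=3.0):
    """Flag primary defect candidates from the luminance histogram.

    A defect-free background should yield a roughly symmetric histogram,
    so the sign of the skewness indicates on which tail (bright or dark)
    defect pixels are likely to sit. The factor k is an assumption.
    """
    values = gray.astype(np.float64).ravel()
    mu, sigma = values.mean(), values.std()
    s = skew(values)
    if s > 0:          # long bright tail -> look for bright defects
        return gray > mu + k * sigma
    else:              # long dark tail -> look for dark defects
        return gray < mu - k * sigma
```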

Fast Text Line Segmentation Model Based on DCT for Color Image (컬러 영상 위에서 DCT 기반의 빠른 문자 열 구간 분리 모델)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD / v.17D no.6 / pp.463-470 / 2010
  • We presented a very fast and robust text line segmentation method based on the DCT blocks of a color image, requiring neither decompression nor binarization. Using the DC coefficient and three primary AC coefficients from each DCT block, we created a gray-scale image reduced in size by a factor of 8x8. To detect and locate the white strips between text lines, we analyzed the horizontal and vertical projection profiles of this image and applied a Markov model to recover missing white strips by estimating their hidden periodicity. We presented performance results showing that our method was 40-100 times faster than the traditional method.
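
A small sketch, assuming recomputed DCT coefficients, of the reduced-image construction and the projection profile used to find white strips; in the paper the coefficients are taken directly from the compressed stream, and the Markov-model recovery of missing strips is not reproduced here.

```python
import numpy as np
import cv2

def dct_dc_image(gray):
    """Build a 1/8-scale gray image from the DC term of each 8x8 DCT block.

    In the paper the coefficients come straight from the compressed stream;
    here they are recomputed with cv2.dct purely for illustration.
    """
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    g = gray[:h, :w].astype(np.float32)
    small = np.zeros((h // 8, w // 8), np.float32)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            # DC coefficient divided by 8 approximates the block mean.
            small[i // 8, j // 8] = cv2.dct(g[i:i + 8, j:j + 8])[0, 0] / 8.0
    return small

def white_strip_rows(small, ratio=0.9):
    """Candidate inter-line white strips from the horizontal projection profile.
    The threshold ratio is an assumed parameter."""
    profile = small.sum(axis=1)
    return np.where(profile > ratio * profile.max())[0]
```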

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points using statistical models of shape and texture information based on PCA (Principal Component Analysis). The method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to its initial values, and its detection error increases when an input image differs considerably from the training data. In particular, the algorithm is accurate for closed lips, but the detection error grows for open or deformed lips, depending on the user's facial expression. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, we select a search region based on the facial feature points detected by the AAM algorithm. Lip corner points are then extracted with Canny edge detection and histogram projection within the selected search region. The lip region is then accurately detected by combining the color and edge information of the lips within a search region adjusted to the positions of the detected lip corners. Based on this, both the accuracy and the processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
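
A simplified sketch of the lip corner step (Canny edges plus projection profiles inside a mouth search region); the Canny thresholds and the 10% projection cutoff are illustrative assumptions.

```python
import cv2
import numpy as np

def lip_corners(roi_gray):
    """Estimate left/right lip corners in a mouth ROI via Canny + projection.

    A simplified sketch of the idea described above; thresholds are
    illustrative assumptions, not the paper's values.
    """
    edges = cv2.Canny(roi_gray, 50, 150)
    col_profile = edges.sum(axis=0).astype(np.float64)   # vertical projection
    row_profile = edges.sum(axis=1).astype(np.float64)   # horizontal projection
    cols = np.where(col_profile > 0.1 * col_profile.max())[0]
    rows = np.where(row_profile > 0.1 * row_profile.max())[0]
    if len(cols) == 0 or len(rows) == 0:
        return None
    y = int(rows.mean())                               # approximate lip centre
    return (int(cols[0]), y), (int(cols[-1]), y)       # left and right corners
```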

Feature based Pre-processing Method to compensate color mismatching for Multi-view Video (다시점 비디오의 색상 성분 보정을 위한 특징점 기반의 전처리 방법)

  • Park, Sung-Hee;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2527-2533 / 2011
  • In this paper, we propose a new pre-processing algorithm for multi-view video coding that performs color compensation based on image features. Multi-view images show differences between neighboring frames due to illumination and differing camera characteristics. To compensate for this color difference, we first model the characteristics of each camera from the features of its frames and then correct the color difference. To extract corresponding features from each frame, we use the Harris corner detection algorithm, and the characteristic coefficients used in the model are estimated with the Gauss-Newton algorithm. In this algorithm, each RGB component of the target images is compensated separately against the reference image. Experimental results on many test images show that the proposed algorithm performed better than the histogram-based algorithm, with up to 14% bit reduction and a 0.5 dB to 0.8 dB PSNR improvement.
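
A rough sketch of the feature-based compensation idea: sample Harris corner points and fit a per-channel gain/offset model by least squares, standing in for the paper's camera model and its Gauss-Newton estimation. For simplicity the same pixel coordinates are sampled in both views, whereas a real system would match corresponding features between views.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def color_compensate(target, reference, max_corners=500):
    """Per-channel gain/offset compensation fitted at Harris corner points.

    A simplified linear stand-in for the paper's camera model; the actual
    model and its Gauss-Newton estimation may differ.
    """
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(ref_gray, max_corners, 0.01, 10,
                                  useHarrisDetector=True)
    pts = pts.reshape(-1, 2).astype(int)
    out = target.astype(np.float64).copy()
    for c in range(3):   # fit each color channel separately
        # Illustrative simplification: sample identical coordinates in both
        # views instead of matched feature pairs.
        t = target[pts[:, 1], pts[:, 0], c].astype(np.float64)
        r = reference[pts[:, 1], pts[:, 0], c].astype(np.float64)
        res = least_squares(lambda p: p[0] * t + p[1] - r, x0=[1.0, 0.0])
        out[:, :, c] = res.x[0] * target[:, :, c] + res.x[1]
    return np.clip(out, 0, 255).astype(np.uint8)
```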

Automation of Building Extraction and Modeling Using Airborne LiDAR Data (항공 라이다 데이터를 이용한 건물 모델링의 자동화)

  • Lim, Sae-Bom;Kim, Jung-Hyun;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.619-628 / 2009
  • LiDAR enables rapid data acquisition and provides useful information for reconstructing the surface of the Earth. However, extracting information from LiDAR data is not an easy task because the data consist of irregularly distributed 3D point clouds and lack semantic and visual information. This paper proposed methods for automatic building extraction and detailed 3D modeling using airborne LiDAR data. As preprocessing, noise and unnecessary data were removed by iterative surface fitting, and ground and non-ground data were then classified by histogram analysis. Building footprints were extracted by tracing points on the building boundaries, and refined footprints were obtained by regularization based on building hypotheses. The accuracy of the building footprints was evaluated by comparison with 1:1,000 digital vector maps; the horizontal RMSE was 0.56 m for the test areas. Finally, a method for 3D modeling of roof superstructures was developed. Statistical and geometric information of the LiDAR data on the building roofs was analyzed to segment the data and determine roof shapes. The superstructures on the roof were modeled by 3D analytical functions derived by the least-squares method. The accuracy of the 3D modeling was estimated using simulation data; the RMSEs were 0.91 m, 1.43 m, 1.85 m, and 1.97 m for flat, sloped, arch, and dome shapes, respectively. The methods developed in this study show that the 3D building modeling process can be automated effectively.
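
A coarse sketch of one way the histogram-based ground/non-ground classification could look, assuming an elevation histogram: take the dominant low-elevation bin as the ground level and treat points clearly above it as non-ground. The bin size and the 2 m margin are assumptions.

```python
import numpy as np

def split_ground(points, bin_size=0.2):
    """Separate ground from non-ground points with an elevation histogram.

    points is an (N, 3) array of x, y, z coordinates. The most populated
    elevation bin is taken as the ground level; the 2 m margin is assumed.
    """
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=bins)
    ground_z = edges[np.argmax(counts)]          # dominant elevation bin
    ground = points[z < ground_z + 2.0]          # assumed ground band
    non_ground = points[z >= ground_z + 2.0]     # buildings, vegetation, etc.
    return ground, non_ground
```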

3D Modeling from 2D Stereo Image using 2-Step Hybrid Method (2단계 하이브리드 방법을 이용한 2D 스테레오 영상의 3D 모델링)

  • No, Yun-Hyang;Go, Byeong-Cheol;Byeon, Hye-Ran;Yu, Ji-Sang
    • Journal of KIISE:Software and Applications / v.28 no.7 / pp.501-510 / 2001
  • Generally, accurate disparity estimation is essential for 3D modeling from stereo images. Because existing methods calculate disparities over the whole image, they require too much computation time and suffer from mismatching. In this article, exploiting the fact that disparity vectors are not distributed evenly over the whole image but concentrate around the background and the object, we apply a wavelet transform to the stereo images and, in the first step, estimate coarse disparity fields from the reduced low-pass band using an area-based method. From these coarse disparity vectors we generate a disparity histogram and use it to separate the object from the background. We then restore only the object area to the original resolution and estimate dense, accurate disparity with the second-step pixel-based method, which uses the second gradient rather than pixel brightness. We also extract feature points from the separated object area and estimate depth information from the disparity vectors and camera parameters. Finally, we generate a 3D model from the feature points and their z coordinates. With the proposed method, we can considerably reduce computation time and estimate precise disparity through the additional pixel-based step using a LoG filter. Furthermore, the proposed foreground/background separation can solve the mismatching problem of existing Delaunay triangulation and generate an accurate 3D model.
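
A rough sketch of the first (area-based) step and the disparity-histogram separation, under stated assumptions: PyWavelets supplies the low-pass band, OpenCV's StereoBM stands in for the paper's area-based matcher, and the foreground is taken as pixels above the median disparity, which is an assumed rule rather than the authors'.

```python
import numpy as np
import cv2
import pywt

def coarse_disparity_split(left_gray, right_gray, num_disp=32, block=9):
    """Coarse disparity on the wavelet low-pass band, then a histogram split."""
    # Low-pass (LL) band of a single-level 2D wavelet transform.
    ll_left, _ = pywt.dwt2(left_gray.astype(np.float32), "haar")
    ll_right, _ = pywt.dwt2(right_gray.astype(np.float32), "haar")
    l8 = cv2.normalize(ll_left, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    r8 = cv2.normalize(ll_right, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Area-based coarse disparity on the reduced images (StereoBM stand-in).
    matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp = matcher.compute(l8, r8).astype(np.float32) / 16.0

    # Disparity histogram: foreground = larger-disparity (closer) pixels.
    valid = disp[disp > 0]
    threshold = np.median(valid) if valid.size else 0
    return disp, disp > threshold
```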

Real-time passive millimeter wave image segmentation for concealed object detection (은닉 물체 검출을 위한 실시간 수동형 밀리미터파 영상 분할)

  • Lee, Dong-Su;Yeom, Seok-Won;Lee, Mun-Kyo;Jung, Sang-Won;Chang, Yu-Shin
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.2C / pp.181-187 / 2012
  • Millimeter wave (MMW) radiation readily penetrates fabrics, so it can be used to detect objects concealed under clothing. A passive MMW imaging system can operate as a stand-off sensor that scans people both indoors and outdoors. However, because of the diffraction limit and low signal level, the imaging system often suffers from low image quality, so suitable statistical analysis and computational processing are required for automatic analysis of the images. In this paper, real-time concealed object detection is addressed by means of multi-level segmentation. The histogram of the image is modeled with a Gaussian mixture distribution, and hidden object areas are segmented by a multi-level scheme involving k-means, the expectation-maximization algorithm, and a decision rule. The complete algorithm has been implemented in a C++ environment on a standard computer for real-time processing. Experimental and simulation results confirm that the implemented system can detect concealed objects in real time.
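
A compact sketch of the segmentation pipeline the abstract describes, using scikit-learn (the paper's implementation is in C++): pixel intensities are modeled as a Gaussian mixture fitted by EM with k-means initialization, and each pixel is labeled by its most likely component. The rule picking the object component is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_mmw(image, n_levels=3):
    """Multi-level segmentation of a passive MMW image via a Gaussian mixture.

    scikit-learn initialises the mixture with k-means and fits it by EM,
    mirroring the scheme described above in a simplified form.
    """
    x = image.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_levels, init_params="kmeans",
                          random_state=0).fit(x)
    labels = gmm.predict(x).reshape(image.shape)
    # Assumed decision rule: the component with the highest mean corresponds
    # to the concealed-object region (brighter pixels).
    object_label = int(np.argmax(gmm.means_.ravel()))
    return labels, labels == object_label
```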

Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.13B no.7 s.110 / pp.653-662 / 2006
  • In ubiquitous computing, which aims to build environments that provide appropriate services according to the user's context, emotion recognition based on facial expression is an essential means of HCI: it makes human-machine interaction more efficient and supports user context-awareness. This paper addresses the problem of basic emotion recognition from context-sensitive facial expressions through a new Bayesian classifier. The emotion recognition task consists of two steps: facial feature extraction based on a color-histogram method, and classification with a new Bayesian learning algorithm for efficient training and testing. A new context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), is proposed to recognize emotions more exactly, as it uses different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled as a hidden context.
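
A minimal sketch of the two-step pipeline (color-histogram feature extraction followed by a Bayesian classifier); a plain Gaussian naive Bayes model stands in for the paper's context-sensitive EADF learner, and the hue channel, bin count, and helper names are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def hue_histogram_feature(face_bgr, bins=32):
    """Color-histogram feature for a face image: a normalised hue histogram.
    The choice of the hue channel and 32 bins is an illustrative assumption."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    return hist / (hist.sum() + 1e-9)

def train_emotion_classifier(train_faces, train_labels):
    """Fit a simple Bayesian classifier on color-histogram features.

    GaussianNB is only a stand-in for the EADF learner; train_faces and
    train_labels are assumed to be provided by the caller.
    """
    features = np.array([hue_histogram_feature(f) for f in train_faces])
    return GaussianNB().fit(features, train_labels)
```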