• Title/Summary/Keyword: Color-histogram


Real-time Ball Detection and Tracking with P-N Learning in Soccer Game (P-N 러닝을 이용한 실시간 축구공 검출 및 추적)

  • Huang, Shuai-Jie; Li, Gen; Lee, Yill-Byung
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.447-450 / 2011
  • This paper presents an application of the P-N Learning [4] method to soccer ball detection, together with improvements that increase processing speed. In P-N learning, the learning process is guided by positive (P) and negative (N) constraints that restrict the labeling of unlabeled data, identify examples that have been classified in contradiction with the structural constraints, and augment the training set with the corrected samples in an iterative process. For the long-shot views in a soccer game, however, P-N learning produces so many ferns that it spends more time than other methods. We propose constructing a color histogram of each frame to remove unnecessary detail and thereby decrease the number of feature points. We also use a mask to eliminate the gallery region, apply the Hough line transform to remove field lines, and adjust the P-N learning parameters to optimize accuracy and speed.
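The abstract names two pre-processing steps that cut down the number of feature points handed to the P-N learner: a per-frame color histogram used to suppress the dominant (field) color, and a Hough line transform used to remove pitch lines. The sketch below is a minimal OpenCV illustration of those two steps under stated assumptions; the hue-bin count, the dominant-hue masking rule, and the Hough parameters are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def reduce_feature_regions(frame_bgr):
    """Suppress the dominant field color and straight field lines before feature detection.

    Returns a binary mask: 255 where features may be sampled, 0 elsewhere.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Per-frame hue histogram; the most populated bin is assumed to be the pitch.
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    dominant_hue = int(np.argmax(hist))

    # Mask out pixels close to the dominant (field) hue.
    hue = hsv[:, :, 0].astype(np.int16)
    field = (np.abs(hue - dominant_hue) < 10).astype(np.uint8) * 255
    keep = cv2.bitwise_not(field)

    # Remove long straight lines (pitch markings) with the probabilistic Hough transform.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(keep, (x1, y1), (x2, y2), 0, thickness=5)
    return keep
```

A caller would intersect this mask with the detector's candidate feature points so that only points inside the kept region are passed to the P-N ferns.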

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung; Kang, Byung-Jun; Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points using statistical models of shape and texture built with PCA (Principal Component Analysis). It is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to its initial values, and the detection error increases when an input image differs greatly from the training data. In particular, the algorithm is accurate for closed lips, but the detection error grows for open or deformed lips as the user's facial expression changes. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, a search region is selected based on the facial feature points detected by the AAM algorithm, and the lip corner points are extracted within this region using Canny edge detection and histogram projection. The lip region is then detected accurately by combining the color and edge information of the lips within the search region, which is adjusted according to the positions of the detected lip corners. On this basis, both the accuracy and the processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared with using the AAM algorithm alone.
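The abstract locates the lip corners inside an AAM-derived search region by combining Canny edges with a histogram projection. Below is a minimal sketch of that idea; the projection-threshold fraction and the choice of the leftmost/rightmost active columns as corner positions are assumptions made for illustration, not the paper's exact rule.

```python
import cv2
import numpy as np

def find_lip_corners(roi_gray):
    """Estimate left/right lip corners inside a mouth search region.

    roi_gray: grayscale mouth region selected from AAM face feature points.
    Returns ((x_left, y_left), (x_right, y_right)) in ROI coordinates, or None.
    """
    edges = cv2.Canny(roi_gray, 50, 150)

    # Histogram projection: count edge pixels per column.
    col_proj = edges.sum(axis=0)
    if col_proj.max() == 0:
        return None

    # Columns with enough edge evidence are considered part of the mouth.
    active = np.where(col_proj > 0.2 * col_proj.max())[0]
    x_left, x_right = int(active[0]), int(active[-1])

    # Use the strongest edge row at each corner column as its vertical position.
    y_left = int(np.argmax(edges[:, x_left]))
    y_right = int(np.argmax(edges[:, x_right]))
    return (x_left, y_left), (x_right, y_right)
```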

Coated Tongue Region Extraction using the Fluorescence Response of the Tongue Coating by Ultraviolet Light Source (설태의 자외선 형광 반응을 이용한 설태 영역 추출)

  • Choi, Chang-Yur; Lee, Woo-Beom; Hong, You-Sik; Nam, Dong-Hyun; Lee, Sang-Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.4 / pp.181-188 / 2012
  • An effective method for extracting the coated tongue region, which serves as a diagnostic criterion in tongue diagnosis, is proposed in this paper. The proposed method uses the fluorescence response of the tongue coating under an ultraviolet light source. In particular, it resolves earlier problems, including the constraints of the diagnostic environment and the limited objectivity of the diagnostic results. In our method, the original tongue image is acquired under ultraviolet light, and binarization is performed by thresholding at a valley point of the histogram that corresponds to the color difference between the tongue body and the tongue coating. The final view image is presented to the oriental-medicine doctor after the Canny edge algorithm is applied to the binary image and the resulting edge image is added to the original image. To evaluate the performance of the proposed method, we built a set of varied tongue images and compared the true coated-tongue regions marked by hand by an oriental-medicine doctor with the regions extracted by our method. The proposed method achieved an average extraction ratio of 87.87%, and the shapes of the extracted coated-tongue regions also showed significantly high similarity.
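The pipeline described above is a valley-point binarization followed by a Canny-edge overlay on the original image. The sketch below is one plausible reading of that pipeline: the valley is taken as the histogram minimum between its two largest peaks, and the overlay simply paints the edge pixels green. Both choices are assumptions for illustration.

```python
import cv2
import numpy as np

def extract_coating_view(uv_bgr):
    """Binarize at a histogram valley and overlay the region outline on the UV image."""
    gray = cv2.cvtColor(uv_bgr, cv2.COLOR_BGR2GRAY)

    # Smoothed intensity histogram.
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(9) / 9.0, mode="same")

    # Assume two dominant modes (tongue body vs. coating); threshold at the
    # valley (minimum) between the two highest, well-separated peaks.
    peaks = np.argsort(hist)[::-1]
    p1 = int(peaks[0])
    p2 = int(next(p for p in peaks if abs(int(p) - p1) > 20))
    lo, hi = sorted((p1, p2))
    valley = lo + int(np.argmin(hist[lo:hi + 1]))

    _, binary = cv2.threshold(gray, valley, 255, cv2.THRESH_BINARY)

    # Outline the binarized coating region on the original image for the doctor.
    edges = cv2.Canny(binary, 100, 200)
    view = uv_bgr.copy()
    view[edges > 0] = (0, 255, 0)
    return binary, view
```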

Emotion-based Video Scene Retrieval using Interactive Genetic Algorithm (대화형 유전자 알고리즘을 이용한 감성기반 비디오 장면 검색)

  • Yoo Hun-Woo; Cho Sung-Bae
    • Journal of KIISE:Computing Practices and Letters / v.10 no.6 / pp.514-528 / 2004
  • An emotion-based video scene retrieval algorithm is proposed in this paper. First, abrupt and gradual shot boundaries are detected in a video clip representing a specific story. Then, five video features ('average color histogram', 'average brightness', 'average edge histogram', 'average shot duration', and 'gradual change rate') are extracted from each video, and the mapping between these features and the emotional space the user has in mind is obtained by an interactive genetic algorithm. Once videos containing the target emotion have been selected from the initial population, the feature vectors of the selected videos are treated as chromosomes and a genetic crossover is applied to them. Next, the chromosomes produced by the crossover are compared with the feature vectors of the database videos using a similarity function, and the most similar videos are taken as the solutions of the next generation. By iterating this procedure, a population of videos matching the emotion the user has in mind is retrieved. To show the validity of the proposed method, six emotion categories ('action', 'excitement', 'suspense', 'quietness', 'relaxation', and 'happiness') were used in the experiments. Over 300 commercial videos, the retrieval results show an average effectiveness of 70%.
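The retrieval loop treats the five-dimensional feature vectors of the selected videos as chromosomes, crosses them over, and ranks database videos by similarity to the offspring. A minimal numpy sketch of one generation is shown below; single-point crossover and Euclidean similarity are assumed operators, since the abstract does not specify them.

```python
import numpy as np

def next_generation(selected, database, n_results=10, rng=None):
    """One interactive-GA iteration over video feature vectors.

    selected : (k, 5) array - feature vectors of videos matching the target emotion
    database : (n, 5) array - feature vectors of all videos in the database
    Returns indices of the database videos most similar to the offspring.
    """
    if rng is None:
        rng = np.random.default_rng()

    # Single-point crossover between random pairs of selected chromosomes.
    offspring = []
    for _ in range(len(selected)):
        a, b = selected[rng.integers(len(selected), size=2)]
        cut = rng.integers(1, selected.shape[1])
        offspring.append(np.concatenate([a[:cut], b[cut:]]))
    offspring = np.array(offspring)

    # Rank database videos by distance to the closest offspring chromosome.
    dists = np.linalg.norm(database[:, None, :] - offspring[None, :, :], axis=2)
    score = dists.min(axis=1)
    return np.argsort(score)[:n_results]
```

The returned videos would be shown to the user, whose next selection seeds the following generation, which is the interactive part of the algorithm.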

Assessment of Fire-Damaged Mortar using Color image Analysis (색도 이미지 분석을 이용한 화재 피해 모르타르의 손상 평가)

  • Park, Kwang-Min; Lee, Byung-Do; Yoo, Sung-Hun; Ham, Nam-Hyuk; Roh, Young-Sook
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.23 no.3 / pp.83-91 / 2019
  • The purpose of this study is to assess fire-damaged concrete structures using a digital camera and image-processing software. To simulate fire damage, mortar and paste samples with W/C = 0.5 (normal strength) and 0.3 (high strength) were placed in an electric furnace and heated from $100^{\circ}C$ to $1000^{\circ}C$. The paste was ground into powder to measure CIELAB chromaticity, and the samples were photographed with a digital camera; the RGB chromaticity was measured with color-intensity analysis software. The residual compressive strengths of the W/C = 0.5 and 0.3 samples were 87.2% and 86.7% at a heating temperature of $400^{\circ}C$. Strength dropped sharply above $500^{\circ}C$, however, where the W/C = 0.5 and 0.3 samples retained 55.2% and 51.9% of their strength. At $700^{\circ}C$ or higher, the W/C = 0.5 and 0.3 samples retained only 26.3% and 27.8%, so the durability of the structure could not be secured. The $L^*a^*b^*$ color analysis shows that $b^*$ increases rapidly after $700^{\circ}C$, indicating that the yellow component becomes strong beyond that temperature. Furthermore, the RGB analysis found that the histogram kurtosis and the frequencies of red and green increase after $700^{\circ}C$, indicating that the numbers of red and green pixels increase. Therefore, it appears possible to estimate the degree of damage by examining the change in yellow ($b^*$ or R+G) when analyzing the chromaticity of fire-damaged concrete structures.
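The damage indicators discussed above are a jump in $b^*$ (yellowness) and in the kurtosis of the red/green histograms after $700^{\circ}C$. The sketch below computes those two quantities for a photograph of a sample; note that OpenCV stores 8-bit Lab with a* and b* offset by 128, which the code undoes, and the use of scipy's (Fisher) kurtosis is an assumption rather than the paper's exact measure.

```python
import cv2
import numpy as np
from scipy.stats import kurtosis

def yellowness_and_kurtosis(sample_bgr):
    """Return mean b* (CIELAB yellowness) and kurtosis of the R and G channel histograms."""
    lab = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2LAB)
    # 8-bit OpenCV Lab stores b* shifted by +128; undo the shift.
    b_star = lab[:, :, 2].astype(np.float32) - 128.0

    blue, green, red = cv2.split(sample_bgr)
    red_kurt = kurtosis(red.ravel())
    green_kurt = kurtosis(green.ravel())

    return float(b_star.mean()), float(red_kurt), float(green_kurt)
```

Comparing these values against an unheated reference sample would mimic the before/after comparison in the study; the thresholds that map them to a damage level would have to come from calibration data such as the heating experiments reported in the paper.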

Development of Preliminary Quality Assurance Software for $GafChromic^{(R)}$ EBT2 Film Dosimetry ($GafChromic^{(R)}$ EBT2 Film Dosimetry를 위한 품질 관리용 초기 프로그램 개발)

  • Park, Ji-Yeon; Lee, Jeong-Woo; Choi, Kyoung-Sik; Hong, Semie; Park, Byung-Moon; Bae, Yong-Ki; Jung, Won-Gyun; Suh, Tae-Suk
    • Progress in Medical Physics / v.21 no.1 / pp.113-119 / 2010
  • Software for GafChromic EBT2 film dosimetry was developed in this study. The software provides film calibration functions based on color channels, categorized as red, green, blue, and gray. Corrections for the light scattering of a flat-bed scanner and for thickness differences of the active layer can be evaluated. Dosimetric results from EBT2 films can be compared with those from the treatment planning system ECLIPSE or the two-dimensional ionization chamber array MatriXX. Dose verification using EBT2 films is implemented through the following procedures: file import, noise filtering, background correction and active-layer correction, dose calculation, and evaluation. Relative and absolute background corrections can be applied selectively. The calibration results and the fitting equation for the sensitometric curve are exported to files. After the two dose matrices are aligned through interpolation of the spatial pixel spacing, interactive translation, and rotation, their profiles and isodose curves are compared. In addition, the gamma index and gamma histogram are analyzed according to the chosen distance-to-agreement and dose-difference criteria. Performance was evaluated by dose verification in a $60^{\circ}$ enhanced dynamic wedged field and in intensity-modulated (IM) beams for prostate cancer. With 3 mm and 3% gamma criteria, all pass ratios for the two types of tests exceeded 99%. The software was developed for routine periodic quality assurance and for complex IM beam verification, and it can also be used as a dedicated radiochromic film software tool for analyzing dose distributions.
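The evaluation step described above is a gamma analysis with distance-to-agreement and dose-difference criteria. A brute-force 2D global-gamma computation is sketched below as a minimal illustration (3%/3 mm by default); the real software's search strategy, interpolation, and normalization are not described in the abstract, so those details are assumptions.

```python
import numpy as np

def gamma_index(reference, measured, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    """Brute-force global 2D gamma; both dose grids share the same shape and pixel spacing."""
    dose_tol = dose_pct / 100.0 * reference.max()   # global dose-difference criterion
    search = int(np.ceil(2 * dta_mm / spacing_mm))  # local search radius in pixels
    rows, cols = reference.shape
    gamma = np.full_like(reference, np.inf, dtype=float)

    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - search), min(rows, i + search + 1)
            j0, j1 = max(0, j - search), min(cols, j + search + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            dist2 = ((ii - i) ** 2 + (jj - j) ** 2) * spacing_mm ** 2
            diff2 = (measured[i0:i1, j0:j1] - reference[i, j]) ** 2
            g2 = dist2 / dta_mm ** 2 + diff2 / dose_tol ** 2
            gamma[i, j] = np.sqrt(g2.min())
    return gamma

# Pass ratio and gamma histogram, as reported in the evaluation:
# g = gamma_index(plan_dose, film_dose, spacing_mm=1.0)
# pass_ratio = (g <= 1.0).mean()
# hist, edges = np.histogram(g, bins=50, range=(0, 3))
```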

A Study on Features Analysis for Retrieving Image Containing Personal Information on the Web (인터넷상에서 개인식별정보가 포함된 영상 검색을 위한 특징정보 분석에 관한 연구)

  • Kim, Jong-Bae
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.3 / pp.91-101 / 2011
  • The internet has become increasingly popular owing to the rapid development of information and communication technology, enabling convenient social activities such as the mutual exchange of information, e-commerce, and internet banking in cyberspace. Along with this convenience, however, personal IDs (identity cards, driving licenses, passports, student IDs, etc.) represented in electronic media are frequently exposed on the internet. This study therefore proposes a feature extraction method for analyzing the characteristics of image files containing personal information, and an image retrieval method for finding such images using the extracted features. The proposed method selects feature information from the color, texture, and shape of the images, and images are retrieved by similarity analysis between feature vectors. In experiments on images acquired from a web-based image DB, the correct retrieval rate was 89% and the computing time per frame was 0.17 seconds. The proposed method can be applied efficiently to systems that search for image files containing personal information and that determine criteria for the exposure of personal information.
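The retrieval step compares color, texture, and shape features between a query image and database images. A minimal sketch follows, using an HSV color histogram for color, a gradient-orientation histogram for texture, and Hu moments for shape, compared by cosine similarity. These concrete descriptor choices are assumptions for illustration, not the descriptors used in the paper.

```python
import cv2
import numpy as np

def id_image_features(img_bgr):
    """Concatenate simple color, texture, and shape descriptors into one normalized vector."""
    # Color: 2D hue/saturation histogram.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    color = cv2.calcHist([hsv], [0, 1], None, [18, 8], [0, 180, 0, 256]).ravel()

    # Texture: coarse histogram of gradient orientations (edge-histogram style).
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture, _ = np.histogram(np.arctan2(gy, gx), bins=8, range=(-np.pi, np.pi))

    # Shape: Hu moments of the Otsu-binarized image.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    shape = cv2.HuMoments(cv2.moments(binary)).ravel()

    feat = np.concatenate([color, texture.astype(np.float32), shape])
    return feat / (np.linalg.norm(feat) + 1e-9)

def similarity(query_feat, db_feat):
    """Cosine similarity between normalized feature vectors."""
    return float(np.dot(query_feat, db_feat))
```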

Less Informative Region Extraction for Automatically Advertisement Insertion in Sports Image (스포츠 영상 내 자동적인 광고 삽입을 위한 저정보영역 추출)

  • Jung, Jae-Young; Kim, Young-Kab
    • Journal of Digital Contents Society / v.16 no.4 / pp.615-622 / 2015
  • Recently, virtual advertising has become an important area of interest in the TV market because it is easy to apply and reduces cost. There are two ways to insert virtual advertising into a broadcast: up-link insertion, in which the image is inserted through the broadcasting station's production equipment and requires dispatching equipment and technical personnel to the shoot, and down-stream insertion, in which a virtual image is inserted automatically into the relayed video using image-processing technology. In recent years, image processing has therefore become an important research area for the automatic insertion of advertising images. In this paper, we propose a method to extract less-informative regions in sports video using image processing. The proposed method extracts less-informative regions through rectangle detection based on the Hough transform and analysis of the color histogram distribution.
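The proposed pipeline combines Hough-based rectangle detection with an analysis of how concentrated the color histogram is inside each candidate region. The sketch below scores candidate rectangles by the entropy of their hue histogram (low entropy meaning few colors and therefore little visual information), which is one plausible reading of "color histogram distribution analysis"; the entropy measure and threshold are assumptions.

```python
import cv2
import numpy as np

def histogram_entropy(region_bgr, bins=32):
    """Shannon entropy of the hue histogram; low values indicate uniform, low-information regions."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
    p = hist / (hist.sum() + 1e-9)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_candidate_rectangles(frame_bgr, rects, max_entropy=2.0):
    """Keep candidate rectangles (x, y, w, h) whose color content is nearly uniform."""
    keep = []
    for (x, y, w, h) in rects:
        roi = frame_bgr[y:y + h, x:x + w]
        ent = histogram_entropy(roi)
        if ent < max_entropy:
            keep.append(((x, y, w, h), ent))
    # Lowest-entropy (least informative) regions first.
    return sorted(keep, key=lambda item: item[1])
```

The `rects` argument would come from the Hough-transform rectangle detection mentioned in the abstract (for example, intersections of near-horizontal and near-vertical line segments); that stage is omitted here for brevity.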

Medical Image Automatic Annotation Using Multi-class SVM and Annotation Code Array (다중 클래스 SVM과 주석 코드 배열을 이용한 의료 영상 자동 주석 생성)

  • Park, Ki-Hee; Ko, Byoung-Chul; Nam, Jae-Yeal
    • The KIPS Transactions:PartB / v.16B no.4 / pp.281-288 / 2009
  • This paper proposes a novel algorithm for the efficient classification and annotation of medical images, especially X-ray images. Since X-ray images have a bright foreground against a dark background, visual descriptors different from those used for general natural images need to be extracted. In this paper, a Color Structure Descriptor (CSD) based on the Harris corner detector is extracted only from salient points, and an Edge Histogram Descriptor (EHD) is used as a textural feature of the image. These two feature vectors are then fed to a multi-class Support Vector Machine (SVM) to classify the image into one of 20 categories. Finally, each image receives an Annotation Code Array based on the pre-defined hierarchical relations of the categories and a priority code order, from which several optimal keywords are assigned. Our experiments show that the annotation results outperform those of other methods.
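The classification stage described above feeds the CSD and EHD vectors to a multi-class SVM and then maps the predicted category to keywords through an annotation code array. A minimal scikit-learn sketch is below; the feature vectors are assumed to be precomputed, and the toy code table is purely illustrative, not the paper's hierarchy of 20 categories.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical mapping from category id to an ordered array of annotation codes.
ANNOTATION_CODES = {
    0: ["chest", "x-ray", "frontal"],
    1: ["hand", "x-ray", "bone"],
    # ... one entry per category (20 in the paper)
}

def train_annotator(csd_features, ehd_features, labels):
    """Train one multi-class SVM on concatenated CSD + EHD descriptors."""
    X = np.hstack([csd_features, ehd_features])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovr")
    clf.fit(X, labels)
    return clf

def annotate(clf, csd_vec, ehd_vec, top_k=3):
    """Predict the category and return its highest-priority annotation keywords."""
    x = np.hstack([csd_vec, ehd_vec]).reshape(1, -1)
    category = int(clf.predict(x)[0])
    return ANNOTATION_CODES.get(category, [])[:top_k]
```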

Partial Image Retrieval Using an Efficient Pruning Method (효율적인 Pruning 기법을 이용한 부분 영상 검색)

  • 오석진; 오상욱; 김정림; 문영식; 설상훈
    • Journal of Broadcast Engineering / v.7 no.2 / pp.145-152 / 2002
  • As the number of digital images available to users grows exponentially with the rapid development of digital technology, content-based image retrieval (CBIR) has become one of the most active research areas. A variety of image retrieval methods have been proposed in which, given an input query image, images similar to the query are retrieved from an image database based on low-level features such as color and texture. However, most existing retrieval methods do not consider the case in which the query image is a part of a whole image in the database, because of the high complexity of partial matching. In this paper, we present an efficient method for partial image matching that uses the histogram distribution relationships between the query image and the whole image. The proposed approach consists of two steps: the first step prunes the search space, and the second step performs block-based retrieval using partial image matching to rank the images in the candidate set. The experimental results demonstrate the feasibility of the proposed algorithm, given that the response time of the system is very high when retrieval is performed with partial image matching alone, without pruning the search space.
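The two-step retrieval above rests on the observation that, if the query is a crop of a database image, every bin of the query's histogram should be (approximately) bounded by the corresponding bin of the whole image's histogram. Below is a minimal sketch of a pruning test built on that containment relation, followed by a coarse block-based match; the 0.9 containment threshold and the block stride are assumptions.

```python
import cv2
import numpy as np

def hue_hist(img_bgr, bins=32):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()

def passes_pruning(query_hist, whole_hist, containment=0.9):
    """A part's raw histogram counts should be (nearly) contained in the whole image's counts."""
    overlap = np.minimum(query_hist, whole_hist).sum()
    return overlap >= containment * query_hist.sum()

def block_match_score(query_bgr, whole_bgr, stride=32):
    """Best histogram-intersection score of the query against sliding windows of the whole image."""
    qh = hue_hist(query_bgr)
    qh = qh / (qh.sum() + 1e-9)
    best = 0.0
    H, W = whole_bgr.shape[:2]
    qh_h, qh_w = query_bgr.shape[:2]
    for y in range(0, H - qh_h + 1, stride):
        for x in range(0, W - qh_w + 1, stride):
            wh = hue_hist(whole_bgr[y:y + qh_h, x:x + qh_w])
            wh = wh / (wh.sum() + 1e-9)
            best = max(best, float(np.minimum(qh, wh).sum()))
    return best
```

Database images that fail `passes_pruning` are discarded cheaply; only the surviving candidate set is ranked with the more expensive `block_match_score`, which mirrors the two-step structure described in the abstract.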