• Title/Summary/Keyword: color images


Design of discriminant function for thick and thin coating from the white coating (백태 중 후태 및 박태 분류 판별함수 설계)

  • Choi, Eun-Ji;Kim, Keun-Ho;Ryu, Hyun-Hee;Lee, Hae-Jung;Kim, Jong-Yeol
    • Korean Journal of Oriental Medicine
    • /
    • v.13 no.3
    • /
    • pp.119-124
    • /
    • 2007
  • Introduction: In Oriental medicine, the state of the tongue is an important indicator for diagnosing one's health, because it reflects physiological and clinicopathological changes in the inner parts of the body. Tongue diagnosis is both convenient and non-invasive, so it is the most widely used examination in Oriental medicine. However, because tongue diagnosis is strongly affected by examination circumstances, its performance depends on the light source, the viewing angle, the medical doctor's condition, and so on. It is therefore not easy to make tongue diagnosis objective and standardized. To address this problem, in this study we designed a discriminant function for thick and thin coating using color vectors of preprocessed images. Method: 52 subjects diagnosed with white-coated tongue were involved. Of these, 45 were diagnosed with thin coating and 7 with thick coating by Oriental medical doctors, and their tongue images were acquired with a digital tongue diagnosis system. Using these images, we carried out two steps: preprocessing and image analysis. The preprocessing step applies histogram equalization and histogram stretching to each color component, particularly intensity and saturation. This makes the difference between the tongue substance and the tongue coating more visible, so that the coating can be separated easily. In the analysis step, we examined the characteristics of the color values and found the threshold that divides the tongue area into coating and non-coating regions. From the resulting coating image, we then extracted the variables that are important for classifying thick and thin coating. Result: Statistical analysis found two significant vectors, both associated with G, that describe the difference between thick and thin coating very well. Using these two variables, we designed the discriminant function for coating classification and examined its performance. As a result, the overall accuracy of thick and thin coating classification was 92.3%. Discussion: From this result, we expect that the discriminant function is applicable to other coatings in a similar way. It can also be used to make diagnosis objective and standardized.

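
The pipeline described in the abstract (stretch intensity and saturation, threshold out the coating, then apply a linear discriminant on two G-derived features) could be sketched as below. This is a minimal illustration, not the paper's method: the thresholds, feature names, and discriminant weights are hypothetical placeholders.

```python
import numpy as np

def stretch(channel):
    # Histogram-stretch a channel to the full [0, 1] range.
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-8)

def coating_mask(intensity, saturation, i_thresh=0.6, s_thresh=0.4):
    # Coating pixels are bright (high intensity) but weakly colored
    # (low saturation) relative to the tongue body. Thresholds are
    # illustrative, not the paper's values.
    return (stretch(intensity) > i_thresh) & (stretch(saturation) < s_thresh)

def classify_coating(f1, f2, w1=1.0, w2=1.0, bias=-1.0):
    # Linear discriminant on two G-channel-derived coating features;
    # weights and bias here are placeholders for the fitted values.
    return "thick" if w1 * f1 + w2 * f2 + bias > 0 else "thin"
```

In the paper the discriminant coefficients would come from statistical fitting on the 52 labeled subjects; here they are fixed constants for illustration only.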

Multi Scale Tone Mapping Model Using Visual Brightness Functions for HDR Image Compression (HDR 영상 압축을 위한 시각 밝기 함수를 이용한 다중 스케일 톤 맵핑 모델)

  • Kwon, Hyuk-Ju;Lee, Sung-Hak;Chae, Seok-Min;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37A no.12
    • /
    • pp.1054-1064
    • /
    • 2012
  • HDR (high dynamic range) tone mapping algorithms are used in image processing to reduce the dynamic range of an image so that it can be displayed properly on LDR (low dynamic range) devices. Retinex is one family of tone mapping algorithms that provides dynamic range compression, color constancy, and color rendition; it has been developed through multi-scale and luminance-based methods. However, retinex algorithms still have drawbacks such as emphasized noise and desaturation. In this paper, we propose a multi-scale tone mapping algorithm, based on visual brightness functions, that improves the contrast, saturation, and noise of HDR-rendered images. The proposed algorithm uses the HSV color space to preserve the hue and saturation of images, and includes an estimate of the minimum and maximum luminance levels and a visual gamma function to handle variations in viewing conditions. Subjective and objective evaluations show that the proposed algorithm outperforms existing algorithms. It is expected to enhance image quality in fields that require dynamic-range adjustment due to changes in viewing conditions.
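
The multi-scale retinex idea underlying this line of work (compress dynamic range by taking the log-ratio of a channel to its blurred surround at several scales, operating on the value channel only so hue and saturation are preserved) can be sketched as follows. This is a generic sketch, not the paper's algorithm: the surround here is a crude box blur rather than a Gaussian, and the scales are arbitrary.

```python
import numpy as np

def box_blur(img, k):
    # Crude separable box blur standing in for a Gaussian surround.
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def multi_scale_retinex(v, scales=(3, 9, 27)):
    # Average of log-ratio outputs at several surround scales, applied
    # to the V (value) channel; H and S are left untouched so hue and
    # saturation are preserved, as the abstract describes.
    v = v.astype(float) + 1e-6
    out = np.zeros_like(v)
    for k in scales:
        out += np.log(v) - np.log(box_blur(v, k) + 1e-6)
    return out / len(scales)
```

The paper's contribution (visual brightness functions and a viewing-condition-dependent visual gamma) would sit on top of this basic compression stage; neither is modeled here.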

A Study on Visual Attention Factors for Advertising Photographs (광고 사진을 위한 시각적 주의 기초요인 연구)

  • Kim, Dae-Wook
    • Journal of Digital Convergence
    • /
    • v.17 no.3
    • /
    • pp.413-425
    • /
    • 2019
  • We see many images every day; some are stored in memory, while the majority sink into the unconscious. Visual elements are noticed through personal attention or through visual and biological attention factors. No specific, clear account of this visual attention has yet been established, but there are interesting discussions of it in the fields of interior design, visual perception, advertising, and psychology. Advertising photographers are expected to anticipate the effect their work will have on viewers and consumers. In practice, however, adjustments of subject, exposure, color, and post-production, which could have a visual effect on the consumer, have been determined by the photographer's intuition rather than by experimental verification. Advertising photographs provide a specific image related to the advertised object and deliver a certain message, so it is necessary to understand the image's effect in a measurable, visual way. According to previous studies, two major factors affect the visual impression of the viewer: one concerns the type and content of the subject, and the other the density and color of the subject. The purpose of this study is to investigate meaningful changes in visual perception depending on the shape, content, color, and tone of the main subject, and to examine the implications of these visual elements through various analyses.

The Method of Wet Road Surface Condition Detection With Image Processing at Night (영상처리기반 야간 젖은 노면 판별을 위한 방법론)

  • KIM, Youngmin;BAIK, Namcheol
    • Journal of Korean Society of Transportation
    • /
    • v.33 no.3
    • /
    • pp.284-293
    • /
    • 2015
  • The objective of this paper is to determine road surface conditions using images collected from closed-circuit television (CCTV) cameras installed along the roadside. First, we examined techniques for detecting wet surfaces at nighttime. The literature review revealed that image processing using polarization is one of the preferred options; however, the polarization characteristics of road surface images are hard to use at night because lighting is irregular or absent. In this study, we propose a new discriminant for detecting wet and dry road surfaces in nighttime CCTV image data. To detect road surface conditions in night vision, we applied the wavelet packet transform to analyze road surface textures. In addition, to exploit the luminance of night CCTV images, we built an intensity histogram based on the HSI (hue, saturation, intensity) color model. With a set of 200 images taken in the field, we constructed a separating hyperplane with an SVM (support vector machine). Field tests verified the ability to detect wet road surfaces and produced reliable results. The outcome of this study is also expected to be used for monitoring road surfaces to improve safety.
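
The feature-extraction side of this pipeline (wavelet-band texture energies plus an intensity histogram, concatenated into a vector for the SVM) could be sketched as below. This is a simplified stand-in, assuming a plain one-level-per-step Haar decomposition rather than the full wavelet packet transform, and arbitrary bin counts.

```python
import numpy as np

def haar_bands(img):
    # One-level Haar decomposition of a grayscale patch (even dimensions):
    # returns approximation plus LH, HL, HH detail bands.
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a - b + c - d) / 4,
            (a + b - c - d) / 4, (a - b - c + d) / 4)

def texture_features(img, levels=2):
    # Detail-band energies at each level: a stand-in for the wavelet
    # packet texture descriptor described in the abstract.
    feats = []
    approx = img.astype(float)
    for _ in range(levels):
        approx, lh, hl, hh = haar_bands(approx)
        feats += [(lh ** 2).mean(), (hl ** 2).mean(), (hh ** 2).mean()]
    return np.array(feats)

def intensity_histogram(img, bins=8):
    # Normalized histogram of the I channel of the HSI model.
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)
```

The concatenation of these two feature groups would then be fed to an SVM to learn the wet/dry separating hyperplane; the classifier itself is not reproduced here.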

A license plate area segmentation algorithm using statistical processing on color and edge information (색상과 에지에 대한 통계 처리를 이용한 번호판 영역 분할 알고리즘)

  • Seok Jung-Chul;Kim Ku-Jin;Baek Nak-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.353-360
    • /
    • 2006
  • This paper presents a robust algorithm for segmenting the vehicle license plate area from a road image. We consider the features of license plates in three aspects: 1) edges due to the characters in the plate, 2) colors in the plate, and 3) geometric properties of the plate. In the preprocessing step, we compute thresholds based on each feature to decide whether a pixel lies inside a plate; a statistical analysis of sample images is used to compute the thresholds. For a given road image, our algorithm binarizes it using these thresholds. We then select three candidate plate regions by searching the binary image with a moving window, and choose the plate area among the candidates with simple heuristics. The algorithm detects the plate robustly under transformations and differences in the plate's color intensity in the input image. Moreover, the preprocessing step requires only a small number of sample images for the statistical processing. Experimental results show that the algorithm segmented the plate successfully in 97.8% of 228 input images, with an average processing time of 0.676 seconds per 1280×960 image on a 3 GHz Pentium 4 PC with 512 MB of memory.
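
The binarize-then-search structure of the algorithm could be sketched as follows. The thresholds and window geometry here are hypothetical; in the paper the thresholds are learned statistically from sample images and the window reflects plate geometry.

```python
import numpy as np

def binarize(edge_density, color_score, e_thresh=0.5, c_thresh=0.5):
    # A pixel is plate-like when its local edge density is high and its
    # color matches the plate color. Both inputs are per-pixel maps in
    # [0, 1]; the thresholds are illustrative placeholders.
    return (edge_density > e_thresh) & (color_score > c_thresh)

def top_candidates(binary, win_h=20, win_w=60, step=4, k=3):
    # Slide a plate-shaped window over the binary image and keep the
    # k windows containing the most plate-like pixels, mirroring the
    # three-candidate search in the abstract.
    h, w = binary.shape
    scored = []
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            scored.append((int(binary[y:y + win_h, x:x + win_w].sum()), (y, x)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [pos for _, pos in scored[:k]]
```

A final heuristic pass (aspect ratio, fill ratio, position) would pick the actual plate among the k candidates; that step is omitted here.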

A Research regarding the Figuration Comparison of 3D Printing using the Radiation DICOM Images (방사선 DICOM 영상을 이용한 3차원 프린팅 출력물의 형상 비교에 관한 연구)

  • Kim, Hyeong-Gyun;Choi, Jun-Gu;Kim, Gha-Jung
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.2
    • /
    • pp.558-565
    • /
    • 2016
  • Recent 3D printing technology has been grafted onto various medical practices. In light of this trend, this research examines the surface accuracy of figurations produced by 3D printing from 3D models built from DICOM images. The medical images were obtained from animal bone objects, and the objects were printed after conversion to the STL file format used for 3D printing. Finally, both the original animal bones and the 3D-printed figurations were scanned with a 3D scanner, the two 3D models were merged, and their differences were compared. The analysis consisted of a visual comparison of the figurations, a color-coded comparison of the models' scale values, and a numerical comparison. The shape surfaces could not be distinguished visually; the numerical comparison was made from values measured at four different points on the X, Y, and Z coordinates. In the merged model, the shape surface of the 3D-printed figuration was smaller than that of the original object (the animal bone) by an average of -0.49 mm. However, the shape surface was not uniformly reduced in size, and the differences stayed within -0.83 mm in the experiment.

Face recognition rate comparison with distance change using embedded data in stereo images (스테레오 영상에서 임베디드 데이터를 이용한 거리에 따른 얼굴인식률 비교)

  • Park, Jang-Han;Namgung, Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.6
    • /
    • pp.81-89
    • /
    • 2004
  • In this paper, we compare face recognition rates obtained with the PCA algorithm as distance changes, using embedded data from the left and right images of a stereo pair. The proposed method detects the face region by converting from the RGB color space to the YCbCr color space. The extracted face image is also scaled up or down according to the distance change, which yields a more robust face region. In experiments over distances of about 30-200 cm, we established a standard distance of 100 cm and obtained an average recognition rate of 99.05% at 100 cm under scale change. A "super state" is defined as a specific region within the face image normalized to 92×112 pixels; the embedded data are the inner factors extracted from this super state, and face recognition is performed on them with the PCA algorithm. Because learning uses the embedded data rather than the entire normalized 92×112 image, the original images contribute only this specific data. The recognition rates were 99.05% in test 1, 98.93% in test 2, 98.54% in test 3, and 97.85% in test 4, averaging 99.05% for 92×112 images. The experiments therefore showed that applying the distance change rate yields a high recognition rate, improves processing speed, and reduces the amount of face information.
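
The PCA recognition stage common to this and the following entry can be sketched with eigenfaces via SVD: project each flattened face crop onto the leading principal components and match by nearest neighbor in that space. This is the standard eigenface recipe, not the paper's specific embedded-data variant; the component count and gallery are illustrative.

```python
import numpy as np

def pca_fit(gallery, n_components=4):
    # Fit eigenfaces: gallery is (n_faces, n_pixels) of flattened,
    # normalized face crops (92x112 in the paper).
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, basis):
    # Project one flattened face onto the eigenface basis.
    return basis @ (face - mean)

def recognize(face, mean, basis, gallery_feats):
    # Return the index of the gallery face nearest in eigenface space.
    f = project(face, mean, basis)
    return int(np.argmin(np.linalg.norm(gallery_feats - f, axis=1)))
```

The paper's distance handling (rescaling the detected face before projection) would happen upstream of `project`; here all inputs are assumed already normalized to a fixed size.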

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.4
    • /
    • pp.897-904
    • /
    • 2005
  • Face recognition has been an active research area because face image data are easy to acquire and are applicable in a wide range of real-world areas. Due to the high dimensionality of the face image space, however, the images are not easy to process. In this paper, we propose a method that reduces the dimensionality of the facial data and extracts features from holistic face images. The proposed algorithm consists of two parts. The first uses principal component analysis (PCA) to transform three-dimensional color facial images into one-dimensional gray facial images; in this color-to-gray transformation, PCA also enhances image contrast, which raises the recognition rate. The second is an integrated linear discriminant analysis (PCA+LDA) that combines PCA for dimensionality reduction with LDA for discrimination of the facial vectors in a single algorithm; performing the two steps together keeps the algorithm concise and prevents the information loss that occurs when they are performed separately. To validate the proposed method, the algorithm was implemented and tested on well-controlled face databases.

Object/Non-object Image Classification Based on the Detection of Objects of Interest (관심 객체 검출에 기반한 객체 및 비객체 영상 분류 기법)

  • Kim Sung-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.2 s.40
    • /
    • pp.25-33
    • /
    • 2006
  • We propose a method that automatically classifies images into object and non-object images. An object image is an image containing one or more objects, where an object is defined as a set of regions that lie around the center of the image and have a significant color distribution compared with the surrounding (background) regions. We define four measures based on the characteristics of an object to classify the images. The first, center significance, is calculated from the difference in color distribution between the center area and its surrounding region. The second is the variance of significantly correlated colors in the image plane, where significantly correlated colors are the colors of two adjacent pixels that appear more frequently around the center of an image than in its background. The third is the edge strength at the boundary of the object candidate. However, the third measure is computationally expensive because the central objects must be extracted, so we define a fourth measure with similar characteristics that can be computed faster, though with less accuracy. To classify the images, we combine the measures by training a neural network and an SVM, and we compare the classification accuracies of these two classifiers.

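
The first measure, center significance, could be sketched as a histogram distance between the central area and its surround. The margin width, bin count, and choice of total-variation distance are all illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def center_significance(channel, bins=8, margin=4):
    # Total-variation distance between the color histograms of the
    # central area and the surrounding background of one channel
    # (values assumed in [0, 1]). Margin and bins are placeholders.
    h, w = channel.shape
    center = channel[margin:h - margin, margin:w - margin]
    mask = np.ones((h, w), dtype=bool)
    mask[margin:h - margin, margin:w - margin] = False
    hc, _ = np.histogram(center, bins=bins, range=(0.0, 1.0))
    hs, _ = np.histogram(channel[mask], bins=bins, range=(0.0, 1.0))
    hc = hc / max(hc.sum(), 1)
    hs = hs / max(hs.sum(), 1)
    return 0.5 * np.abs(hc - hs).sum()  # 0 = identical, 1 = disjoint
```

A value near 1 indicates a center whose colors differ sharply from the background, the signature of an object image under the abstract's definition; a uniform image scores 0.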

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.457-464
    • /
    • 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data; the performance of CNN algorithms in particular has reached the level of adapting to the source data itself. However, existing image processing methods deal only with the image data themselves and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the environment-specific information of each image but even identical information is represented by different features, which can degrade the performance of an image analysis model. In this paper, we therefore propose a method, based on adversarial learning, to improve the performance of an image color constancy model that uses image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a "Domain Discriminator", which predicts the environment in which an image was taken, and an "Illumination Estimator", which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared with existing methods.
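
The angular error used to evaluate the Illumination Estimator is the standard color constancy metric: the angle between the estimated and ground-truth illuminant RGB vectors, so that the measure is invariant to the illuminant's overall brightness. A minimal implementation:

```python
import numpy as np

def angular_error(estimated, ground_truth):
    # Angle in degrees between estimated and true illuminant RGB
    # vectors; scale-invariant, so only the illuminant's chromaticity
    # matters, not its intensity.
    e = np.asarray(estimated, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    cos = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

For example, an estimate that is a scaled copy of the ground truth scores 0 degrees, while orthogonal vectors score 90; reported results are typically the mean or median of this error over a test set.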