• Title/Summary/Keyword: color images

Improved k-means Color Quantization based on Octree

  • Park, Hyun Jun;Kim, Kwang Baek
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.9-14 / 2015
  • In this paper, we present a color quantization method that complements a disadvantage of K-means color quantization, one of the best-known color quantization methods. We name the proposed method "octree-means" color quantization. K-means color quantization does not use all of the clusters because it initializes the cluster centroids with random values. The proposed method remedies this by using octree color quantization, which is fast and exploits the distribution of colors in the image, to initialize the centroids. We compare the proposed method with six well-known color quantization methods on ten test images to evaluate its performance. The experimental results show a mean squared error (MSE) of 68.29 percent of that of K-means color quantization, with processing time increased by 14.34 percent. Therefore, the proposed method improves on K-means color quantization and performs effective color quantization.
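
As a rough illustration of the octree-seeded initialization described in this abstract, the sketch below uses Pillow's fast octree quantizer to produce the initial centroids for scikit-learn's KMeans. The library choices and the value of k are assumptions, not the authors' implementation.

```python
# Sketch of octree-seeded K-means color quantization (not the authors' code).
# Assumes Pillow >= 9.1 and scikit-learn, and that the image has at least k
# distinct colors.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def octree_means(path, k=16):
    img = Image.open(path).convert("RGB")
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)

    # Octree quantization yields k representative colors that reflect the
    # color distribution of the image; use them as initial centroids.
    palette = img.quantize(colors=k, method=Image.Quantize.FASTOCTREE).getpalette()
    init = np.array(palette[: 3 * k], dtype=np.float64).reshape(k, 3)

    # Run K-means starting from the octree centroids (n_init=1: no random restarts).
    km = KMeans(n_clusters=k, init=init, n_init=1).fit(pixels)
    quantized = km.cluster_centers_[km.labels_].reshape(np.asarray(img).shape)
    return Image.fromarray(quantized.astype(np.uint8))
```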

Color Pattern Recognition with Recombined Single Input Channel Joint Transform Correlator

  • Jeong, Man-Ho
    • Journal of the Optical Society of Korea / v.15 no.2 / pp.140-145 / 2011
  • The joint transform correlator (JTC) is a well-known tool for pattern recognition of color images. Color images have red, green, and blue components; thus, in a conventional JTC, three input channels for these color components are necessary for color pattern recognition. This paper proposes a new color pattern recognition technique that decomposes the color image into its three color components and recombines those components into a single gray image in the input plane. This new technique needs only a single input channel and a single output CCD camera, so a simple JTC can be used. We present various simulated results showing that the proposed technique can accurately recognize and discriminate color differences.
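
The abstract does not specify how the three color components are recombined into one gray image, so the sketch below simply tiles the R, G, and B planes side by side before forming a standard joint power spectrum. This is one plausible reading for illustration only, not the authors' method.

```python
# Illustrative sketch only: tile the R, G, B planes of the reference and the
# target into single gray images, place them in one input plane, and compute
# a basic JTC correlation. Assumes both images have the same size.
import numpy as np
from PIL import Image

def single_channel_jtc(reference_path, target_path):
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float64)
    tgt = np.asarray(Image.open(target_path).convert("RGB"), dtype=np.float64)

    # Recombine each image's three color components into one gray image.
    ref_gray = np.hstack([ref[:, :, c] for c in range(3)])
    tgt_gray = np.hstack([tgt[:, :, c] for c in range(3)])

    # Conventional JTC geometry: reference and target share one input plane.
    joint_input = np.vstack([ref_gray, tgt_gray])
    joint_spectrum = np.abs(np.fft.fft2(joint_input)) ** 2  # joint power spectrum
    correlation = np.abs(np.fft.ifft2(joint_spectrum))      # correlation plane
    return correlation
```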

A Novel Color Conversion Method for Color Vision Deficiency using Color Segmentation (색각 이상자들을 위한 컬러 영역 분할 기반 색 변환 기법)

  • Han, Dong-Il;Park, Jin-San;Choi, Jong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.37-44 / 2011
  • This paper proposes a confusion-line separating algorithm in the CIE Lab color space that uses color segmentation for protanopia and deuteranopia. Images are segmented into regions by grouping adjacent pixels with similar color information using the hue components of the images. To this end, a region-growing method is used, and its seed points are the pixels corresponding to the peaks of a low-pass-filtered hue histogram. To establish a color vision deficiency (CVD) confusion-line map, we defined 512 virtual boxes in the RGB 3-D space so that boxes lying on the same confusion line can be easily identified. We then checked whether segmented regions lay on the same confusion line and performed color adjustment in the CIE Lab color space so that all adjacent regions lie on different confusion lines, providing the best color identification for people with CVD.
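
One concrete way to pick region-growing seeds from the peaks of a low-pass-filtered hue histogram, as described in the abstract, is sketched below. The smoothing window and peak threshold are illustrative assumptions.

```python
# Sketch of the seed-selection step: smooth the hue histogram, find its peaks,
# and take pixels at each peak hue as candidate seed points for region growing.
import cv2
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def hue_peak_seeds(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]                                    # OpenCV hue range: 0..179

    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    smoothed = uniform_filter1d(hist, size=9)             # low-pass filtering
    peaks, _ = find_peaks(smoothed, height=smoothed.max() * 0.05)

    # For each peak hue, collect the coordinates of pixels with that hue.
    return {int(p): np.argwhere(hue == p) for p in peaks}
```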

Re-coloring Methods using the HSV Color Space for people with the Red-green Color Vision Deficiency (적록 색각 이상자를 위한 HSV색공간을 이용한 색변환 기법)

  • Kim, Hyun-Ji;Cho, Jae-Young;Ko, Sung-Jea
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.3 / pp.91-101 / 2013
  • This paper proposes a new re-coloring method for people with red-green color vision deficiency (CVD), who have difficulty discriminating red and green because they perceive the hue and luminance value of these colors abnormally. We introduce a color transformation that adjusts the hue and luminance value in the HSV color space and is determined according to the severity of the CVD. Our aim is to maintain the color differences of the original image while keeping the re-colored image natural to people with normal color vision. Experimental results show that the proposed method yields more comprehensible images for people with red-green CVD while maintaining the naturalness of the re-colored images.
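
The abstract does not give the exact transformation, so the following sketch only illustrates the general idea of shifting hue and scaling value in HSV space in proportion to a severity parameter. The hue ranges, offsets, and scaling factors are assumptions, not the paper's transform.

```python
# Illustrative HSV re-coloring sketch for red-green CVD (not the paper's method).
import cv2
import numpy as np

def recolor_for_red_green_cvd(bgr_image, severity=0.5):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, v = hsv[..., 0], hsv[..., 2]                       # views; OpenCV hue is 0..179

    reddish = (h < 15) | (h > 165)                        # hues near red
    greenish = (h > 45) & (h < 75)                        # hues near green

    # Shift the two confusable hue groups apart and separate their values,
    # in proportion to the severity parameter in [0, 1].
    h[reddish] = (h[reddish] + 30 * severity) % 180
    h[greenish] = (h[greenish] + 45 * severity) % 180
    v[reddish] = np.clip(v[reddish] * (1 - 0.2 * severity), 0, 255)
    v[greenish] = np.clip(v[greenish] * (1 + 0.2 * severity), 0, 255)

    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```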

Analyses of the Effect of Inserting Border Lines between Adjacent Color Regions on Detecting Boundaries (경계선 검출에 대한 인접 칼라 영역간 테두리 선 삽입 효과의 분석)

  • Yoo, Hyeon-Joong;Kim, Woo-Sung;Jang, Young-Beom
    • Journal of IKEEE / v.10 no.1 s.18 / pp.87-95 / 2006
  • This paper presents analyses of the effect of inserting border lines between different color regions on edge detection in color codes; it does not present a new algorithm for color-code recognition. With their role of complementing RFID (radio frequency identification) and the wide, rapid spread of digital cameras, interest in color codes is growing fast. However, severe color distortion in captured images prevents color codes from expanding their applications. To reduce the effect of color distortion, it is desirable to process all pixels in each color region statistically instead of relying on a few pixels sampled from the region. This requires segmentation, and segmentation usually requires edge detection. To help detect edges without disconnections, we inserted border lines two pixels wide between adjacent color regions. Two colors were used for the border lines: one consisting of white pixels and the other of black pixels. Edge detection was performed on images with either kind of border line inserted, and the results were compared with results obtained without inserted border lines. We found that inserting black border lines degraded edge detection by causing a zipper effect, while inserting white border lines improved it compared with the cases without inserted border lines.
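
A minimal sketch of the manipulation described above, inserting a roughly two-pixel-wide white border along the boundaries between labeled color regions before edge detection, is shown below. The label map and the choice of edge detector are assumed inputs, not part of the paper.

```python
# Sketch: paint white border lines between adjacent labeled color regions.
import cv2
import numpy as np

def insert_white_borders(bgr_image, labels):
    """labels: integer array, same H x W as the image, one label per color region."""
    # A pixel lies on a region boundary if a 4-neighbour has a different label.
    boundary = np.zeros(labels.shape, dtype=np.uint8)
    boundary[:-1, :] |= (labels[:-1, :] != labels[1:, :]).astype(np.uint8)
    boundary[:, :-1] |= (labels[:, :-1] != labels[:, 1:]).astype(np.uint8)

    # Dilate the one-pixel boundary to roughly two pixels wide and paint it white.
    boundary = cv2.dilate(boundary, np.ones((2, 2), np.uint8))
    bordered = bgr_image.copy()
    bordered[boundary > 0] = (255, 255, 255)
    return bordered

# Edge detection on the bordered image, e.g. with Canny:
# edges = cv2.Canny(insert_white_borders(img, labels), 100, 200)
```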

Quality grading of Hanwoo (Korean native cattle breed) sub-images using convolutional neural network

  • Kwon, Kyung-Do;Lee, Ahyeong;Lim, Jongkuk;Cho, Soohyun;Lee, Wanghee;Cho, Byoung-Kwan;Seo, Youngwook
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1109-1122 / 2020
  • The aim of this study was to develop a marbling classification and prediction model using small parts of sirloin images and a deep learning algorithm, namely a convolutional neural network (CNN). Samples were purchased from a commercial slaughterhouse in Korea, images were acquired for each grade, and the 500 images were assigned according to their grade: 1++, 1+, 1, and 2 & 3 combined. The image acquisition system consisted of a DSLR camera with a polarization filter to remove diffuse reflectance and two light sources (55 W). A radial correction algorithm was implemented to correct distortion in the original images. Color images of Hanwoo sirloin (mixed feeder cattle, steers, and calves) were divided into 161 × 161 sub-images to train the marbling prediction model. The CNN used in this study has four convolution layers and yields predictions for the marbling grades (1++, 1+, 1, and 2&3). Every layer uses a rectified linear unit (ReLU) activation function, and max-pooling is used to extract the edges between fat and muscle and to reduce the variance of the data. Prediction accuracy was measured using the accuracy and kappa coefficient obtained from a confusion matrix, and the sub-image predictions were aggregated to determine the overall average prediction accuracy. Training accuracy was 100% and test accuracy was 86%, indicating comparably good performance for the CNN. This study demonstrates the potential of predicting marbling grade from color images with a convolutional neural network.
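
A PyTorch sketch of a four-convolution-layer CNN of the kind described above follows. The channel widths, kernel sizes, and classifier head are assumptions; only the 161 × 161 input, ReLU activations, max-pooling, and four output grades are taken from the abstract.

```python
# Sketch of a four-conv-layer CNN for 161 x 161 color sub-images, four grades.
import torch
import torch.nn as nn

class MarblingCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (16, 32, 64, 128):                  # four conv blocks (assumed widths)
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                          # halves H and W
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # 161 -> 80 -> 40 -> 20 -> 10 after four 2x2 max-poolings.
        self.classifier = nn.Linear(128 * 10 * 10, num_classes)

    def forward(self, x):                                 # x: (N, 3, 161, 161)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# logits = MarblingCNN()(torch.randn(1, 3, 161, 161))     # -> shape (1, 4)
```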

Side-View Face Detection Using Both the Location of Nose and Chin and the Color of Image (코와 턱의 위치 및 색상을 이용한 측면 얼굴 검출)

  • 송영준;장언동;박원배;서형석
    • The Journal of the Korea Contents Association / v.3 no.4 / pp.17-22 / 2003
  • In this paper, we propose a new method for detecting side-view faces in color images that contain one or more faces. It uses skin color and the geometrical distance between the nose and the chin. We convert the RGB image to the YCbCr color space and extract candidate face regions using skin color information. The extracted regions are then processed by a morphological filter and labeled. We also correct the gradient of an inclined face image using the projection characteristic of the nose, and detect inclined side-view faces tilted up to 45 degrees to the left or right. We obtained a 92% detection rate on 100 test images.
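
The skin-color candidate-region step described above can be sketched with OpenCV as follows. The Cr/Cb thresholds and the morphological kernel are common illustrative values, not those of the paper.

```python
# Sketch: YCbCr skin-color thresholding, morphological cleanup, and labeling.
import cv2
import numpy as np

def skin_candidate_regions(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # channels: Y, Cr, Cb
    # Keep skin-like pixels: Cr in [133, 173], Cb in [77, 127] (assumed thresholds).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Morphological filtering to clean the mask, then label connected regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    num_labels, labels = cv2.connectedComponents(mask)
    return mask, num_labels, labels
```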

A Basic Study on the Conversion of Sound into Color Image using both Pitch and Energy

  • Kim, Sung-Ill
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.2 / pp.101-107 / 2012
  • This study describes a proposed method of converting an input sound signal into a color image by emulating the human synesthetic skill of associating a sound source with a specific color image. As the first step of sound-to-image conversion, features such as the fundamental frequency (F0) and energy are extracted from the input sound source. A musical scale and an octave are then calculated from the F0 signal, so that the scale, energy, and octave can be converted into the three elements of the HSI model, namely hue, saturation, and intensity, respectively. Finally, a color image in the BMP file format is created as the output of the HSI-to-RGB conversion. We built a basic system based on the proposed method using standard C programming. The simulation results revealed that the output color images created from input sound sources have diverse hues corresponding to changes in the F0 signal, with hue elements of different intensities depending on the octave, using a minimum frequency of 20 Hz. Furthermore, the output images also have various levels of chroma (or saturation), converted directly from the energy.
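
A minimal sketch of the scale/energy/octave-to-color mapping described above is given below. It substitutes the standard-library HSV conversion for the HSI model and assumes simple normalization constants, so it only illustrates the shape of the mapping.

```python
# Sketch: map F0 -> (scale, octave) and energy to a color via an HSV stand-in.
import colorsys
import math

def sound_to_rgb(f0_hz, energy, max_energy=1.0, min_hz=20.0):
    # Semitone index relative to the minimum frequency (20 Hz in the abstract).
    semitones = 12 * math.log2(f0_hz / min_hz)
    octave, scale = divmod(int(round(semitones)), 12)

    hue = scale / 12.0                                    # 12 scale steps -> hue
    saturation = min(energy / max_energy, 1.0)            # energy -> saturation
    intensity = min(octave / 10.0, 1.0)                   # octave -> intensity (assumed scaling)

    r, g, b = colorsys.hsv_to_rgb(hue, saturation, intensity)
    return int(r * 255), int(g * 255), int(b * 255)

# Example: a 440 Hz tone at moderate energy.
# print(sound_to_rgb(440.0, 0.6))
```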

Designing New Hanbok Products Using Saekdong -Using with CLO 3D- (색동을 활용한 신한복 제품의 디자인 개발 -CLO 3D 프로그램을 활용하여-)

  • Heeyoung Kim
    • Journal of the Korean Society of Clothing and Textiles / v.46 no.6 / pp.945-962 / 2022
  • This study examines the use of traditional patterns by new Hanbok brands. A Saekdong print pattern based on previous research was developed and applied to clothing designs. A total of 488 images of printed products from seven new Hanbok brands and 219 images from the collections of the National Folk Museum of Korea were analyzed. Traditional patterns accounted for 47.4% of the printed products of the new Hanbok designs, with the following ratio of use, in descending order: flower patterns, traditional paintings, animals, geometrical designs, Dancheong, text and others, Jogakbo, and Saekdong. Saekdong was found in the products of three brands, with its color or shape modified. To develop the Saekdong image, five colors - red, yellow, blue, white, and green - were selected. The ratio of use and the width of each color were determined with reference to previous studies, and the average color values were determined through color analysis of the Saekdong collections. A total of seven items were designed with the print pattern, and four items were added for coordination, making up four styles. This study aims to use the results of this analysis to provide insights into product development using traditional patterns.

Comparison of Visualization Enhancement Techniques for Himawari-8 / AHI-based True Color Image Production (Himawari-8/AHI 기반 True color 영상 생산을 위한 시각화 향상 기법 비교 연구)

  • Han, Hyeon-Gyeong;Lee, Kyeong-Sang;Choi, Sungwon;Seo, Minji;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Kim, Honghee;Han, Kyung-Soo
    • Korean Journal of Remote Sensing / v.35 no.3 / pp.483-489 / 2019
  • True color images display colors similar to natural colors, which has the advantage of allowing rapid monitoring of complex atmospheric phenomena and changes in surface type. Currently, various organizations produce true color imagery. In Korea, true color images need to be produced as the current weather satellites are replaced by the next generation. Therefore, in this study, visualization enhancement for true color image production was performed using Top of Atmosphere (TOA) reflectance data from the Advanced Himawari Imager (AHI) sensor mounted on the Himawari-8 satellite. To improve visualization, we applied two methods: nonlinear enhancement and histogram equalization. As a result, histogram equalization produced a strongly bluish image in regions with a Solar Zenith Angle (SZA) above 70° compared with nonlinear enhancement, while the nonlinear enhancement technique produced a reddish tint over vegetated areas.
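
The two enhancement methods being compared can be sketched on a single TOA reflectance band as follows. The gamma value and bin count are illustrative assumptions, not the study's exact settings.

```python
# Sketch: nonlinear (gamma-like) stretch vs. histogram equalization of a TOA
# reflectance band scaled to [0, 1], each mapped to an 8-bit display range.
import numpy as np

def nonlinear_enhance(reflectance, gamma=0.45):
    """Simple nonlinear stretch of TOA reflectance to 8-bit."""
    return (np.clip(reflectance, 0, 1) ** gamma * 255).astype(np.uint8)

def histogram_equalize(reflectance, bins=256):
    """Histogram equalization of TOA reflectance to 8-bit."""
    flat = np.clip(reflectance, 0, 1).ravel()
    hist, edges = np.histogram(flat, bins=bins, range=(0, 1))
    cdf = hist.cumsum() / flat.size                       # cumulative distribution
    equalized = np.interp(flat, edges[:-1], cdf)
    return (equalized.reshape(reflectance.shape) * 255).astype(np.uint8)

# Applying either function to the R, G, and B TOA bands and stacking the
# results (e.g. with np.dstack) gives the two true color composites compared above.
```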