• Title/Summary/Keyword: Reference color

EXTRACTION OF THE LEAN TISSUE BOUNDARY OF A BEEF CARCASS

  • Lee, C. H.;H. Hwang
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11c
    • /
    • pp.715-721
    • /
    • 2000
  • In this research, a rule- and neural-net-based boundary extraction algorithm was developed. Extracting the boundary of the region of interest, the lean tissue, is essential for color machine vision based quality evaluation of beef. The major quality features of beef are the size and marbling state of the lean tissue, the color of the fat, and the thickness of the back fat. To evaluate beef quality, extracting the loin part from the sectional image of the beef rib is the crucial first step. Since its boundary is not clear and is very difficult to trace, a neural network model was developed to isolate the loin part from the entire input image. At the network training stage, normalized color image data were used. A model reference of the boundary was determined by a binary feature extraction algorithm using the R (red) channel, and 100 sub-images (11×11 masks selected from the maximum extended boundary rectangle) were used as the training data set. Each mask carries information on the curvature of the boundary, and the basic rule in boundary extraction is adaptation to the known curvature of the boundary. The structured model reference and neural-net-based boundary extraction algorithm was developed, applied to beef images, and the results were analyzed.
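As a rough illustration of the first step described above (the binary model reference from the R channel), the following Python sketch thresholds the red channel, traces the largest boundary, and samples 11×11 sub-images along it; the Otsu thresholding and patch sampling are assumptions, and the neural-network stage is not shown.

```python
# Hypothetical sketch, assuming an 8-bit BGR input image: threshold the R
# channel to obtain a binary model reference of the lean tissue, trace the
# largest boundary, and sample 11x11 sub-images along it. The neural-network
# stage from the paper is not shown; Otsu thresholding is an assumption.
import numpy as np
import cv2

def reference_boundary_patches(bgr_image, num_patches=100, patch=11):
    r_channel = bgr_image[:, :, 2]                       # OpenCV stores images as BGR
    _, binary = cv2.threshold(r_channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)  # largest region

    half = patch // 2
    padded = np.pad(binary, half, mode='constant')       # pad so edge windows fit
    idx = np.linspace(0, len(boundary) - 1, num_patches).astype(int)
    patches = [padded[y:y + patch, x:x + patch]          # window holds local curvature
               for x, y in boundary[idx]]
    return binary, np.stack(patches)
```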

GPGPU based Depth Image Enhancement Algorithm (GPGPU 기반의 깊이 영상 화질 개선 기법)

  • Han, Jae-Young;Ko, Jin-Woong;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2927-2936
    • /
    • 2013
  • In this paper, we propose a noise reduction and hole removal algorithm to improve the quality of depth images when they are used for creating 3D contents. The proposed algorithm uses both the depth image and the corresponding color image. First, an intensity image is generated by converting the RGB color space into the HSI color space. Noise is then removed by estimating, for each pixel, the spatial distance and depth difference between reference and neighbor pixels in the depth image, together with the intensity difference between the corresponding pixels in the color image. Next, the proposed hole-filling method fills detected holes using the Euclidean distance and intensity differences between reference and neighbor pixels in the color image. Finally, we apply a parallel GPGPU structure to the proposed algorithm to speed up its processing time for real-time applications. Experimental results show that the proposed algorithm performs better than conventional algorithms; in particular, it is more effective in reducing edge blurring and removing noise and holes.
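A minimal sketch of the joint filtering idea, not the paper's exact formulation and without the GPGPU parallelization: neighbor depths are weighted by spatial distance, depth difference, and the intensity difference taken from the aligned color image, here using the HSV value channel as a stand-in for HSI intensity.

```python
# Illustrative sketch, not the paper's exact method and without the GPGPU
# parallelization: a joint bilateral-style depth filter whose weights combine
# spatial distance, depth difference, and the intensity difference from the
# aligned color image (HSV value used as a stand-in for HSI intensity).
import numpy as np
import cv2

def joint_depth_filter(depth, bgr, radius=3, sigma_s=2.0, sigma_d=8.0, sigma_i=10.0):
    intensity = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    depth = depth.astype(np.float32)
    out = depth.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # distance term
    h, w = depth.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            d_win = depth[y - radius:y + radius + 1, x - radius:x + radius + 1]
            i_win = intensity[y - radius:y + radius + 1, x - radius:x + radius + 1]
            w_d = np.exp(-(d_win - depth[y, x])**2 / (2 * sigma_d**2))      # depth term
            w_i = np.exp(-(i_win - intensity[y, x])**2 / (2 * sigma_i**2))  # intensity term
            weights = spatial * w_d * w_i
            out[y, x] = np.sum(weights * d_win) / np.sum(weights)
    return out
```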

Adaptive White Point Extraction based on Dark Channel Prior for Automatic White Balance

  • Jo, Jieun;Im, Jaehyun;Jang, Jinbeum;Yoo, Yoonjong;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.6
    • /
    • pp.383-389
    • /
    • 2016
  • This paper presents a novel automatic white balance (AWB) algorithm for consumer imaging devices. While existing AWB methods require reference white patches to correct color, the proposed method performs the AWB function using only an input image in two steps: i) white point detection, and ii) color constancy gain computation. Based on the dark channel prior assumption, a white point or region can be accurately extracted, because the intensity of a sufficiently bright achromatic region is higher than that of other regions in all color channels. In order to finally correct the color, the proposed method computes color constancy gain values based on the Y component in the XYZ color space. Experimental results show that the proposed method gives better color-corrected images than recent existing methods. Moreover, the proposed method is suitable for real-time implementation, since it does not need a frame memory for iterative optimization. As a result, it can be applied to various consumer imaging devices, including mobile phone cameras, compact digital cameras, and computational cameras with coded color.
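A hedged sketch of the two steps described, under assumed constants: the white point is taken from pixels whose dark channel (per-pixel minimum over R, G, B) is highest, and the gains are derived from a luminance-style Y reference (Rec. 709 weights are used here as an assumption, standing in for the paper's XYZ-based computation).

```python
# Hedged sketch with assumed constants: pick the white point where the
# per-pixel dark channel (minimum over R, G, B) is highest, then derive
# per-channel gains from a luminance-style Y reference (Rec. 709 weights are
# an assumption, standing in for the paper's XYZ-based computation).
import numpy as np

def simple_awb(rgb):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    dark_channel = rgb.min(axis=2)                  # bright in all channels => achromatic
    thresh = np.quantile(dark_channel, 0.99)        # top 1% as the candidate white region
    white_point = rgb[dark_channel >= thresh].mean(axis=0)

    y_ref = 0.2126 * white_point[0] + 0.7152 * white_point[1] + 0.0722 * white_point[2]
    gains = y_ref / np.clip(white_point, 1e-6, None)
    return np.clip(rgb * gains, 0.0, 1.0)
```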

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output colors are categorized into seven classes: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). Feature vectors are built from histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a color recognition rate of 94.67% by using a large number of sample images captured in various environments, generating feature vectors that distinguish the different colors, and using an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150×113 resolution image. After the feature vector is constructed, GPU-based color recognition takes 2.316 ms on average, which is 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to images of general objects.
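A CPU-side sketch of the feature construction described (joint hue-saturation and hue-intensity histograms with a saturation weight) and a simple nearest-reference comparison; the bin counts, the weight value, and the histogram-intersection likelihood are assumptions, and the GPU reference-texture machinery is not reproduced here.

```python
# CPU-side sketch of the feature construction: joint hue-saturation and
# hue-intensity histograms with a saturation weight, compared against
# per-color reference vectors. Bin counts, the weight, and the
# histogram-intersection likelihood are assumptions; the GPU reference
# texture machinery is not reproduced.
import numpy as np
import cv2

def color_feature(bgr, bins=16, sat_weight=2.0):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = hsv[:, :, 0], hsv[:, :, 1], hsv[:, :, 2]
    hs_hist, _, _ = np.histogram2d(h.ravel(), s.ravel() * sat_weight, bins=bins,
                                   range=[[0, 180], [0, 256 * sat_weight]])
    hi_hist, _, _ = np.histogram2d(h.ravel(), v.ravel(), bins=bins,
                                   range=[[0, 180], [0, 256]])
    feat = np.concatenate([hs_hist.ravel(), hi_hist.ravel()]).astype(np.float32)
    return feat / (feat.sum() + 1e-9)

def recognize(feat, reference_feats, labels):
    scores = [np.minimum(feat, ref).sum() for ref in reference_feats]  # intersection
    return labels[int(np.argmax(scores))]
```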

Reference White Estimation and Color Temperature Decision Using Reference White Region Extraction (기준 백색 영역 추출을 이용한 기준 백색 추정 및 색온도 결정)

  • 도현철;진성일
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2002.05c
    • /
    • pp.295-298
    • /
    • 2002
  • This paper proposes a new method for estimating the color temperature of the illuminant that formed a single color image. From the given color image, a reference white region is extracted whose R, G, B values, needed to compute the chromaticity coordinates of the illuminant, are not biased toward any particular color. From the (x, y) chromaticity coordinates computed within the extracted reference white region, the color temperature of the illuminant that formed the given color image is finally estimated using isotemperature lines. Comparative experiments between the proposed method and existing methods on 205 images provided by Simon Fraser University in Canada confirmed that the color temperature estimated by the proposed method shows a relatively small error.
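A minimal sketch of the final estimation step, under assumptions: average the R, G, B values of an extracted reference-white region, convert to (x, y) chromaticity, and estimate the color temperature. McCamy's approximation is used here as a stand-in for the paper's isotemperature-line method.

```python
# Minimal sketch of the final step, under assumptions: average the R, G, B
# values of the reference-white region, convert to (x, y) chromaticity, and
# estimate the color temperature. McCamy's approximation stands in for the
# paper's isotemperature-line method.
import numpy as np

def cct_from_white_region(rgb_region):
    """rgb_region: (N, 3) linear RGB samples taken from the reference white area."""
    r, g, b = rgb_region.mean(axis=0)
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b        # linear sRGB -> CIE XYZ
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    n = (x - 0.3320) / (0.1858 - y)                 # McCamy's CCT approximation
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
```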

A Study on Area Color of Gwangbok-ro Based on the Analysis of the Colors of the Facade Designs of Stores Along the Road (광복로 로드숍 파사드디자인의 색채분석을 통한 지역색 연구)

  • Yeo, Mi;Lee, Chang-No
    • Korean Institute of Interior Design Journal
    • /
    • v.22 no.1
    • /
    • pp.247-255
    • /
    • 2013
  • In this study, the colors and characteristics of Gwangbok-ro in Busan, a main street with heavy pedestrian traffic, were analyzed from the standpoint of local imagery, based on an examination of the facade designs of the stores along the road. To that end, store facades and their relation to the city, color, and locality were examined; the status of facade designs on Gwangbok-ro was identified through a case survey; and the color images were then analyzed. The Munsell color system was used as the basic tool for color analysis. The colors of the Gwangbok-ro area were analyzed in terms of the three attributes of hue, brightness (value), and chroma, with the following results. First, the hue analysis indicated that the dominant colors, covering 70% or more of a facade, were mid-brightness, low-chroma GY (36.1%) hues; the subsidiary colors, covering 25% or more, were mid-brightness, low-chroma YR (26.5%) hues; and the accent colors, covering less than 5%, were high-brightness, low-chroma GY (40%) hues. Second, in the brightness analysis, dominant colors were mostly of mid brightness, subsidiary colors of mid brightness, and accent colors of high brightness, with the accent colors showing a particularly strong concentration at high brightness. Third, as for chroma, dominant, subsidiary, and accent colors were all clustered at low chroma, although a small number of accent colors showed notably high chroma. In conclusion, the colors of the Gwangbok-ro area analyzed through the store facade designs are surface colors that reflect the life of the people in the area and artificial colors produced by improvements to the local environment. This study is meaningful in that the image of Gwangbok-ro was identified through the building colors of one part of Busan, and its results are expected to be useful as a reference in future environmental color planning.
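As a toy illustration of the role classification used in the analysis (dominant covering 70% or more of the facade area, subsidiary 25% or more, accent less than 5%), the following sketch assigns measured colors to roles by their area share; the Munsell notations in the example are made up.

```python
# Toy sketch (thresholds taken from the analysis above, example colors made up):
# assign measured facade colors to dominant / subsidiary / accent roles by the
# share of facade area they cover.
def classify_roles(color_areas):
    """color_areas: dict mapping a Munsell notation string to its area share (0-1)."""
    roles = {}
    for color, share in color_areas.items():
        if share >= 0.70:
            roles[color] = "dominant"
        elif share >= 0.25:
            roles[color] = "subsidiary"
        elif share < 0.05:
            roles[color] = "accent"
        else:
            roles[color] = "unclassified"
    return roles

print(classify_roles({"5GY 6/2": 0.72, "10YR 5/2": 0.26, "2.5GY 8/4": 0.02}))
```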

Color comparison between non-vital and vital teeth

  • Greta, Delia Cristina;Colosi, Horatiu Alexandru;Gasparik, Cristina;Dudea, Diana
    • The Journal of Advanced Prosthodontics
    • /
    • v.10 no.3
    • /
    • pp.218-226
    • /
    • 2018
  • PURPOSE. The aim of this study was to define a color space of non-vital teeth and to compare it with the color space of matched vital teeth recorded in the same patients. MATERIALS AND METHODS. In a group of 218 patients aged 17 to 70, the middle third of the buccal surface of 359 devitalized teeth was measured using a clinical spectrophotometer (Vita Easyshade Advance). Lightness ($L^*$), the chromatic parameters ($a^*$, $b^*$), chroma ($C^*$), hue angle (h), and the closest Vita shade in the Classical and 3D Master codifications were recorded. For each patient, the same data were recorded for a vital reference tooth. All measurements were performed by the same operator with the same spectrophotometer, using a standardized protocol for color evaluation. RESULTS. The color coordinates of the non-vital teeth varied as follows: lightness $L^*$: 52.83 to 92.93, $C^*$: 8.23 to 58.90, h: 51.20 to 101.53, $a^*$: -2.53 to 24.80, $b^*$: 8.10 to 53.43. For the reference vital teeth, the ranges of the color parameters were: $L^*$: 60.90 to 97.16, $C^*$: 8.43 to 39.23, h: 75.30 to 101.13, $a^*$: -2.36 to 9.60, $b^*$: 8.36 to 39.23. The color differences between vital and non-vital teeth depended on tooth group, but not on patient age. CONCLUSION. Non-vital teeth had a wider color space than vital ones. Non-vital teeth were darker (decreased lightness) and more saturated (increased chroma), with a wider hue interval. An increased tendency toward positive values on the $a^*$ and $b^*$ axes suggested that non-vital teeth were redder and yellower than vital ones.
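The chroma and hue-angle values reported above follow the standard CIELAB relations; a small helper (an illustration, not taken from the paper) to reproduce them from measured $a^*$ and $b^*$ values:

```python
# The chroma and hue angle follow the standard CIELAB relations
# C* = sqrt(a*^2 + b*^2) and h = atan2(b*, a*); the example a*, b* pair
# below is illustrative only.
import math

def chroma_hue(a_star, b_star):
    chroma = math.hypot(a_star, b_star)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue

print(chroma_hue(9.60, 39.23))
```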

Implementation of Muscular Sense into both Color and Sound Conversion System based on Wearable Device (웨어러블 디바이스 기반 근감각-색·음 변환 시스템의 구현)

  • Bae, Myungjin;Kim, Sungill
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.3
    • /
    • pp.642-649
    • /
    • 2016
  • This paper presents a method for converting muscular sense into both visual and auditory senses based on synesthetic perception. Muscular sense can be described by the rotation angles, direction changes, and degrees of motion of the human body. Synesthetic interconversion can be acquired by learning, so intentional synesthetic phenomena can be created. In this paper, muscular sense was converted into both color and sound signals, which account for the great majority of synesthetic phenomena. Muscular sense was measured using an AHRS (attitude heading reference system), and the roll, yaw, and pitch signals of the AHRS were converted into the three basic elements of color and of sound, respectively. The proposed method was successfully applied to a wearable device, the Samsung Gear S.
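A hedged sketch of the mapping idea: the roll, pitch, and yaw angles from the AHRS are normalized and used as the three components of a color (HSV here) and of a sound (frequency, amplitude, duration here); the specific ranges and pairings are assumptions, not the learned mapping from the paper.

```python
# Hedged sketch: normalize the AHRS roll, pitch, and yaw angles and use them
# as the three components of a color (HSV here) and of a sound (frequency,
# amplitude, duration here). The ranges and pairings are assumptions, not the
# learned mapping from the paper.
def orientation_to_color_and_sound(roll, pitch, yaw):
    """Angles in degrees: roll and pitch in [-180, 180], yaw in [0, 360)."""
    r = (roll + 180.0) / 360.0
    p = (pitch + 180.0) / 360.0
    y = (yaw % 360.0) / 360.0

    color_hsv = (r * 360.0, p, y)          # hue in degrees, saturation, value in [0, 1]
    sound = (220.0 + r * 660.0,            # frequency between 220 and 880 Hz
             0.2 + p * 0.8,                # amplitude
             0.1 + y * 0.9)                # duration in seconds
    return color_hsv, sound
```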

Object Color Identification Embedded System Realization for Uninhabited Stock Management (무인물류관리시스템을 위한 물체컬러식별 임베디드시스템 구현)

  • Lar, Ki-Kong;Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.289-292
    • /
    • 2007
  • This paper presents the realization of an embedded system for object color identification and classification for uninhabited (unmanned) stock management. The embedded system uses an ultrasonic sensor to detect the object and its distance, and extracts a binary image from a USB CCD camera. Color is identified by comparing a reference pattern with the color pattern of the input image, after which the object is moved to its assigned rack in the store. The experimental results suggest that the system can be applied in practice to unmanned stock management by a robot.
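A loose sketch of the comparison step only, under simplifying assumptions: the mean color of the detected object region is matched against stored reference colors and the closest class is returned (the ultrasonic detection and rack-moving logic are omitted).

```python
# Loose sketch of the comparison step only (ultrasonic detection and rack
# handling omitted): match the mean color of the detected object region
# against stored reference colors and return the closest class.
import numpy as np

def identify_color(object_pixels_rgb, reference_colors):
    """object_pixels_rgb: (N, 3) pixels from the object region.
    reference_colors: dict mapping a class name to a reference (R, G, B)."""
    mean_color = np.asarray(object_pixels_rgb, dtype=float).mean(axis=0)
    distances = {name: np.linalg.norm(mean_color - np.asarray(ref, dtype=float))
                 for name, ref in reference_colors.items()}
    return min(distances, key=distances.get)
```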

Image and Display Quality Evaluation

  • Ha, Yeong-Ho
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1224-1227
    • /
    • 2009
  • When evaluating the quality of images and displays, it is important to combine the characteristics perceived by the human visual system and those measured by equipment, using subjective and objective methods, respectively. In objective methods, the quality of a display is measured with colorimetric or radiometric devices according to existing standards covering the color temperature, gamut size, gamma characteristic, and device characterization. Subjective methods, meanwhile, assess image quality through the human visual system by comparison with a reference or counterpart, using metrics such as sharpness, noise, contrast, saturation, and color accuracy. Objective and subjective methods are usually used together, since ultimately it is human observers who watch the images on a display. In addition to existing objective methods, a new image quality metric is introduced for the JPEG compression ratio, reflecting the relationship between gamut size and color fidelity in the CIELAB color space.
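One objective building block behind such color-fidelity comparisons is the CIELAB color difference between a reference color and its reproduction; a minimal helper for the $\Delta E^*_{ab}$ form, shown as an illustration rather than the metric introduced in the paper:

```python
# Sketch of one objective building block behind color-fidelity comparisons:
# the CIELAB color difference (Delta E*ab) between a reference color and its
# reproduction, shown as an illustration rather than the paper's metric.
import math

def delta_e_ab(lab_ref, lab_test):
    """lab_ref, lab_test: (L*, a*, b*) tuples."""
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(lab_ref, lab_test)))

print(delta_e_ab((70.0, 5.0, 20.0), (68.0, 7.0, 18.0)))  # a small color shift
```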
