• Title/Summary/Keyword: color images

A Research regarding the Figuration Comparison of 3D Printing using the Radiation DICOM Images (방사선 DICOM 영상을 이용한 3차원 프린팅 출력물의 형상 비교에 관한 연구)

  • Kim, Hyeong-Gyun;Choi, Jun-Gu;Kim, Gha-Jung
    • The Journal of the Korea Contents Association, v.16 no.2, pp.558-565, 2016
  • Recent 3D printing technology has been grafted onto various medical practices. In light of this trend, this research examines the surface accuracy of shapes printed with a 3D printer from models built from DICOM images. The medical images were obtained from animal bone specimens, and the objects were printed after STL file conversion for 3D printing. The original animal bones and the 3D-printed shapes were then scanned with a 3D scanner, the two 3D models were merged, and the differences between them were compared. The analysis comprised a visual comparison of the shapes, a color comparison of the models' scale values, and a numerical comparison. The shape surfaces could not be distinguished visually. For the numerical comparison, values were measured at four different points on the X, Y, and Z coordinates: the surface of the merged model was smaller in the 3D-printed shape than in the original object (the animal bone) by an average of -0.49 mm. The reduction was not uniform across the entire surface, however, and the differences stayed within -0.83 mm in this experiment.
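
This kind of shape comparison can be prototyped as a nearest-neighbour surface deviation between two scans. Below is a minimal, hypothetical sketch assuming both the reference bone and the print are available as Nx3 point arrays; the function names and synthetic data are ours, not the paper's.

```python
# Hypothetical sketch: nearest-neighbour surface deviation between two scans,
# in the spirit of the merged-model comparison above. Assumes both scans are
# Nx3 point arrays (e.g. sampled from the scanner meshes); data is synthetic.
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(reference_pts, printed_pts):
    """Unsigned point-to-nearest-point distances from the printed scan
    to the reference scan (signed deviation needs surface normals)."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(printed_pts)
    return dists.mean(), dists.max()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((1000, 3))                        # stand-in for the bone scan
    printed = ref + rng.normal(0, 0.0005, ref.shape)   # stand-in for the print scan
    mean_d, max_d = surface_deviation(ref, printed)
    print(f"mean deviation {mean_d:.4f}, max deviation {max_d:.4f}")
```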

Face recognition rate comparison with distance change using embedded data in stereo images (스테레오 영상에서 임베디드 데이터를 이용한 거리에 따른 얼굴인식률 비교)

  • 박장한;남궁재찬
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.6, pp.81-89, 2004
  • In this paper, we compare the face recognition rate of a PCA algorithm under distance changes, using embedded data extracted from the left and right images of stereo pairs. The proposed method detects the face region by converting from the RGB color space to the YCbCr color space, and scales the extracted face image up or down according to the distance change, which yields a more robust face region. In experiments over distances of about 30-200 cm, a reference distance of 100 cm was established, and an average recognition rate of 99.05% (at 100 cm) was obtained under scale changes. A 'super state' is defined as a specific region within the normalized 92×112 image, and the embedded data are the inner factors extracted from this super state; face recognition is then performed with the PCA algorithm. Because only the embedded data are learned rather than the whole image, recognition can be performed from the limited 92×112 image size: on 92×112 images the method achieved an average recognition rate of 99.05%, with 99.05% in test 1, 98.93% in test 2, 98.54% in test 3, and 97.85% in test 4. The experiments thus showed that applying the distance change rate yields a high recognition rate, improves processing speed, and reduces the amount of face information.
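
For readers unfamiliar with the PCA step, here is a minimal eigenface-style sketch; it is not the authors' stereo/embedded-data pipeline. Faces are assumed to be pre-cropped 92×112 grayscale vectors, and the data below is synthetic.

```python
# Minimal eigenface-style PCA recognition sketch (illustrative only).
import numpy as np

def pca_fit(X, n_components=30):
    mean = X.mean(axis=0)
    # SVD on the centered data gives the principal axes (eigenfaces).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    return (X - mean) @ components.T

def nearest_match(train_feats, train_labels, probe_feat):
    d = np.linalg.norm(train_feats - probe_feat, axis=1)
    return train_labels[np.argmin(d)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.random((40, 92 * 112))    # 40 synthetic "faces" at 92x112
    labels = np.arange(40)
    mean, comps = pca_fit(train)
    feats = project(train, mean, comps)
    probe = train[7] + rng.normal(0, 0.01, train[7].shape)
    print("matched identity:",
          nearest_match(feats, labels, project(probe[None], mean, comps)[0]))
```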

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering, v.9 no.4, pp.897-904, 2005
  • Face recognition has been an active research area because face image data are easy to acquire and are applicable to a wide range of real-world tasks. Due to the high dimensionality of the face image space, however, processing face images is not easy. In this paper, we propose a method that reduces the dimensionality of facial data and extracts features from holistic face images. The proposed algorithm consists of two parts. First, principal component analysis (PCA) is used to transform three-dimensional color facial images into one-dimensional gray facial images; this PCA-based transformation also enhances image contrast, which raises the recognition rate. Second, an integrated linear discriminant analysis (PCA+LDA) combines PCA for dimensionality reduction and LDA for discrimination of the facial vectors in a single algorithm, which allows a concise formulation and prevents the information loss that can occur when the two steps are performed separately. To validate the proposed method, the algorithm was implemented and tested on well-controlled face databases.
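
The PCA-then-LDA idea can be sketched with scikit-learn's off-the-shelf components. Note that the paper's "integrated LDA" is a combined formulation, which this simple two-stage pipeline only approximates; the data below is synthetic.

```python
# Two-stage PCA + LDA sketch (an approximation of the integrated method).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.random((60, 300))            # 60 samples, 300-dim "face" vectors
y = np.repeat(np.arange(6), 10)      # 6 identities, 10 samples each

# PCA reduces dimensionality; LDA then discriminates between identities.
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```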

Object/Non-object Image Classification Based on the Detection of Objects of Interest (관심 객체 검출에 기반한 객체 및 비객체 영상 분류 기법)

  • Kim Sung-Young
    • Journal of the Korea Society of Computer and Information, v.11 no.2 s.40, pp.25-33, 2006
  • We propose a method that automatically classifies images into object and non-object images. An object image is an image containing one or more objects, where an object is defined as a set of regions that lie around the center of the image and have a significantly different color distribution from the surrounding (background) regions. We define four measures based on these characteristics of an object. The first, center significance, is calculated from the difference in color distribution between the center area and its surrounding region. The second is the variance of significantly correlated colors in the image plane, where significantly correlated colors are the colors of two adjacent pixels that appear more frequently around the center of an image than in its background. The third is the edge strength at the boundary of the object candidate; this measure is computationally expensive, however, because the central object must be extracted first. We therefore define a fourth measure with similar characteristics that can be computed faster, at the cost of somewhat lower accuracy. To classify images, we combine the measures by training a neural network and an SVM, and we compare the classification accuracies of the two classifiers.
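
As an illustration of the first measure, a center-versus-surround color histogram distance might be computed along the following lines; the window fraction, bin count, and function name are our assumptions, not the paper's.

```python
# Illustrative sketch of a "center significance" style measure: L1 distance
# between the quantized color histograms of the central window and the
# surrounding background region.
import numpy as np

def center_significance(img, bins=8, center_frac=0.5):
    """img: HxWx3 uint8 array. Returns L1 histogram distance in [0, 2]."""
    h, w = img.shape[:2]
    ch, cw = int(h * center_frac), int(w * center_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y0 + ch, x0:x0 + cw] = True

    def hist(pixels):
        q = (pixels // (256 // bins)).astype(int)       # quantize each channel
        idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
        counts = np.bincount(idx, minlength=bins ** 3).astype(float)
        return counts / counts.sum()

    return np.abs(hist(img[mask]) - hist(img[~mask])).sum()
```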

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering, v.10 no.11, pp.457-464, 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data; in particular, CNN algorithms have reached a level where the source data itself can be adjusted. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the differing environmental information of each image but also identical content is represented by different features, which can degrade the performance of an image analysis model. In this paper, we therefore propose an adversarial-learning-based method that improves the performance of an image color constancy model while simultaneously using image data generated in heterogeneous environments. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator,' which predicts the environment in which an image was taken, and an 'Illumination Estimator,' which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared with existing methods.
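
A conceptual sketch of such adversarial training, using the standard gradient-reversal trick in PyTorch, is shown below; the architectures, losses, and data are illustrative stand-ins rather than the authors' implementation.

```python
# Conceptual sketch: an illumination estimator trained so that a domain
# discriminator cannot tell which capture environment a feature came from
# (gradient-reversal adversarial training). All shapes/data are synthetic.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip the gradient flowing back into the features

features = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU())
illum_head = nn.Linear(128, 3)       # predicts an RGB illuminant
domain_head = nn.Linear(128, 2)      # predicts the capture environment

opt = torch.optim.Adam([*features.parameters(),
                        *illum_head.parameters(),
                        *domain_head.parameters()], lr=1e-3)

x = torch.rand(16, 3 * 32 * 32)            # synthetic image batch
illum_gt = torch.rand(16, 3)               # synthetic illuminant labels
domain_gt = torch.randint(0, 2, (16,))     # synthetic domain labels

f = features(x)
loss = nn.functional.mse_loss(illum_head(f), illum_gt) \
     + nn.functional.cross_entropy(domain_head(GradReverse.apply(f)), domain_gt)
opt.zero_grad()
loss.backward()
opt.step()
```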

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society, v.23 no.2, pp.11-22, 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system built on image processing algorithms. IMToon allows general users to easily and efficiently produce the frames of a cartoon from images. The authoring system provides two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images in two steps, image-based cartoon shading and outline drawing. Image-based cartoon shading takes an image of the desired scene from the user, separates the brightness information from the color model of the input image, simplifies it to a shading range with the desired number of steps, and recreates the image in a cartoon style. The final cartoon-style image is then produced in the outline drawing step, in which outlines obtained by edge detection are applied to the shaded image. The interactive story editor is used to add text balloons and subtitles in a dialog structure, completing a cartoon scene that delivers a story as in a webtoon or comic book. In addition, the cartoon effector is extended from still images to video, so that it can be applied to both. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
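
The two cartoon-effector steps (shading quantization, then outline overlay) might be approximated in OpenCV roughly as follows; the level count and Canny thresholds are illustrative guesses, not the paper's settings.

```python
# Rough OpenCV sketch of cartoon shading + outline drawing.
import cv2
import numpy as np

def cartoonize(img_bgr, levels=4):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    step = 256 // levels
    v_quant = (v // step) * step + step // 2       # simplify the shading range
    shaded = cv2.cvtColor(cv2.merge([h, s, v_quant.astype(np.uint8)]),
                          cv2.COLOR_HSV2BGR)
    edges = cv2.Canny(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    shaded[edges > 0] = 0                           # draw outlines in black
    return shaded
```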

Evaluating the Reliability and Repeatability of the Digital Color Analysis System for Dentistry (치과용 디지털 색상 분석용 기기의 정확성과 재현 능력에 대한 평가)

  • Jeong, Joong-Jae;Park, Su-Jung;Cho, Hyun-Gu;Hwang, Yun-Chan;Oh, Won-Mann;Hwang, In-Nam
    • Restorative Dentistry and Endodontics, v.33 no.4, pp.352-368, 2008
  • This study evaluated the reliability of a digital color analysis system for dentistry (ShadeScan, CYNOVAD, Montreal, Canada). Sixteen tooth models were made by injecting A2-shade chemically cured resin for temporary crowns into impressions acquired from 16 adults, and the surfaces of the model teeth were polished with resin polishing cloth. The window of the ShadeScan handpiece was placed on the labial surface of each tooth, tooth images were captured, and each tooth shade was analyzed with the ShadeScan software. The captured images were grouped and compared with one another. Two models were selected to evaluate the repeatability of ShadeScan, and shade analysis was performed 10 times for each tooth. In addition, to ascertain the color difference within a single shade code as analyzed by ShadeScan, CIE L*a*b* values of the Gradia Direct shade guide (GC, Tokyo, Japan) were measured on white and black backgrounds with a Spectrolino (GretagMacbeth, USA), and the shade map of each shade guide tab was captured with ShadeScan. No tooth was analyzed as the A2 shade or as a single uniform shade, and shade mapping analyses of the same tooth revealed similar shade and distribution except in the incisal third. The color difference (ΔE*) among shade maps analyzed as the same shade by ShadeScan was above 3. Within the limits of this study, the digital color analysis instrument for dentistry shows relatively high repeatability, but its accuracy remains questionable.
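
For context, the ΔE* threshold of 3 mentioned above is a color difference in CIE L*a*b* space; a plain CIE76 ΔE*ab computation looks like this (the study's instrument may well use a different formula).

```python
# CIE76 color difference: Euclidean distance in L*a*b* space.
import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Example: two hypothetical measurements of the "same" shade code
print(delta_e_cie76((65.0, 2.1, 18.4), (62.8, 3.0, 20.1)))  # approx. 2.9
```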

Image Retrieval Using Multiresolution Color and Texture Features in Wavelet Transform Domain (웨이브릿 변환 영역의 칼라 및 질감 특징을 이용한 영상검색)

  • Chun Young-Deok;Sung Joong-Ki;Kim Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.1 s.307, pp.55-66, 2006
  • We propose a progressive image retrieval method based on an efficient combination of multiresolution color and texture features in the wavelet transform domain. As the color feature, a color autocorrelogram of the hue and saturation components is chosen; as texture features, BDIP and BVLC moments of the value component are chosen. For the selected features, multiresolution feature vectors are extracted from all decomposition levels in the wavelet domain. The multiresolution color and texture feature vectors are efficiently combined by normalization with respect to their dimensions and standard deviation vectors, respectively; the vector components are quantized efficiently with their storage space in mind; and the computational cost of similarity computation is reduced by a progressive retrieval strategy. Experimental results show that the proposed method yields on average 15% better performance in precision vs. recall, and on average 0.2 better ANMRR, than methods using the color histogram, color autocorrelogram, SCD, CSD, wavelet moments, EHD, BDIP and BVLC moments, and a combination of color histogram and wavelet moments, respectively. In particular, the proposed method clearly outperforms the other methods on image databases containing images of various resolutions.
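
As a rough illustration of the color feature, a simplified (aggregate) color autocorrelogram, the probability that a pixel at distance d has the same quantized color, can be computed as below. The true autocorrelogram is tabulated per color, and the quantization and distance set here are our assumptions.

```python
# Simplified, aggregate autocorrelogram sketch: for each distance d, the
# probability that two pixels d apart (horizontally or vertically) share
# the same quantized color.
import numpy as np

def autocorrelogram(img, bins=8, distances=(1, 3, 5, 7)):
    """img: HxWx3 uint8 array. Returns one probability per distance."""
    q = (img // (256 // bins)).astype(int)
    labels = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    feats = []
    for d in distances:
        same = ((labels[:, d:] == labels[:, :-d]).mean()
                + (labels[d:, :] == labels[:-d, :]).mean()) / 2
        feats.append(same)
    return np.array(feats)
```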

Low Resolution Depth Interpolation using High Resolution Color Image (고해상도 색상 영상을 이용한 저해상도 깊이 영상 보간법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • Smart Media Journal, v.2 no.4, pp.60-65, 2013
  • In this paper, we propose a method for generating a high-resolution disparity map using a low-resolution time-of-flight (TOF) depth camera together with a color camera. The TOF depth camera is efficient since it measures the range of objects in real time using an infrared (IR) signal, quantizes the range information, and provides a depth image. However, it suffers from problems such as noise and lens distortion, and its output resolution is too low for 3D applications. It is therefore essential not only to reduce the noise and distortion but also to enlarge the resolution of the TOF depth image. Our method generates a depth map for the color image using the TOF camera and the color camera simultaneously. We warp the depth value at each pixel to the corresponding color image position and segment the color image with the mean-shift segmentation method. We then define a cost function based on the color values and the segmented color values, and apply a weighted average filter whose weights are given by the random walk probability computed from the cost function of each block. Experimental results show that the proposed method generates the depth map efficiently and that good virtual view images can be reconstructed from it.
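
The color-guided weighting step can be illustrated with a joint-bilateral-style filter, substituting a simple color-similarity weight for the paper's random-walk probabilities; all parameter values below are guesses.

```python
# Rough sketch of color-guided depth filling: missing depth pixels get a
# weighted average of known neighbours, weighted by color similarity.
import numpy as np

def guided_depth_fill(depth, color, radius=3, sigma_c=10.0):
    """depth: HxW array with zeros at missing pixels; color: HxWx3 float."""
    h, w = depth.shape
    out = depth.astype(float).copy()
    ys, xs = np.where(depth == 0)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d_patch = depth[y0:y1, x0:x1].astype(float)
        c_patch = color[y0:y1, x0:x1]
        wgt = np.exp(-np.linalg.norm(c_patch - color[y, x], axis=2) ** 2
                     / (2 * sigma_c ** 2))
        wgt[d_patch == 0] = 0.0            # only known depths contribute
        if wgt.sum() > 0:
            out[y, x] = (wgt * d_patch).sum() / wgt.sum()
    return out
```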

Image Generator Design for OLED Panel Test (OLED 패널 테스트를 위한 영상 발생기 설계)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE, v.24 no.1, pp.25-32, 2020
  • In this paper, we propose an image generator for OLED panel testing that can compensate color coordinates and luminance, based on panel defect inspection and optical measurement, while displaying images on the OLED panel. The proposed generator operates in two stages: image generation, and compensation of color coordinates and luminance using optical measurement. In the image generation stage, the generator receives the panel information required to drive the panel and adjusts its output settings accordingly; images are output in a digital RGB format. The pattern generation algorithm inside the generator outputs color and gray image data by transmitting color data over a 24-bit data line, synchronized to a signal that matches the panel's resolution. In the compensation stage, the generator displays an image on the OLED panel and corrects the portions where the color coordinates and luminance measured by an optical module differ from the reference data. To evaluate the accuracy of the proposed image generator, a Xilinx Spartan-6 series XC6SLX25-FG484 FPGA was used, with ISE 14.5 as the design tool. For the image generation stage, the target setting values and the simulation results for the digital RGB output, verified with an oscilloscope, were confirmed to match. The color coordinate and luminance compensation using optical measurement showed accuracy within the error rate specified by the panel manufacturer.
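
Setting the FPGA and timing side aside, the pattern-generation idea, emitting solid-color and gray test frames as 24-bit RGB arrays for a given panel resolution, can be sketched in a few lines of numpy; the resolution and colors below are arbitrary examples.

```python
# Toy sketch of test-pattern generation as 24-bit RGB frames.
import numpy as np

def solid_pattern(width, height, rgb):
    """Full-frame solid color, e.g. for per-channel panel inspection."""
    frame = np.empty((height, width, 3), dtype=np.uint8)
    frame[:] = rgb
    return frame

def gray_ramp(width, height):
    """Horizontal 0-255 gray ramp, a common panel test pattern."""
    ramp = np.linspace(0, 255, width, dtype=np.uint8)
    return np.repeat(ramp[None, :, None], 3, axis=2).repeat(height, axis=0)

red_frame = solid_pattern(1920, 1080, (255, 0, 0))
ramp_frame = gray_ramp(1920, 1080)
```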