• Title/Summary/Keyword: Color Image Data Processing


Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.2 s.308
    • /
    • pp.63-71
    • /
    • 2006
  • Denoising and reconstruction of color images are extensively studied in the fields of computer vision and image processing. Denoising and reconstruction of color face images are especially more difficult than for natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is difficult to remove with vectorial color filters. The proposed method is composed of the following five steps: training of a canonical eigenface space using PCA; automatic extraction of facial features using an active appearance model; smoothing of the reconstructed color image using a bilateral filter; extraction of noise regions using the variance of the training data; and reconstruction using the partial information of the input image (excluding the noise regions), followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of the input faces while efficiently removing complex color noise.
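
A minimal numpy sketch of the reconstruction-and-blending idea above (not the authors' implementation): the face is projected onto a trained eigenface basis using only the pixels outside the detected noise regions, and the reconstruction is blended back into those regions. The array shapes and the blending weight are assumptions.

```python
import numpy as np

def pca_reconstruct(face, mean_face, eigenfaces, noise_mask, blend=0.8):
    """Reconstruct a flattened face from an eigenface basis while ignoring
    pixels flagged as noise, then blend the result into the noisy regions.

    face       : (D,)   input face image, flattened
    mean_face  : (D,)   mean face of the training set
    eigenfaces : (K, D) top-K principal components (one per row)
    noise_mask : (D,)   boolean, True where pixels were detected as noise
    """
    valid = ~noise_mask
    centered = face.astype(float) - mean_face
    # Least-squares projection coefficients using only the noise-free pixels.
    A = eigenfaces[:, valid].T                      # (D_valid, K)
    coeffs, *_ = np.linalg.lstsq(A, centered[valid], rcond=None)
    recon = mean_face + eigenfaces.T @ coeffs       # full reconstruction (D,)
    # Keep original pixels outside the noise regions, blend inside them.
    out = face.astype(float).copy()
    out[noise_mask] = blend * recon[noise_mask] + (1 - blend) * out[noise_mask]
    return out
```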

Hybrid Color and Grayscale Images Encryption Scheme Based on Quaternion Hartley Transform and Logistic Map in Gyrator Domain

  • Li, Jianzhong
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.1
    • /
    • pp.42-54
    • /
    • 2016
  • A hybrid color and grayscale image encryption scheme based on the quaternion Hartley transform (QHT), the two-dimensional (2D) logistic map, double random phase encoding (DRPE) in the gyrator transform (GT) domain, and three-step phase-shifting interferometry (PSI) is presented. First, we propose a new color image processing tool termed the quaternion Hartley transform and develop an efficient method to calculate the QHT of a quaternion matrix. In the presented encryption scheme, the original color and grayscale images are represented by quaternion algebra and processed holistically in a vector manner using the QHT. To enhance the security level, a 2D logistic-map-based scrambling technique is designed to permute the complex amplitude formed by the components of the QHT-transformed original images. Subsequently, the scrambled data is encoded by the GT-based DRPE system. For convenience of storage and transmission, the resulting encrypted signal is recorded as real-valued interferograms using three-step PSI. The parameters of the scrambling method, the GT orders, and the two random phase masks form the keys for decryption of the secret images. Simulation results demonstrate that the proposed scheme has a high security level and certain robustness against data loss, noise disturbance, and attacks such as the chosen-plaintext attack.
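
The scrambling step lends itself to a short sketch. The scheme above uses a 2D logistic map; the version below uses a 1D logistic map purely for brevity, turning the chaotic sequence into a pixel permutation of the complex amplitude. The key values x0 and r are illustrative, not from the paper.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and return n values."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def scramble(complex_amplitude, x0=0.3731, r=3.9998):
    """Permute the pixels of a complex amplitude with a chaotic ordering."""
    flat = complex_amplitude.ravel()
    perm = np.argsort(logistic_sequence(x0, r, flat.size))
    return flat[perm].reshape(complex_amplitude.shape), perm

def unscramble(scrambled, perm):
    """Invert the permutation (perm is part of the decryption key)."""
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```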

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology
    • /
    • v.36 no.1
    • /
    • pp.32-40
    • /
    • 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation according to the wavelength of light and reflection by very small floating objects cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images using the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates the enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN qualitatively and quantitatively improved image quality in various underwater environments, whereas FUnIE-GAN showed performance differences depending on the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles by improving the visibility of underwater scenes.
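
To give a sense of how one of the quoted metrics is computed, here is a rough UCIQE sketch in CIELab space: a weighted sum of chroma spread, luminance contrast, and mean saturation. The weights are the commonly cited values and the contrast/saturation definitions vary between implementations, so treat this as illustrative rather than the exact evaluation code used in the paper.

```python
import numpy as np
from skimage import color

def uciqe(rgb, c=(0.4680, 0.2745, 0.2576)):
    """Approximate UCIQE: weighted sum of chroma standard deviation, luminance
    contrast, and mean saturation computed in CIELab (coefficients are the
    commonly cited values; exact definitions differ between implementations)."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                                  # chroma spread
    con_l = np.percentile(L, 99) - np.percentile(L, 1)      # luminance contrast
    sat = chroma / np.maximum(np.sqrt(chroma ** 2 + L ** 2), 1e-6)
    return c[0] * sigma_c + c[1] * con_l + c[2] * sat.mean()
```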

An Experimental Investigation of Unsteady Mixed Convection in a Horizontal Channel with Cavity Using Thermo-Sensitive Liquid Crystals

  • Bae, Dae-Seok;Cai, Long-Ji;Kim, Eun-Pil
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.33 no.7
    • /
    • pp.987-993
    • /
    • 2009
  • An experimental study is performed to investigate unsteady mixed convection in a horizontal channel with a heat source. Particle image velocimetry (PIV) with thermo-sensitive liquid crystal (TLC) tracers is used for visualization and analysis. This method allows simultaneous measurement of the velocity and temperature fields at a given instant of time. Quantitative temperature and velocity data are obtained by applying color-image processing to the visualized image, and a neural network is applied to the color-to-temperature calibration. It is found that the periodic flow of mixed convection in a cavity appears at very low Reynolds numbers (Re < 0.4), and that the period decreases with increasing Reynolds number and increases with increasing aspect ratio.
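
The color-to-temperature calibration can be pictured as a small regression network mapping a TLC color value (here, hue) to temperature. The sketch below uses scikit-learn's MLPRegressor with made-up calibration points; the hue values, temperatures, and network size are placeholders, not the calibration used in the experiment.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder calibration data: TLC hue measured at known plate temperatures.
hue_samples = np.array([[0.02], [0.08], [0.15], [0.25], [0.40], [0.55]])
temps_deg_c = np.array([30.2, 31.0, 31.8, 32.5, 33.4, 34.1])

# Small MLP standing in for the neural-network color-to-temperature calibration.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(hue_samples, temps_deg_c)

# Apply the calibration to a hue field extracted from a visualized image.
hue_field = np.random.rand(64, 64)        # stand-in for the measured hue image
temperature_field = net.predict(hue_field.reshape(-1, 1)).reshape(64, 64)
```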

A Study on the Quantitative Visualization of Rayleigh-Bénard Convection Using Thermochromic Liquid Crystal (감온액정을 이용한 Rayleigh-Bernard 대류의 정량적 가시화에 관한 연구)

  • 배대석;김진만;권오봉;이동형;이연원;김남식
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.27 no.3
    • /
    • pp.395-404
    • /
    • 2003
  • Quantitative temperature and velocity data were obtained simultaneously by using liquid crystal tracers. PIV (particle image velocimetry) based on a grey-level cross-correlation method was used for visualization and analysis of the flow field. The temperature gradient was obtained by applying color-image processing to the visualized image, and a neural-network algorithm was applied to the color-to-temperature calibration. This simultaneous measurement was applied to Rayleigh-Bénard convection. This paper describes the method and presents the quantitative visualization of Rayleigh-Bénard convection and the effects of aspect ratio and viscosity. The experimental results were also compared with numerical results.
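
A grey-level cross-correlation PIV step can be illustrated with a brute-force window search: the interrogation window from the first frame is slid over a larger search window in the second frame, and the normalized cross-correlation peak gives the displacement. This is a minimal sketch, not the code used in the study.

```python
import numpy as np

def window_displacement(win_a, search_b):
    """Return the (dy, dx) shift of interrogation window win_a inside the
    larger search window search_b of the next frame, found by grey-level
    normalized cross-correlation (brute-force search over integer shifts)."""
    h, w = win_a.shape
    H, W = search_b.shape
    a = win_a - win_a.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            b = search_b[dy:dy + h, dx:dx + w]
            b = b - b.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
            score = (a * b).sum() / denom
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```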

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
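
The warping of ToF depth to color image positions can be sketched as a standard back-project/transform/reproject step. The intrinsics, extrinsics, and the nearest-pixel splatting below are assumptions for illustration; occlusion handling and hole filling, which a real system needs, are omitted.

```python
import numpy as np

def warp_depth_to_color(depth, K_tof, K_color, R, t, color_shape):
    """Back-project ToF depth pixels to 3D, transform them into the color
    camera frame, and splat each depth onto the nearest color-image pixel.
    K_tof, K_color: 3x3 intrinsics; R, t: ToF-to-color extrinsics."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    keep = z > 0                                        # drop invalid depths
    pix = np.stack([xs.ravel()[keep], ys.ravel()[keep], np.ones(keep.sum())])
    pts_tof = (np.linalg.inv(K_tof) @ pix) * z[keep]    # 3D points, ToF frame
    pts_color = R @ pts_tof + t.reshape(3, 1)           # into color camera frame
    proj = K_color @ pts_color
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    warped = np.zeros(color_shape)
    ok = (u >= 0) & (u < color_shape[1]) & (v >= 0) & (v < color_shape[0])
    warped[v[ok], u[ok]] = pts_color[2, ok]             # depth in color frame
    return warped
```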

A Proposal for a Processor for Improved Utilization of High-Resolution Satellite Images

  • Choi, Kyeong-Hwan;Kim, Sung-Jae;Jo, Yun-Won;Jo, Myung-Hee
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.211-214
    • /
    • 2007
  • With the recent development of spatial information technology, the relative importance of satellite image contents has increased to about 62%, the techniques related to satellite images have improved, and demand for them is gradually increasing. Accordingly, a standard processing method covering the whole process from collection by satellites to distribution of satellite images is required in many countries for efficient distribution of images and improvement of their utilization. This study presents a processor standardization technique for the preprocessing of satellite images, including geometric correction, orthorectification, color adjustment, interpolation for DEM (Digital Elevation Model) production, rearrangement, and image data management, which will standardize the currently subjective, complex process and improve utilization by making the images easier for general users to work with.
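
As one concrete example of the preprocessing steps listed above, geometric correction is often set up as a least-squares fit of a mapping from image coordinates to map coordinates using ground control points. The affine model below is a simplified sketch; operational processors typically use higher-order polynomial or rigorous sensor models.

```python
import numpy as np

def fit_affine(gcp_image_xy, gcp_map_xy):
    """Least-squares affine transform from image pixel coordinates to map
    coordinates, estimated from N >= 3 ground control point pairs (N, 2)."""
    n = gcp_image_xy.shape[0]
    A = np.hstack([gcp_image_xy, np.ones((n, 1))])      # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, gcp_map_xy, rcond=None)  # 3x2 affine matrix
    return M

def apply_affine(M, image_xy):
    """Map (N, 2) image coordinates to map coordinates with the fitted model."""
    pts = np.hstack([image_xy, np.ones((image_xy.shape[0], 1))])
    return pts @ M
```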


Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.457-464
    • /
    • 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has reached a level where they can adapt to the source data itself. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which the images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that even identical information can be represented by different features across environments, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose a method to improve the performance of an image color constancy model based on adversarial learning that uses image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which the image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments to evaluate the proposed methodology, it showed superior performance in terms of angular error compared to existing methods.
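
The evaluation metric mentioned above, angular error, measures the angle between the estimated and ground-truth illuminant vectors in RGB space. A minimal sketch:

```python
import numpy as np

def angular_error(estimated_rgb, ground_truth_rgb):
    """Angle in degrees between estimated and ground-truth illuminant vectors,
    the standard error measure for color constancy."""
    e = np.asarray(estimated_rgb, dtype=float)
    g = np.asarray(ground_truth_rgb, dtype=float)
    cos = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a slightly greenish estimate versus a neutral ground-truth illuminant.
print(angular_error([0.9, 1.0, 0.8], [1.0, 1.0, 1.0]))
```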

A Development of Unicode-based Multi-lingual Namecard Recognizer (Unicode 기반 다국어 명함인식기 개발)

  • Jang, Dong-Hyeub;Lee, Jae-Hong
    • The KIPS Transactions:PartB
    • /
    • v.16B no.2
    • /
    • pp.117-122
    • /
    • 2009
  • We developed a multi-lingual namecard recognizer for building global client management systems. First, we created a Unicode-based character image database for character recognition and learning across multiple languages, and applied various color image processing techniques to obtain cleaner data from namecard images acquired by different input devices. Then, by applying a multi-layer perceptron neural network, character recognition adapted to each language type, and post-processing that uses keyword databases built for the individual languages, we increased the recognition rate for multi-lingual namecards.
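
The per-character recognition step can be pictured as a multi-layer perceptron classifying fixed-size glyph images into Unicode code points. The sketch below uses scikit-learn with randomly generated placeholder glyphs and labels; the glyph size, label range, and network size are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training set: 16x16 binarized glyphs labeled by Unicode code point.
rng = np.random.default_rng(0)
glyphs = rng.integers(0, 2, size=(200, 16 * 16)).astype(float)
labels = rng.integers(0xAC00, 0xAC10, size=200)      # a small Hangul range

# Multi-layer perceptron standing in for the per-character recognizer.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(glyphs, labels)

# Recognize one segmented character image and map the label back to a character.
predicted = int(clf.predict(glyphs[:1])[0])
print(chr(predicted))
```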

Automatic Extraction of Component Window for Auto-Teaching of PCB Assembly Inspection Machines (PCB 조립검사기의 자동티칭을 위한 부품윈도우 자동추출 방법)

  • Kim, Jun-Oh;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.11
    • /
    • pp.1089-1095
    • /
    • 2010
  • We propose an image segmentation method for an auto-teaching system of PCB (Printed Circuit Board) assembly inspection machines. The inspection machine acquires images of all components on the PCB and compares each image with its standard image to find assembly errors such as misalignment, inverse polarity, and tombstoning. The component window, which is the area of a component to be acquired by the camera, is one of the teaching data required for operating the inspection machine. To reduce the teaching time of the machine, we develop an image processing method that extracts the component window automatically from the image of the PCB. The proposed method segments the component window by excluding the soldering parts as well as the board background. We binarize the input image using the HSI color model because it is difficult to discriminate between components and background in RGB colors. A linear combination of the binarized images then enhances the component window against the background. Using the horizontal and vertical projections of the histogram, we finally obtain the component window. Experimental results are presented to verify the usefulness of the proposed method.
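
A rough sketch of the segmentation idea (not the authors' exact pipeline): binarize in an HSV space, here standing in for the HSI model, to suppress the saturated board background and the bright, low-saturation solder, then locate the component window from horizontal and vertical projection profiles. The thresholds and the color rule are assumptions.

```python
import numpy as np
import cv2

def component_window(bgr_image, sat_thresh=0.35, val_thresh=0.55):
    """Return (top, bottom, left, right) of the component window found by
    color-based binarization followed by histogram projection profiles."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(float)
    s = hsv[..., 1] / 255.0
    v = hsv[..., 2] / 255.0
    # Assumed foreground rule: neither saturated board background nor bright solder.
    mask = (s < sat_thresh) & (v < val_thresh)
    col_profile = mask.sum(axis=0)                 # vertical projection
    row_profile = mask.sum(axis=1)                 # horizontal projection
    cols = np.where(col_profile > 0.1 * col_profile.max())[0]
    rows = np.where(row_profile > 0.1 * row_profile.max())[0]
    if cols.size == 0 or rows.size == 0:
        return None
    return rows[0], rows[-1], cols[0], cols[-1]
```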