• Title/Summary/Keyword: YCbCr Image


CRT-Based Color Image Zero-Watermarking on the DCT Domain

  • Kim, HyoungDo
    • International Journal of Contents / v.11 no.3 / pp.39-46 / 2015
  • When host images are watermarked with the CRT (Chinese Remainder Theorem), the embedded watermarks remain robust even when the host images are damaged, because the remainders stay unchanged as long as the changes incurred by an attack fall within a certain range. The same advantage can be attained by "zero-watermarking," which does not change the host images in any way. This paper proposes an improved CRT-based zero-watermarking scheme for color images in the DCT (Discrete Cosine Transform) domain. In the scheme, RGB images are converted into YCbCr images, and one channel is transformed block by block with the DCT. A key is then computed from the DC value and three low-frequency AC values of each DCT block using the CRT, and the final watermark key is obtained by combining this key four times with a scrambled watermark image. When watermark images are extracted, each bit is determined by majority voting. Experiments show that the extracted watermark images are robust against a number of common attacks such as sharpening, blurring, JPEG lossy compression, and cropping. (A minimal sketch of the block-DCT key and majority-voting steps follows this entry.)
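
A rough Python sketch of the pipeline above, under stated assumptions: the abstract does not give the exact CRT key construction, so `key_bit_from_block` below is a simplified placeholder (parity of the quantized DC and three low-frequency AC values), not the authors' formula; only the zero-watermarking idea of storing a key, leaving the host untouched, and voting over redundant copies is illustrated.

```python
import numpy as np
from scipy.fft import dctn

def rgb_to_y(rgb):
    """BT.601 luma from an HxWx3 RGB array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def key_bit_from_block(block):
    """Placeholder key bit from the DC and three low-frequency AC values."""
    c = dctn(block, norm="ortho")
    vals = np.round([c[0, 0], c[0, 1], c[1, 0], c[1, 1]]).astype(int)
    return int(np.abs(vals).sum()) & 1

def zero_watermark_key(rgb, wm_bits, block=8):
    """Combine block-wise key bits with (scrambled) watermark bits; the host stays untouched."""
    y = rgb_to_y(rgb.astype(np.float64))
    h, w = (y.shape[0] // block) * block, (y.shape[1] // block) * block
    bits = [key_bit_from_block(y[i:i + block, j:j + block])
            for i in range(0, h, block) for j in range(0, w, block)]
    return np.array(bits[:len(wm_bits)]) ^ np.asarray(wm_bits)

def extract_by_majority(stored_keys, recomputed_bits):
    """Recover watermark bits from several redundant keys by majority voting."""
    votes = np.array([k ^ recomputed_bits for k in stored_keys])
    return (votes.sum(axis=0) * 2 >= len(stored_keys)).astype(int)
```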

Text Extraction Algorithm in Complex Images using Adaptive Edge detection

  • Shin, Seong;Kim, Sung-Dong;Baek, Young-Hyun;Moon, Sung-Ryong
    • Proceedings of the IEEK Conference / 2007.07a / pp.251-252 / 2007
  • This paper proposes a text extraction algorithm that uses the Coiflet wavelet, the YCbCr color model, and the closed-curve edge feature of an adaptive LoG operator in order to overcome the weaknesses of existing approaches, which are fragile to complex backgrounds, varying illumination, cluttered lines, and similarity between text and background colors. The algorithm is evaluated on natural images that contain text regions of various sizes, resolutions, and slants, and it is confirmed to outperform an existing extraction algorithm on the same images. (A brief sketch of the YCbCr conversion and LoG edge step follows this entry.)
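
A brief Python sketch of two building blocks named in the abstract, converting RGB to YCbCr and computing LoG edges on the luma channel; this is an illustration under stated assumptions, not the paper's algorithm, and the Coiflet-wavelet stage and adaptive thresholding are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr (float arrays)."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def log_edge_map(rgb, sigma=2.0, thresh=2.0):
    """Candidate text edges from the magnitude of the LoG response on Y."""
    y = rgb_to_ycbcr(rgb)[..., 0]
    return np.abs(gaussian_laplace(y, sigma=sigma)) > thresh
```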


Robust Face Detection Using Illumination-Compensation and Morphological Processing

  • Yun, Jae-Ung;Lee, Hyung-Jin;Paul, Anjan Kumar;Baek, Joong-Hwan
    • Proceedings of the IEEK Conference / 2007.07a / pp.329-330 / 2007
  • This paper presents a simple and robust face detection algorithm that can be used for video summarization. We first apply an illumination-compensation step to reduce the effect of brightness variation on the image. We then analyze candidate face regions by segmenting skin color in the YCbCr space. Finally, a morphological closing operation is applied to refine the detection. Experimental results demonstrate the effectiveness of the proposed face detector, which achieves an average precision of 96.7%. (A sketch of the YCbCr skin mask and closing step follows this entry.)
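
A minimal Python/OpenCV sketch of the skin-color segmentation and morphological closing steps mentioned above; the Cb/Cr ranges are commonly cited defaults rather than the authors' thresholds, and the illumination compensation is reduced to a simple gray-world scaling for illustration.

```python
import cv2
import numpy as np

def gray_world_compensation(bgr):
    """Crude illumination compensation: scale channels toward a common mean."""
    means = bgr.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / np.maximum(means, 1e-6)
    return np.clip(bgr.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def skin_mask_ycbcr(bgr, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from Cb/Cr thresholds, cleaned up with closing."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Usage (example path): mask = skin_mask_ycbcr(gray_world_compensation(cv2.imread("frame.jpg")))
```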


GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk;Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • As interest in deep learning grows, techniques for controlling the color of images in the image processing field are evolving with it. However, there is no clear standard for color, and it is not easy to represent color by itself, as a color palette does. In this paper, we propose a novel color-palette extraction system that fine-tunes chroma with reinforcement learning, which helps recognize the color combination that best represents an input image. First, RGBY images are passed through a backbone network, initialized with weights transferred from a well-trained super-resolution convolutional neural network, to create feature maps. Second, the feature maps are fed to three fully connected layers that generate the color palette within a generative adversarial network (GAN). Third, a reinforcement learning stage adjusts only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color-palette extraction methods, achieving an accuracy of 0.9140. (A sketch of the Y-nudging adjustment follows this entry.)
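
A hedged Python sketch of the "move the Y component slightly up or down" action described above, applied to a single palette color; the reinforcement learning policy that chooses the direction and step size is not detailed in the abstract, so only the color-space round trip is illustrated.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB triple -> YCbCr triple."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.168736, -0.331264, 0.5],
                  [0.5, -0.418688, -0.081312]])
    return m @ np.asarray(rgb, dtype=np.float64) + np.array([0.0, 128.0, 128.0])

def ycbcr_to_rgb(ycbcr):
    """Inverse BT.601 conversion, clipped to the valid RGB range."""
    y, cb, cr = ycbcr[0], ycbcr[1] - 128.0, ycbcr[2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip([r, g, b], 0, 255)

def nudge_luma(rgb_color, delta):
    """Apply a small +/- delta to the Y component and convert back to RGB."""
    ycbcr = rgb_to_ycbcr(rgb_color)
    ycbcr[0] = np.clip(ycbcr[0] + delta, 0, 255)
    return ycbcr_to_rgb(ycbcr)

# Example: nudge one palette color up by two luma steps.
print(nudge_luma([200, 120, 80], +2.0))
```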

A Robust Face Detection Method Based on Skin Color and Edges

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.9 no.1 / pp.141-156 / 2013
  • In this paper we propose a method to detect human faces in color images. Many existing systems use a window-based classifier that scans the entire image for the presence of a human face, and such systems suffer from scale variation, pose variation, illumination changes, etc. Here, we propose a lighting-insensitive face detection method based upon the edge and skin-tone information of the input color image. First, image enhancement is performed, especially if the image was acquired under unconstrained illumination conditions. Next, skin segmentation in the YCbCr and RGB spaces is conducted, and the result is refined using the skin-tone percentage index method. The edges of the input image are combined with the skin-tone image to separate all non-face regions from candidate faces. Candidate verification using primitive shape features of the face is then applied to decide which candidate regions correspond to faces. The advantage of the proposed method is that it can detect faces of different sizes, in different poses, and with different expressions under unconstrained illumination conditions. (A sketch of combined YCbCr/RGB skin segmentation follows this entry.)
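
A short Python sketch of combined RGB and YCbCr skin segmentation, the step the abstract describes; the thresholds follow widely published heuristics and are not the paper's own rules, and the skin-tone percentage index refinement is not reproduced.

```python
import numpy as np

def skin_mask_rgb(rgb):
    """Heuristic RGB skin rule on an HxWx3 uint8 image."""
    r, g, b = [rgb[..., i].astype(np.int32) for i in range(3)]
    spread = rgb.max(-1).astype(np.int32) - rgb.min(-1).astype(np.int32)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

def skin_mask_ycbcr(rgb):
    """Heuristic Cb/Cr box in the YCbCr space."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def combined_skin_mask(rgb):
    """A pixel is kept only if it passes both the RGB and the YCbCr rule."""
    return skin_mask_rgb(rgb) & skin_mask_ycbcr(rgb)
```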

CNN-Based Fake Image Identification with Improved Generalization

  • Lee, Jeonghan;Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1624-1631 / 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually discriminate processed (or tampered) images from real images. As the risk of fake images being misused for crime increases, the importance of image forensics for identifying fake images is growing. Various deep learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning relies strongly on the given training data, such identifiers are very vulnerable when evaluated on data they have never seen. Therefore, we look for ways to improve the generalization ability of deep learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can classify real and fake images only for specific contents and fails for others. Next, color spaces other than RGB were exploited; that is, fake image identification was attempted in color spaces not considered when the fake images were created, such as HSV and YCbCr. Finally, dropout, which is commonly used to improve the generalization of neural networks, was applied. Experimental results confirm that conversion to the HSV color space is the most effective measure and that combining it with the enlarged training dataset greatly improves the accuracy and generalization ability of deep learning-based identifiers on fake images that have never been seen before. (A sketch of the color-space preprocessing and a dropout-equipped classifier follows this entry.)
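
A toy Python/PyTorch sketch of the two generalization measures mentioned above, feeding a non-RGB color space (HSV here) to the network and using dropout; the network is an illustrative binary classifier, not the architecture evaluated in the paper.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def to_hsv_tensor(bgr):
    """Convert a BGR uint8 image to a normalized HSV tensor of shape (1, 3, H, W)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    return torch.from_numpy(hsv).permute(2, 0, 1).unsqueeze(0)

class TinyFakeImageNet(nn.Module):
    """Minimal real/fake classifier with dropout before the final layer."""
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Dropout(p_drop), nn.Linear(32, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: score one blank image with random weights, for illustration only.
logits = TinyFakeImageNet()(to_hsv_tensor(np.zeros((128, 128, 3), dtype=np.uint8)))
```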

New Prefiltering Methods based on a Histogram Matching to Compensate Luminance and Chrominance Mismatch for Multi-view Video

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.127-136 / 2010
  • In multi-view video, illumination mismatch between neighboring views can occur because of the different locations of the cameras, imperfect camera calibration, and so on. Such discrepancies degrade the performance of multi-view video coding, because inter-view prediction, which references pictures obtained from neighboring views at the same time instant, no longer matches well. In this paper, we propose an efficient histogram-based prefiltering algorithm that compensates for mismatches between the luminance and chrominance components of multi-view video in order to improve its coding efficiency. To compensate for illumination variation, all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching. A cosited filter, which is used for chroma subsampling in many video encoding schemes, is applied to each color component before histogram matching to improve its performance. The histogram matching is carried out in the RGB color space after conversion from the YCbCr color space, using an effective color-conversion technique that takes the edge direction and the pixel-value range of the image into account. Experimental results show that the proposed algorithm improves the compression ratio compared with other methods. (A sketch of per-channel histogram matching follows this entry.)
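
A compact Python sketch of per-channel histogram matching, the core operation of the prefiltering described above; the cosited chroma filter and the edge-aware color-space conversion are omitted, so this is a reference illustration rather than the paper's full method.

```python
import numpy as np

def match_histogram_channel(source, reference):
    """Remap 'source' (uint8) so its histogram approximates that of 'reference'."""
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source level, pick the reference level with the closest CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]

def match_histogram_rgb(source, reference):
    """Channel-wise matching for an HxWx3 uint8 image pair."""
    return np.stack([match_histogram_channel(source[..., c], reference[..., c])
                     for c in range(3)], axis=-1)
```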

Face Recognition System Based on the Embedded LINUX (Face Recognition Using Eye-Region Comparison Based on Embedded Linux)

  • Bae, Eun-Dae;Kim, Seok-Min;Nam, Boo-Hee
    • Proceedings of the KIEE Conference / 2006.04a / pp.120-121 / 2006
  • In this paper, we design a face recognition system based on embedded Linux, with the aim of recognizing faces more accurately on an embedded platform. First, the contrast of the face image is adjusted with a lighting-compensation method, and the skin and lip colors are found from the YCbCr values of the compensated image. To take advantage of both feature-based and appearance-based methods, these are applied to the eyes, the part of the human face with the highest recognition rate. For eye detection, which is the most important component of the face recognition, we compute the horizontal gradient of the face image and locate its maximum; this region is then resized to fit the stored eye image to be compared. Feature vectors are extracted from the resized region using the continuous wavelet transform, and a PNN classifier decides whether the two images belong to the same person. To minimize the error rate, the accuracy is analyzed with respect to rotation and movement of the face. Finally, we present several cases that validate both the feature-vector extraction and the accuracy of the comparison method. (A sketch of the horizontal-gradient eye localization follows this entry.)
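
A small Python sketch of locating the eye row with the horizontal gradient of the face image, as the abstract describes; only the projection step is shown, the exact gradient and selection rule may differ from the authors' implementation, and the wavelet features and PNN comparison are omitted.

```python
import numpy as np

def eye_row_from_horizontal_gradient(gray_face):
    """Return the row index where the horizontal gradient energy peaks."""
    gray = gray_face.astype(np.float64)
    grad_x = np.abs(np.diff(gray, axis=1))   # horizontal intensity changes
    row_energy = grad_x.sum(axis=1)          # project the energy onto rows
    return int(np.argmax(row_energy))

# Example on a synthetic face-sized array.
face = np.random.default_rng(0).integers(0, 256, size=(120, 100))
print(eye_row_from_horizontal_gradient(face))
```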


Efficient Object Localization using Color Correlation Back-projection

  • Lee, Yong-Hwan;Cho, Han-Jin;Lee, June-Hwan
    • Journal of Digital Convergence / v.14 no.5 / pp.263-271 / 2016
  • Localizing an object in an image is a common task in the field of computer vision. Because existing methods detect only a single object per image, their usefulness is limited in practice, where similar objects often appear together in a picture. This paper proposes an efficient object localization method for image recognition. The proposed method applies color-correlation back-projection in the YCbCr chromaticity space to the object localization problem. The algorithm lets users detect the primary location of an object within the image, and candidate regions can be detected accurately without any information about the number of objects. To evaluate the performance of the proposed algorithm, we measure the success rate of object localization on a commonly used image database; experimental results show an improvement of 21% in success ratio. This study builds on spatially localized color features and correlation-based localization, and the main contribution of this paper is a different way of using the correlogram for object localization. (A sketch of histogram back-projection on the chroma channels follows this entry.)
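
A Python/OpenCV sketch of plain histogram back-projection on the chroma channels, a simpler relative of the color-correlation (correlogram-based) back-projection used in the paper; it illustrates how a chroma model of the target can be projected onto a scene to localize a candidate region.

```python
import cv2

def chroma_backprojection(target_bgr, scene_bgr, bins=32):
    """Back-project the target's Cr/Cb histogram onto the scene image."""
    target = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2YCrCb)
    scene = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2YCrCb)
    # Histogram over the two chroma channels (Cr, Cb), ignoring luma.
    hist = cv2.calcHist([target], [1, 2], None, [bins, bins], [0, 256, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([scene], [1, 2], hist, [0, 256, 0, 256], 1)
    backproj = cv2.GaussianBlur(backproj, (15, 15), 0)  # smooth the response map
    _, _, _, max_loc = cv2.minMaxLoc(backproj)          # strongest candidate
    return backproj, max_loc                            # (x, y) of the peak
```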

Optimized Hardware Design using Sobel and Median Filters for Lane Detection

  • Lee, Chang-Yong;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence / v.9 no.1 / pp.115-125 / 2019
  • In this paper, an image is received from a camera and lanes are detected. There are various ways to detect lanes; for edge detection, the Sobel and Canny edge detectors are the most commonly used. The hardware design minimizes the use of multiplication and division. The design is tested with footage from a black-box (dash-cam) camera mounted on a vehicle. Because the top of a black-box image is mostly background, it is excluded from the computation. To speed up processing, YCbCr values are computed from the image and only the data corresponding to the desired lane colors, white and yellow, are kept for lane detection. A median filter is used to remove noise from the images. Median filters excel at noise rejection, but they generally take a long time because all values must be compared; in this paper, the median-filter result is obtained using additions, which shortens the processing time. The Sobel edge detector is faster but more sensitive to noise than the Canny edge detector, so these shortcomings are addressed with complementary algorithms. The data are also organized into parallel processing pipelines. To reduce memory size, the system does not store all the data at each step but uses four line buffers: three line buffers perform the mask operations while one line buffer stores new data at the same time. Through this structure, memory can be used with about six times the processing speed and in about 33% greater quantity than with the other methods presented in this paper. The target operating frequency is 50 MHz, at which the design can process 2,157 fps for 640x360 images, 540 fps for HD images, and 240 fps for Full HD images, which covers most 30 fps video as well as 60 fps video. At the maximum operating frequency, an even larger number of frames can be processed. (A software reference sketch of the YCbCr lane-color mask and Sobel step follows this entry.)
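
A hedged software reference in Python/OpenCV for parts of the pipeline above: the YCbCr white/yellow lane mask, the median filter, and the Sobel edge step. The thresholds are illustrative guesses, and the hardware-specific tricks (line buffers, addition-based median) are not modeled.

```python
import cv2
import numpy as np

def lane_edges(bgr, roi_top_fraction=0.5):
    """Sobel edge magnitude restricted to white/yellow lane-colored pixels."""
    h = bgr.shape[0]
    roi = bgr[int(h * roi_top_fraction):]            # drop the mostly-background top
    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    white = y > 180                                   # bright pixels (white lane candidates)
    yellow = (cb < 110) & (cr > 135)                  # illustrative yellow chroma box
    mask = (white | yellow).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 5)                    # noise removal
    gx = cv2.Sobel(y, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(y, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    magnitude[mask == 0] = 0                          # keep edges only on lane colors
    return magnitude
```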