• Title/Summary/Keyword: RGB color image


Adaptive Smoothing Algorithm Based on Censoring for Removing False Color Noise Caused by De-mosaicing on Bayer Pattern CFA (Bayer 패턴의 de-mosaicing 과정에서 발생하는 색상잡음 제거를 위한 검열기반 적응적 평탄화 기법)

  • Hwang, Sung-Hyun;Kim, Chae-Sung;Moon, Ji-He
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.403-406
    • /
    • 2005
  • This paper proposes a method for removing false color noise (FCN) generated during de-mosaicing of RGB Bayer pattern images. In image sensors that use a Bayer pattern color filter array (CFA), de-mosaicing is performed to recover full RGB color data at each pixel. FCN arises where silhouettes or color contrasts are sharp, i.e., locally along edges inside the image. The proposed method removes it by converting the image from RGB to YCbCr space and smoothing there; for edges where different colors meet, a censoring-based smoothing technique is applied to minimize color blurring. A rough code sketch of this idea appears after this entry.

  • PDF
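
As an illustration of the idea above (not the authors' censoring-based filter), the following sketch converts an image to YCbCr with OpenCV, detects edges on the luma channel, and smooths only the chroma planes near those edges. The Canny thresholds, dilation, and Gaussian kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

def suppress_false_color(rgb, canny_thresh=(50, 150), ksize=5):
    """Sketch: smooth chroma (Cr/Cb) only near luma edges, leaving luma intact."""
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # False color noise concentrates along edges, so build an edge mask from luma.
    edges = cv2.Canny(y, *canny_thresh)
    mask = cv2.dilate(edges, np.ones((3, 3), np.uint8)) > 0

    # Smooth the chroma planes, then blend the smoothed values in only at edges.
    cr_s = cv2.GaussianBlur(cr, (ksize, ksize), 0)
    cb_s = cv2.GaussianBlur(cb, (ksize, ksize), 0)
    cr = np.where(mask, cr_s, cr)
    cb = np.where(mask, cb_s, cb)

    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2RGB)
```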

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the scene near it, while the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; automatic driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
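
A registration of this kind is typically implemented by projecting each LIDAR return into the camera frame and writing its range into the corresponding pixel. The sketch below is a generic version under assumed inputs: a pinhole intrinsic matrix K, a 4x4 extrinsic transform T_cam_lidar, and planar 2D LIDAR returns; none of these names or values come from the paper.

```python
import numpy as np

def lidar_to_depthmap(angles, ranges, K, T_cam_lidar, img_shape):
    """Project 2D LIDAR returns (angle, range) into a sparse camera depthmap."""
    # LIDAR points in the LIDAR frame (z = 0 for a planar scanner), homogeneous 4 x N.
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges),
                    np.ones_like(ranges)])

    pts_cam = (T_cam_lidar @ pts)[:3]            # transform into the camera frame
    in_front = pts_cam[2] > 0                    # keep points in front of the camera
    uvw = K @ pts_cam[:, in_front]               # pinhole projection
    u = (uvw[0] / uvw[2]).astype(int)
    v = (uvw[1] / uvw[2]).astype(int)
    z = pts_cam[2, in_front]

    h, w = img_shape
    depth = np.zeros((h, w), dtype=np.float32)   # 0 means "no measurement"
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[keep], u[keep]] = z[keep]
    return depth
```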

Digital Watermarking on the Color coordinate (칼라 좌표계에서의 디지털 워크마킹)

  • Lee Chang-Soon;Jung Song-Ju
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.10 no.2
    • /
    • pp.102-108
    • /
    • 2005
  • The CIELAB color space represents color with one lightness component and two chromaticity components and is closely matched to the human visual system, whereas display devices such as computer monitors render images in RGB coordinates. We propose a technique for inserting a visually recognizable watermark into the middle-frequency domain of an image: the RGB image is transformed into CIELAB coordinates, which reflect the characteristics of human vision, and the a* component is then transformed with the DFT (Discrete Fourier Transform) for embedding. An illustrative sketch of such an embedding follows this entry.

  • PDF
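
A minimal sketch of this kind of embedding, under assumptions not stated in the abstract (the annular mid-frequency band, the embedding strength, and the use of scikit-image for the CIELAB conversion), might look as follows; it is not the authors' exact scheme.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def embed_mark(rgb, mark, strength=2.0, band=(0.15, 0.35)):
    """Sketch: add a visually recognizable mark to the mid-frequency DFT of a*.

    'mark' is a {0, 1} array with the same height and width as the image.
    """
    lab = rgb2lab(rgb)                           # L*, a*, b*; a* carries the mark
    A = np.fft.fftshift(np.fft.fft2(lab[..., 1]))

    h, w = A.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    mid = (r > band[0]) & (r < band[1])          # annular mid-frequency band

    # Scale the mark relative to the average spectral magnitude before adding it.
    A = A + strength * mid * mark * np.abs(A).mean()

    lab[..., 1] = np.real(np.fft.ifft2(np.fft.ifftshift(A)))
    return lab2rgb(lab)
```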

Object tracking algorithm through RGB-D sensor in indoor environment (실내 환경에서 RGB-D 센서를 통한 객체 추적 알고리즘 제안)

  • Park, Jung-Tak;Lee, Sol;Park, Byung-Seo;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.248-249
    • /
    • 2022
  • In this paper, we propose a method for classifying and tracking objects based on information about multiple users obtained with RGB-D cameras. The 3D geometry and color information acquired by the RGB-D camera are stored for each user. We then propose a user classification and location tracking algorithm that computes, over the entire image, the similarity between users in the current frame and the previous frame using each user's location and appearance. A simplified matching sketch follows this entry.

  • PDF
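
One common way to realize the frame-to-frame similarity described above is a weighted score over position distance and appearance (color histogram) similarity, matched greedily. The sketch below uses hypothetical per-user dictionaries with 'pos' and 'hist' fields and assumed weights; the paper's exact similarity measure is not specified in the abstract.

```python
import numpy as np

def match_users(prev, curr, w_pos=0.5, w_app=0.5, max_dist=1.0):
    """Greedy frame-to-frame matching by position + appearance similarity (sketch)."""
    matches, used = {}, set()
    for cid, c in curr.items():
        best, best_score = None, -1.0
        for pid, p in prev.items():
            if pid in used:
                continue
            # Position term: 3D distance from the depth data, mapped to [0, 1].
            d = np.linalg.norm(np.asarray(c["pos"]) - np.asarray(p["pos"]))
            s_pos = max(0.0, 1.0 - d / max_dist)
            # Appearance term: cosine similarity of RGB color histograms.
            hc, hp = np.asarray(c["hist"]), np.asarray(p["hist"])
            s_app = float(hc @ hp / (np.linalg.norm(hc) * np.linalg.norm(hp) + 1e-9))
            score = w_pos * s_pos + w_app * s_app
            if score > best_score:
                best, best_score = pid, score
        if best is not None:
            matches[cid] = best
            used.add(best)
    return matches
```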

Vehicle Color Recognition Using Neural-Network (신경회로망을 이용한 차량의 색상 인식)

  • Kim, Tae-hyung;Lee, Jung-hwa;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.731-734
    • /
    • 2009
  • In this paper, we propose a method for recognizing the color of a vehicle in an image containing one. A color feature vector is extracted from the vehicle region and classified with a multi-layer perceptron trained by the backpropagation learning algorithm. The feature vector fed to the neural network is built from both the RGB and HSI color models. The vehicle color is recognized as one of the seven colors most commonly found on vehicles: white, silver, black, red, yellow, blue, and green. The color recognition performance was evaluated experimentally on images containing vehicles. A small sketch of such a classifier follows this entry.

  • PDF
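
A minimal sketch of such a classifier, assuming mean RGB plus mean HSV values as a 6-dimensional color feature (HSV standing in for HSI) and scikit-learn's MLPClassifier as the backpropagation-trained perceptron, is shown below; the layer size and preprocessing are illustrative.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

COLORS = ["white", "silver", "black", "red", "yellow", "blue", "green"]  # 7 classes

def color_feature(rgb_patch):
    """Mean RGB plus mean HSV of a vehicle patch as a 6-dim color feature."""
    hsv = cv2.cvtColor(rgb_patch, cv2.COLOR_RGB2HSV)
    return np.concatenate([rgb_patch.reshape(-1, 3).mean(0),
                           hsv.reshape(-1, 3).mean(0)]) / 255.0

# Multi-layer perceptron trained by backpropagation; the layer size is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
# clf.fit(np.stack([color_feature(p) for p in patches]), labels)   # labels: 0..6
# predicted = COLORS[clf.predict([color_feature(test_patch)])[0]]
```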

A Color Image Segmentation Algorithm based on Region Merging using Hue Differences (색상 차를 이용하는 영역 병합에 기반한 칼라영상 분할 알고리즘)

  • 박영식
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.1
    • /
    • pp.63-71
    • /
    • 2003
  • This paper describes a color image segmentation algorithm based on region merging with hue difference as a restrictive condition. The proposed algorithm first over-segments the image in RGB space using mathematical morphology and a modified watershed algorithm in order to preserve region contour information. The final segmentation is then obtained by repeatedly merging regions, using hue difference as the restrictive condition. This choice follows the human visual system's reliance on hue, saturation, and intensity: the hue difference between two regions is a more important factor than overall color difference as long as intensity is not low. Simulation results show that the proposed algorithm provides efficient segmentation with a predefined number of regions for various color images.
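
The merging criterion can be illustrated with a small sketch: two over-segmented regions are merged only if their circular hue difference is below a threshold, falling back to intensity when both regions are too dark for hue to be reliable. The thresholds and the region representation are assumptions, not values from the paper.

```python
import numpy as np

def hue_difference(h1, h2):
    """Circular hue difference in degrees (hue wraps around at 360)."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def should_merge(region_a, region_b, hue_thresh=15.0, low_intensity=40.0):
    """Merge two over-segmented regions only if their mean hues are close.

    Each region is a dict with mean 'hue' (degrees) and mean 'intensity' (0-255).
    When both regions are very dark, hue is unreliable, so fall back to intensity.
    """
    if region_a["intensity"] < low_intensity and region_b["intensity"] < low_intensity:
        return abs(region_a["intensity"] - region_b["intensity"]) < 10.0
    return hue_difference(region_a["hue"], region_b["hue"]) < hue_thresh
```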

Full-Color AMOLED with RGBW Pixel Pattern

  • Arnold, A.D.;Hatwar, T.K.;Hettel, M.V.;Kane, P.J.;Miller, M.E.;Murdoch, M.J.;Spindler, J.P.;Slyke, S.A. Van;Mameno, K.;Nishikawa, R.;Omura, T.;Matsumoto, S.
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2004.08a
    • /
    • pp.808-811
    • /
    • 2004
  • A full-color AMOLED display with an RGBW color filter pattern has been fabricated. Displays in this format require about half the power of analogous RGB displays. RGBW and RGB 2.16-inch diagonal displays with average power consumptions of 180 mW and 340 mW, respectively, are demonstrated for a set of standard digital still camera images at a luminance of 100 cd/m². In both cases, a white-emitting AMOLED is used as the light source. The higher efficiency of the RGBW format arises because a large fraction of a typical image can be rendered as white, and the white sub-pixel in an RGBW AMOLED display is highly efficient owing to the absence of any color filter. RGBW and RGB AMOLED displays have the same color gamut and, aside from the power consumption difference, are indistinguishable. A toy RGB-to-RGBW sketch follows this entry.

  • PDF
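
A common (though not necessarily the authors') way to drive an RGBW panel is to route the achromatic part of each pixel through the white sub-pixel. The toy sketch below extracts that white component as min(R, G, B) and reports the fraction of total drive it carries, which hints at why largely white images save power.

```python
import numpy as np

def rgb_to_rgbw(rgb):
    """Naive RGB -> RGBW mapping: route the common (gray) part through W."""
    rgb = rgb.astype(np.float32)
    w = rgb.min(axis=-1, keepdims=True)          # achromatic part of each pixel
    return np.concatenate([rgb - w, w], axis=-1)

def white_fraction(rgb):
    """Fraction of total sub-pixel drive carried by the unfiltered white channel."""
    rgbw = rgb_to_rgbw(rgb)
    return float(rgbw[..., 3].sum() / (rgbw.sum() + 1e-9))
```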

Optimum Parameter Ranges on Highly Preferred Images: Focus on Dynamic Range, Color, and Contrast (선호도 높은 이미지의 최적 파라미터 범위 연구: 다이내믹 레인지, 컬러, 콘트라스트를 중심으로)

  • Park, Hyung-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.1
    • /
    • pp.9-18
    • /
    • 2013
  • To measure the parameters of consumers' preferred image quality, this research proposes three image quality assessment factors: dynamic range, color, and contrast, which combine physical image quality measures with psychological characteristics identified in previous research. We identified specific ranges for the preferred image quality metrics. The Digital Zone System used for dynamic range generally shows ranges of 6~10 stops in portrait, nightscape, and landscape images. Total RGB mean values are 67.2~215.2 for portrait, 46~142 for nightscape, and 52~185 for landscape; portraits have the widest range, followed by landscape and nightscape. Total scene contrast ranges are 196~589 for portrait, 131~575 for nightscape, and 104~767 for landscape. In portraits, skin tone RGB mean values fall in Zone V by the exposure standard, but in practice consumers' preferred skin tone level is in Zone IV. The total scene versus main subject contrast ratio is about 1:1.2, from which we conclude that consumers prefer the out-of-focus effect in portraits. The practical, specific dynamic range, color, and contrast ranges of preferred image quality obtained here are expected to positively influence product development.
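
Rough analogues of the reported metrics can be computed directly from an 8-bit RGB image, as sketched below; mapping these quantities onto the paper's exact definitions (e.g., of scene contrast and Digital Zone System stops) is an assumption.

```python
import numpy as np

def quality_metrics(rgb):
    """Rough analogues of the reported metrics: total RGB mean, contrast span, stops."""
    rgb = rgb.astype(np.float32)
    total_rgb_mean = float(rgb.reshape(-1, 3).mean())        # cf. 67.2~215.2 for portraits

    # Scene "contrast" taken here as the spread of per-pixel RGB sums (0..765).
    pixel_sums = rgb.sum(axis=-1)
    contrast_span = float(pixel_sums.max() - pixel_sums.min())

    # Dynamic range in stops: log2 ratio of brightest to darkest non-zero luminance.
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    nonzero = luma[luma > 0]
    lo = nonzero.min() if nonzero.size else 1e-3
    stops = float(np.log2(luma.max() / lo))
    return total_rgb_mean, contrast_span, stops
```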

A Color Image Segmentation Using Mean Shift and Region merging method (Mean Shift와 영역병합을 이용한 칼라 영상 분할)

  • Kwak, Nae-Joung;Kwon, Dong-Jin;Kim, Young-Gil
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.05a
    • /
    • pp.401-404
    • /
    • 2006
  • The mean shift procedure is applied to data points in the joint spatial-range domain and achieves high-quality segmentation. However, the result depends on the chosen spatial and range parameters, and a drawback is that small parameter values break the image into many small regions. To address this, we propose a method that groups similar regions by region merging on the over-segmented image. The proposed method converts the over-segmented image from RGB color space to HSI color space and merges similar regions using hue information; to preserve edge information, merging constraints decide whether two regions may be merged. Regions not handled in HSI space are then merged in RGB color space. Experimental results show the superiority of the region segmentation results. A condensed sketch follows this entry.

  • PDF
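
A condensed sketch of this pipeline, using OpenCV's pyrMeanShiftFiltering for the mean shift step and a crude hue quantization in place of the paper's constrained pairwise region merging, could look like this; the bandwidths and bin count are illustrative.

```python
import cv2
import numpy as np

def mean_shift_then_hue_merge(rgb, sp=10, sr=20, hue_bins=12):
    """Mean shift pre-segmentation followed by a crude hue-based grouping (sketch)."""
    bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
    # Spatial (sp) and range (sr) bandwidths; small values over-segment the image.
    shifted = cv2.pyrMeanShiftFiltering(bgr, sp, sr)

    # Group the many small mean-shift regions by quantized hue, standing in for
    # the paper's pairwise region merging in HSI space.
    hsv = cv2.cvtColor(shifted, cv2.COLOR_BGR2HSV)
    hue_label = (hsv[..., 0].astype(np.int32) * hue_bins) // 180   # OpenCV hue is 0..179
    return shifted, hue_label
```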

Creating Atmospheric Scattering Corrected True Color Image from the COMS/GOCI Data (천리안위성 해양탑재체 자료를 이용한 대기산란 효과가 제거된 컬러합성 영상 제작)

  • Lee, Kwon-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.16 no.1
    • /
    • pp.36-46
    • /
    • 2013
  • The Geostationary Ocean Color Imager (GOCI), the first geostationary ocean color observation instrument, launched in 2010 on board the Communication, Ocean, and Meteorological Satellite (COMS), has been generating operational level 1 data. This study describes a methodology for creating GOCI true color images and the associated data processing software, the GOCI RGB maker. The algorithm uses a generic atmospheric correction and reprojection technique to produce the color composite image. The program is designed for educational purposes, allowing the user to choose the region of interest and the image size. Distributing the software to the public should maximize understanding and utilization of the GOCI data, and images produced from these geostationary observations are expected to be an excellent tool for monitoring environmental changes.
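
A generic true-color composite of this kind (not the GOCI RGB maker itself) can be sketched with dark-object subtraction standing in for the atmospheric scattering correction and a percentile stretch for display; the band choice and percentiles are assumptions.

```python
import numpy as np

def true_color_composite(red, green, blue, percentile=(2, 98)):
    """Generic true-color composite with dark-object subtraction and a stretch.

    red/green/blue are 2-D top-of-atmosphere reflectance arrays (e.g. GOCI bands
    near 680/555/443 nm). Dark-object subtraction is a crude stand-in for the
    paper's atmospheric scattering correction.
    """
    def correct_and_stretch(band):
        band = band - np.nanpercentile(band, 1)          # remove the haze/path signal
        lo, hi = np.nanpercentile(band, percentile)      # contrast stretch for display
        return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

    return np.dstack([correct_and_stretch(b) for b in (red, green, blue)])
```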