• Title/Summary/Keyword: RGB camera


A Study on the Optimization of color in Digital Printing (디지털 인쇄에 있어서 컬러의 최적화에 관한 연구)

  • Kim, Jae-Hae;Lee, Sung-Hyung;Cho, Ga-Ram;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society / v.26 no.1 / pp.51-64 / 2008
  • In this paper, an experiment was conducted in which the input devices (scanner, digital still camera) and monitors (CRT, LCD) were characterized using linear multiple regression and the GOG (Gain-Offset-Gamma) model to perform color transformation. For the digital printer, the color conversion method used a LUT (Look-Up Table) with three-dimensional linear interpolation and a tetrahedral interpolation method. The results are as follows. For color reproduction on the monitor, the XYZ values obtained from the input device by linear multiple regression were multiplied by the inverse matrix and passed through the inverse GOG model to yield monitor RGB values; after this conversion, most patches showed a color difference below 5. For the printer, the XYZ values from the input device were converted to LAB, and when the LAB values were mapped to CMY using the LUT and tetrahedral interpolation, the color conversion that took the black quantity into account was more accurate.

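
The GOG linearization used for the monitor characterization above can be sketched in a few lines. This is a minimal illustration under assumed parameters: the gain, offset, and gamma values below are made-up placeholders, not values measured in the study.

```python
def gog_linearize(d, gain, offset, gamma):
    """Gain-Offset-Gamma (GOG) model: map a digital count (0-255)
    to a normalized linear channel value in [0, 1]."""
    x = gain * (d / 255.0) + offset
    return max(x, 0.0) ** gamma

# Hypothetical per-channel parameters; real values come from display
# characterization measurements, not from the paper's data.
PARAMS = {"R": (1.02, -0.02, 2.2),
          "G": (1.01, -0.01, 2.2),
          "B": (1.03, -0.03, 2.2)}

def rgb_to_linear(rgb):
    """Apply the GOG model channel-wise to an (R, G, B) digital triple."""
    return [gog_linearize(d, *PARAMS[c]) for c, d in zip("RGB", rgb)]

linear = rgb_to_linear((255, 128, 0))   # peak-white channel maps to 1.0
```

The linearized values would then be combined by a measured 3x3 matrix to reach XYZ; inverting that matrix and this curve gives the monitor RGB reproduction path described in the abstract.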

Development of Objective Algorithm for Cloudiness using All-Sky Digital Camera (전천 카메라 영상을 이용한 자동 운량 분석)

  • Kim, Yun Mi;Kim, Jhoon;Cho, Hi Ku
    • Atmosphere / v.18 no.1 / pp.1-14 / 2008
  • Cloud amount, one of the basic parameters in atmospheric observation, has traditionally been estimated by the naked eye and is therefore subjective. To ensure reliable and objective observation, a new algorithm was constructed to retrieve cloud amount from true-color images composed of red, green, and blue (RGB) channels. The true-color images were obtained by the Skyview, an all-sky imager, at the Science Building of Yonsei University, Seoul, over the year 2006. The principle for distinguishing clear sky from cloudy sky is that the spectral characteristics of light scattering differ between air molecules and cloud droplets. The Skyview algorithm showed about 77% agreement between the observed and calculated cloud amounts within an error range of ±2 (the difference between calculated and observed cloudiness). Seasonally, the best accuracy, about 83% within the ±2 range, was obtained in summer, when cloud amounts are higher and the signal-to-noise ratio is therefore better. Furthermore, as sky turbidity increased, the error also increased because of increased scattering, which explains the large error in spring. The algorithm still needs to be improved to classify sky conditions more systematically, using complementary instruments to discriminate thin cloud from haze and so reduce cloud-detection errors.
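
The scattering principle above is commonly operationalized as a red-to-blue ratio test per pixel; this sketch assumes that proxy, and the threshold value is illustrative, not the paper's calibrated criterion.

```python
def cloud_fraction(pixels, threshold=0.75):
    """Fraction of (R, G, B) pixels classified as cloud by the red/blue
    ratio. Clear sky scatters blue strongly (low R/B); cloud scattering
    is spectrally flat (R/B near 1). Threshold is illustrative only."""
    cloudy = sum(1 for r, g, b in pixels if b > 0 and r / b > threshold)
    return cloudy / len(pixels)

# Synthetic sample: 7 clear-sky pixels and 3 cloud pixels
sky = [(60, 90, 200)] * 7 + [(200, 200, 210)] * 3
frac = cloud_fraction(sky)   # 0.3, i.e. a cloud amount of about 3/10
```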

Automatic Extraction of Component Window for Auto-Teaching of PCB Assembly Inspection Machines (PCB 조립검사기의 자동티칭을 위한 부품윈도우 자동추출 방법)

  • Kim, Jun-Oh;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems / v.16 no.11 / pp.1089-1095 / 2010
  • We propose an image segmentation method for the auto-teaching system of PCB (Printed Circuit Board) assembly inspection machines. The inspection machine acquires images of all components on the PCB and compares each image with its standard image to find assembly errors such as misalignment, inverse polarity, and tombstoning. The component window, the area of a component to be imaged by the camera, is one of the teaching data required to operate the inspection machine. To reduce the teaching time, we develop an image processing method that extracts the component window automatically from the PCB image. The proposed method segments the component window by excluding the soldering areas as well as the board background. We binarize the input image using the HSI color model, because it is difficult to discriminate between components and backgrounds in RGB color space. A linear combination of the binarized images then enhances the component window against the background. Using the horizontal and vertical histogram projections, we finally obtain the component window. Experimental results are presented to verify the usefulness of the proposed method.
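
The final projection step can be illustrated on an already-binarized mask. This is a toy sketch: it assumes the HSI binarization and linear combination have already isolated component pixels as 1s.

```python
def projection_window(binary):
    """Bounding component window (top, bottom, left, right) of a 0/1
    component mask, found from row and column histogram projections."""
    row_proj = [sum(row) for row in binary]
    col_proj = [sum(col) for col in zip(*binary)]
    rows = [i for i, v in enumerate(row_proj) if v > 0]
    cols = [j for j, v in enumerate(col_proj) if v > 0]
    return (rows[0], rows[-1], cols[0], cols[-1])

# Toy mask: component pixels are 1; solder and background already excluded
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
window = projection_window(mask)   # (1, 2, 1, 3)
```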

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1189-1204 / 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently difficult problem that has attracted the attention of many researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based routine-logging Real-AR system to identify daily human activity routines and make the surroundings an intelligent living space. Our real-time routine-logging Real-AR system comprises two stages: data collection with a depth camera and feature extraction based on joint information, followed by training and recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and produces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrated that the proposed system achieves better recognition rates and greater robustness than state-of-the-art methods. Our Real-AR system should be readily applicable to behavior monitoring applications, humanoid-robot systems, and e-medical therapy systems.
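
The joint-based feature extraction stage can be sketched with a common depth-skeleton feature: pairwise distances between joints. This is a generic stand-in, since the abstract does not specify the exact feature set.

```python
import math

def joint_distance_features(joints):
    """Feature vector of pairwise Euclidean distances between 3D joint
    positions from a depth-camera skeleton (a common choice; the
    paper's exact features are not specified here)."""
    feats = []
    for i in range(len(joints)):
        for j in range(i + 1, len(joints)):
            feats.append(math.dist(joints[i], joints[j]))
    return feats

# Three hypothetical joints (head, hand, foot) in metres
pose = [(0.0, 1.7, 2.0), (0.4, 1.1, 2.0), (0.1, 0.0, 2.1)]
features = joint_distance_features(pose)   # 3 pairwise distances
```

Such a vector, computed per frame, would feed the training/recognition stage described above.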

Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.143-150 / 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, rather than expensive equipment, to provide motion control. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which acts essentially as a motion controller in the user's hand. A color-coded communication method using RGB color combinations is also included in the interface. Users can conveniently send or receive text data through this function, and data can be transferred continuously even while the user is performing gestures. We present results from an implementation of viable content based on the proposed motion recognition platform.
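
One plausible form of RGB color-combination coding is to quantize each channel to a few levels and pack data bits into a displayed color; this sketch assumes 2 bits per channel (64 colors), which is illustrative and not necessarily the paper's scheme.

```python
# 2 bits per channel -> 4 levels, 64 distinct colors, 6 data bits per color
LEVELS = (0, 85, 170, 255)

def encode_symbol(bits6):
    """Pack a 6-bit symbol into one RGB color (2 bits per channel)."""
    r, g, b = (bits6 >> 4) & 3, (bits6 >> 2) & 3, bits6 & 3
    return (LEVELS[r], LEVELS[g], LEVELS[b])

def decode_symbol(rgb):
    """Recover the symbol by snapping each channel to its nearest level,
    which tolerates moderate camera noise."""
    q = [min(range(4), key=lambda i: abs(LEVELS[i] - c)) for c in rgb]
    return (q[0] << 4) | (q[1] << 2) | q[2]

sym = 0b101101                       # 45
color = encode_symbol(sym)           # (170, 255, 85)
noisy = (color[0] + 9, color[1] - 7, color[2] + 4)
recovered = decode_symbol(noisy)     # 45 despite the channel noise
```

Snapping to the nearest level is what lets the display-to-camera channel survive lighting variation, which matters when data is streamed while the phone is moving.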

Position Tracking of Underwater Robot for Nuclear Reactor Inspection using Color Information (색상정보를 이용한 원자로 육안검사용 수중로봇의 위치 추적)

  • 조재완;김창회;서용칠;최영수;김승호
    • Proceedings of the IEEK Conference / 2003.07e / pp.2259-2262 / 2003
  • This paper describes the visual tracking procedure of an underwater mobile robot for nuclear reactor vessel inspection, which is required to find foreign objects such as loose parts. The yellowish body of the underwater robot presents a strong contrast to the borated cold water of the reactor vessel, which is tinged with indigo by the Cerenkov effect. We found and tracked the position of the underwater robot using these two colors, yellow and indigo. The center-coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the indigo cold-water background. From the RGB components of the monitoring image taken with a color CCD camera, we selected the red component. In the selected red image, we extracted the position of the underwater mobile robot using binarization, labelling, and centroid extraction. In an experiment carried out in the Youngkwang unit 5 reactor vessel, we tracked the center position of the underwater robot submerged to a depth of about 10 m near the cold-leg and hot-leg passages.

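
The red-channel binarization and centroid extraction above can be sketched directly; the threshold here is an assumed value, and labelling is omitted since the toy frame contains a single blob.

```python
def track_robot(image, threshold=128):
    """Binarize the red channel and return the centroid (x, y) of
    bright pixels: the yellow robot body has a strong red component,
    while the indigo water background falls below the threshold."""
    pts = [(x, y)
           for y, row in enumerate(image)
           for x, (r, g, b) in enumerate(row)
           if r > threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# 3x4 synthetic frame: indigo water with a 2-pixel yellow robot
water, robot = (40, 40, 160), (230, 210, 30)
frame = [[water, robot, robot, water],
         [water, water, water, water],
         [water, water, water, water]]
center = track_robot(frame)   # (1.5, 0.0)
```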

Image Retrieval Using Histogram Refinement Based on Local Color Difference (지역 색차 기반의 히스토그램 정교화에 의한 영상 검색)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.18 no.12 / pp.1453-1461 / 2015
  • With the spread of mobile computers and smartphones, digital images and videos on the internet are increasing rapidly, and research on image retrieval has gained tremendous momentum. Color, shape, and texture are the major features used in image retrieval. Color information in particular has been widely used because it is robust to translation, rotation, and small changes of camera view. This paper proposes a new method for histogram refinement based on local color difference. First, the proposed method converts an RGB color image into an HSV color image. Second, it reduces the size of the color space from 256³ to 32 colors. It then classifies the pixels of the 32-color image into three groups according to the color difference between a central pixel and its neighbors in a 3x3 local region. Finally, it builds a color difference vector (CDV) representing three refined color histograms, and image retrieval is performed by CDV matching. Experimental results on a public image database show that the proposed method has higher retrieval accuracy than conventional ones, and that it can be applied effectively to searching low-resolution images such as thumbnails.
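
The three-group refinement can be sketched on an already-quantized index image. The grouping rule below (all neighbors identical / partly different / all different) is a simplified stand-in for the paper's CDV criterion.

```python
def refine_histogram(img, n_colors=8):
    """Split the histogram of a quantized-index image into three
    sub-histograms by local color difference in a 3x3 window: pixels
    whose 8 neighbors are all identical (group 0), partly different
    (group 1), or all different (group 2). Simplified CDV stand-in."""
    h, w = len(img), len(img[0])
    cdv = [[0] * n_colors for _ in range(3)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            diff = sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0) and img[y + dy][x + dx] != c)
            group = 0 if diff == 0 else (2 if diff == 8 else 1)
            cdv[group][c] += 1
    return cdv

# Uniform 4x4 patch of color index 3: all interior pixels land in group 0
flat = [[3] * 4 for _ in range(4)]
cdv = refine_histogram(flat)
```

Retrieval would then compare the three sub-histograms of two images rather than one coarse histogram, which is what distinguishes textured from flat regions of the same color.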

Automatic Extraction and Measurement of Visual Features of Mushroom (Lentinus edodes L.) (표고 외관 특징점의 자동 추출 및 측정)

  • Hwang, Heon;Lee, Yong-Guk
    • Journal of Bio-Environment Control / v.1 no.1 / pp.37-51 / 1992
  • Quantifying and extracting visual features of the mushroom (Lentinus edodes L.) are crucial to sorting and grading automation, growth-state measurement, and dried-performance indexing. A computer image processing system was utilized for the extraction and measurement of visual features of the front and back sides of the mushroom. The system is composed of an IBM PC-compatible 386DX, an ITEX PCVISION Plus frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an RGB image output monitor. In this paper, an automatic thresholding algorithm was developed to yield segmented binary images representing the skin states of the front and back sides. Eight-directional Freeman chain coding was modified to solve edge disconnectivity by gradually expanding the mask size from 3x3 to 9x9. Real-scaled geometric quantities of the object were extracted directly from the 8-directional chain elements. The external shape of the mushroom was analyzed and converted to quantitative feature patterns. Efficient algorithms for extracting the selected feature patterns and recognizing the front and back sides were developed, and were coded in a menu-driven way using MS C Ver. 6.0, PCVISION PLUS library functions, and VGA graphics functions.

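
The abstract does not spell out its automatic thresholding algorithm, so the sketch below uses Otsu's classic between-class-variance criterion as a generic stand-in for segmenting a bimodal mushroom/background image.

```python
def otsu_threshold(gray):
    """Automatic threshold selection by maximizing between-class
    variance (Otsu's method; a generic stand-in for the paper's
    mushroom-specific algorithm). `gray` is a 2D list of 0-255 ints."""
    pixels = [p for row in gray for p in row]
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal image: dark background (20) and bright cap (220)
bimodal = [[20] * 8 + [220] * 8 for _ in range(4)]
t = otsu_threshold(bimodal)   # lands between the two modes
```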

Fall Detection based on Fish-eye Lens Camera Image and Perspective Image (어안렌즈 카메라 영상과 투시영상을 이용한 기절동작 인식)

  • So, In-Mi;Kim, Young-Un;Kang, Sun-Kyung;Han, Dae-Gyeong;Jung, Sung-Tae
    • Proceedings of the Korean Information Science Society Conference / 2008.06c / pp.468-471 / 2008
  • This paper proposes a method for recognizing fainting (fall) motions from images acquired through a fish-eye lens, in order to detect emergency situations. An RGB fish-eye image with a 170° field of view is captured by a camera with a fish-eye lens mounted at the center of the living-room ceiling, and the background image is updated dynamically using an adaptive background modeling method based on a Gaussian mixture model. After computing the average brightness of the input image and correcting the pixels so that the average brightness does not change abruptly, moving objects are extracted by finding pixels that differ greatly between the input and background images. The boundary points of connected foreground regions are then traced and mapped to an ellipse, simplifying the shape of the moving-object region. While tracking this ellipse, the fish-eye image is transformed into a perspective image, and the changes in the ellipse's size, position, and moving speed are extracted to decide whether the observed movement and subsequent stillness resemble a fainting motion. In the experiments, subjects performed various motions such as fainting, walking, and sitting, and fall recognition was tested. The results showed that transforming to a perspective image and using the ellipse's size change, position change, and speed information gave a higher recognition rate than using the fish-eye image directly.

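
The ellipse-mapping step above is typically done with image moments; this sketch (an assumption about the implementation, not the paper's code) fits an ellipse to a foreground mask, so a sudden swing of the orientation from near-vertical to near-horizontal can flag a fall.

```python
import math

def ellipse_params(mask):
    """Approximate a 0/1 foreground blob by an ellipse via image
    moments: returns (cx, cy, orientation_rad, elongation)."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    # Eigenvalues of the covariance matrix give squared axis lengths
    common = math.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    l1 = (mu20 + mu02 + common) / 2
    l2 = (mu20 + mu02 - common) / 2
    return cx, cy, angle, math.sqrt(l1 / l2) if l2 > 0 else float("inf")

upright = [[0, 1, 1, 0]] * 4          # standing person: tall blob
lying = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]                # fallen person: wide blob
_, _, a_up, e_up = ellipse_params(upright)
_, _, a_ly, e_ly = ellipse_params(lying)
```

Here `a_up` is pi/2 (vertical major axis) and `a_ly` is 0 (horizontal), while the elongation is the same, so it is the rapid orientation change, together with the speed and position cues from the abstract, that would signal the fall.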

Hybrid Silhouette Extraction Using Color and Gradient Informations (색상 및 기울기 정보를 이용한 인간 실루엣 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.913-918 / 2007
  • Human motion analysis is an important research subject in human-robot interaction (HRI). Before the human motion can be analyzed, however, the silhouette of the human body must be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a more robust silhouette extraction method because it suffers from internal vibration and low resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose compensation methods that avoid losing silhouette information in poor-quality images. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
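
A minimal sketch of combining temporal and spatial gradients: a pixel is kept only if it both changed between frames and lies on an intensity edge. The thresholds and the simple AND combination are assumptions; the paper optimizes the combination rather than fixing it.

```python
def silhouette_mask(prev, curr, t_thresh=25, s_thresh=25):
    """Hybrid silhouette mask from two grayscale frames (2D int lists):
    a pixel is silhouette if its temporal gradient (frame difference)
    and spatial gradient (central differences) both exceed thresholds.
    Thresholds are illustrative only."""
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            temporal = abs(curr[y][x] - prev[y][x])
            spatial = (abs(curr[y][x + 1] - curr[y][x - 1]) +
                       abs(curr[y + 1][x] - curr[y - 1][x]))
            if temporal > t_thresh and spatial > s_thresh:
                mask[y][x] = 1
    return mask

# A bright 3x3 object appears on a dark background between frames
prev = [[0] * 5 for _ in range(5)]
curr = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        curr[y][x] = 200
edge_mask = silhouette_mask(prev, curr)   # marks the moving edge only
```

Note that the flat interior of the moving object is rejected by the spatial term, which is exactly why such hybrid schemes trace the silhouette outline rather than the whole moving region.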