• Title/Summary/Keyword: RGB camera

Real-time Multiple Stereo Image Synthesis using Depth Information (깊이 정보를 이용한 실시간 다시점 스테레오 영상 합성)

  • Jang Se hoon;Han Chung shin;Bae Jin woo;Yoo Ji sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.4C / pp.239-246 / 2005
  • In this paper, we generate a virtual right image corresponding to the input left image by using given RGB texture data and 8-bit grayscale depth data. We first transform the depth data into disparity data and then produce the virtual right image with this disparity. We also propose a stereo image synthesis algorithm that adapts to the viewer's position, together with a real-time processing algorithm based on a fast LUT (look-up table) method. Finally, we could synthesize a total of eleven stereo images with different viewpoints in real time for an SD-quality texture image with 8-bit depth information.
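
As a rough illustration of the abstract's depth-to-disparity conversion and LUT-based synthesis, the sketch below precomputes a 256-entry disparity table and forward-warps the left image; the baseline and depth-range values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def build_disparity_lut(baseline_px=40.0, z_near=1.0, z_far=10.0):
    """Precompute a 256-entry LUT mapping 8-bit depth values to pixel disparities.
    The baseline and depth range here are illustrative assumptions."""
    levels = np.arange(256, dtype=float)
    # Treat larger 8-bit values as nearer surfaces and map them to metric depth.
    z = z_far - (z_far - z_near) * (levels / 255.0)
    return baseline_px / z   # disparity (in pixels) for each of the 256 depth levels

def synthesize_right_view(left_rgb, depth8, lut):
    """Warp the left image horizontally by the per-pixel LUT disparity to form a
    virtual right view (holes from disocclusions are left unfilled)."""
    h, w, _ = left_rgb.shape
    right = np.zeros_like(left_rgb)
    disparity = lut[depth8]                               # table lookup, no per-pixel math
    for y in range(h):
        x_src = np.arange(w)
        x_dst = np.clip(np.round(x_src - disparity[y]).astype(int), 0, w - 1)
        right[y, x_dst] = left_rgb[y, x_src]              # simple forward warp
    return right
```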

Height Estimation using Kinect in the Indoor (키넥트를 이용한 실내에서의 키 추정 방법)

  • Kim, Sung-Min;Song, Jong-Kwan;Yoon, Byung-Woo;Park, Jang-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.9 no.3 / pp.343-350 / 2014
  • Object recognition is one of the key technologies of intelligent monitoring systems for crime prevention. Height is a basic piece of physical information about a person and can be important for confirming identity from the subject's physical characteristics. In this paper, we propose a method for measuring height using an RGB-Depth camera, the Kinect. Given the known height of the Kinect, the person's height is estimated from the Kinect's depth information, which gives the distances from the sensor to the person's head and feet. Experiments confirm that the proposed method is effective for estimating a person's height indoors.
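
The following sketch illustrates one plausible geometric reading of the abstract: the person's height is recovered from the sensor's mounting height plus the vertical offsets of the head and foot points computed from their depths and viewing angles. The function name, parameters, and trigonometric form are assumptions for illustration only.

```python
import math

def estimate_height(sensor_height_m, depth_to_head_m, head_elevation_deg,
                    depth_to_foot_m=None, foot_depression_deg=None):
    """Geometric sketch: the head's height above the floor equals the sensor height
    plus the vertical offset of the head point, recovered from its depth and its
    elevation angle in the depth image. Parameter names and the trigonometric form
    are illustrative assumptions, not the paper's exact formulation."""
    head_y = sensor_height_m + depth_to_head_m * math.sin(math.radians(head_elevation_deg))
    if depth_to_foot_m is None:
        return head_y                         # assume the feet rest on the floor plane
    foot_y = sensor_height_m - depth_to_foot_m * math.sin(math.radians(foot_depression_deg))
    return head_y - foot_y                    # head-to-foot vertical distance

# Example: a sensor mounted at 1.0 m, head point 3.0 m away, 12 degrees above horizontal
print(round(estimate_height(1.0, 3.0, 12.0), 2))   # ~1.62 m
```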

A Study on Color Management of Input and Output Device in Electronic Publishing (II) (전자출판에서 입.출력 장치의 컬러 관리에 관한 연구 (II))

  • Cho, Ga-Ram;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society / v.25 no.1 / pp.65-80 / 2007
  • Input and output devices require precise color representation and CMS (Color Management System) because of the increasing number of ways to bring digital images into electronic publishing. However, there are slight differences in the device-dependent color signals among input and output devices, and because of the non-linear conversion of input signal values to output signal values, there are color differences between the original copy and the output copy. Device-dependent color information therefore needs to be converted into device-independent color information. When creating an original copy through electronic publishing, the input and output devices should be under color management: through the three phases of calibration, characterization, and color conversion, the device-dependent color is transformed into device-independent color. In this paper, an experiment was done in which the input devices used linear multiple regression and the sRGB color space to perform the color transformation, while the output device used the GOG, GOGO, and sRGB models. After the color transformations in the input and output devices, the best results were obtained when the original target was transformed by the scanner and digital camera input devices using linear multiple regression, and by the LCD output device using the GOG model.
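
A minimal sketch of the linear multiple regression characterization mentioned above, mapping device RGB chart values to measured CIE XYZ by least squares; the term set (a simple affine model) is an assumption, and the study may use a richer polynomial basis.

```python
import numpy as np

def fit_linear_regression(rgb_patches, xyz_patches):
    """Characterize an input device by a linear multiple regression from device RGB
    to device-independent CIE XYZ, fitted on measured chart patches."""
    rgb = np.asarray(rgb_patches, dtype=float)               # N x 3 scanner/camera values
    xyz = np.asarray(xyz_patches, dtype=float)               # N x 3 measured XYZ values
    design = np.hstack([rgb, np.ones((len(rgb), 1))])        # add an affine offset term
    coeffs, *_ = np.linalg.lstsq(design, xyz, rcond=None)    # 4 x 3 regression matrix
    return coeffs

def apply_characterization(rgb, coeffs):
    """Map device RGB values into XYZ using the fitted regression matrix."""
    rgb = np.asarray(rgb, dtype=float)
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ coeffs
```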

Distance Measuring Method for Motion Capture Animation (모션캡쳐 애니메이션을 위한 거리 측정방법)

  • Lee, Heei-Man;Seo, Jeong-Man;Jung, Suun-Key
    • The KIPS Transactions: Part B / v.9B no.1 / pp.129-138 / 2002
  • In this paper, a distance measuring algorithm for motion capture using color stereo cameras is proposed. The color markers attached to the articulations of an actor are captured by stereo color video cameras, and the color region matching each marker's color in the captured images is separated from the other colors by finding the dominant wavelength. Color data in the RGB (red, green, blue) color space are converted into the CIE (Commission Internationale de l'Eclairage) color space in order to calculate wavelengths. The dominant wavelength is selected from a histogram of neighboring wavelengths. The motion of the character in cyberspace is then controlled by a program using the distance information of the moving markers.
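
The RGB-to-CIE conversion step could look roughly like the sketch below, which maps pixels to xy chromaticity (the quantity from which a dominant wavelength is read off the spectral locus) and segments pixels near a marker's reference chromaticity; the sRGB matrix and the thresholding shortcut are assumptions standing in for the paper's histogram-of-wavelengths selection.

```python
import numpy as np

# Linear sRGB-to-XYZ matrix; which RGB primaries the paper assumed is not stated,
# so this choice is an assumption for the sketch.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def chromaticity(image_rgb):
    """Convert RGB pixels (0..1) to CIE xy chromaticity coordinates."""
    flat = image_rgb.reshape(-1, 3).astype(float) @ RGB_TO_XYZ.T
    s = flat.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                       # avoid division by zero for black pixels
    return (flat[:, :2] / s).reshape(image_rgb.shape[0], image_rgb.shape[1], 2)

def marker_mask(image_rgb, marker_xy, tol=0.02):
    """Label pixels whose chromaticity lies close to a marker's reference chromaticity;
    a crude stand-in for the paper's dominant-wavelength selection step."""
    xy = chromaticity(image_rgb)
    dist = np.linalg.norm(xy - np.asarray(marker_xy, dtype=float), axis=2)
    return dist < tol
```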

Design and Implementation of Multimedia Functional Module for Digital TV (디지털 TV용 멀티미디어 부가기능 모듈의 설계 및 구현)

  • 김익환;최재승;임영철;남재열;하영호
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.231-237 / 2004
  • This paper introduces a multimedia functional module and the related interface developed for digital TV. The module is designed to display images captured by a digital still camera, camcorder, or PC on a digital TV. For this purpose, the module has an interface circuit for accessing five types of memory cards. It decodes JPEG, BMP, or TIFF image data saved on a memory card and converts the image data to an analog RGB signal. It also supports three output image sizes, from HD (High Definition) to WXGA (Wide Extended Graphics Array) resolution, so the module can be adopted in all kinds of digital TV sets.

Parameter of Intensity DN Transformation between Aerial Image and Terrestrial Image (항공영상과 지상영상간 밴드별 변환 파라미터 산정)

  • Heo, Kyung-Jin;Seo, Su-Young
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2010.06a / pp.130-136 / 2010
  • This study estimates and evaluates parameters relating the spectral intensities of aerial and terrestrial images through a band-by-band spectral analysis. For the experiment, an aerial image covering the headquarters of Kyungpook National University was used, and terrestrial images were taken with a Sony DSC-F828 DSLR camera. To find the spectral correspondence, the gray intensity and the per-band RGB variance, mean, and standard deviation were computed, from which the parameters of a linear model between patches of both images were estimated and evaluated using check patches.
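
A small sketch of the kind of band-wise linear parameter estimation the study describes: fit a gain and offset per RGB band between patch statistics of the two image types, then report errors on check patches. Variable names and the use of patch means are illustrative assumptions.

```python
import numpy as np

def fit_band_gain_offset(aerial_patch_means, terrestrial_patch_means):
    """Fit a per-band gain/offset (linear model) relating terrestrial patch
    intensities to aerial patch intensities."""
    params = {}
    for band in ("R", "G", "B"):
        x = np.asarray(terrestrial_patch_means[band], dtype=float)
        y = np.asarray(aerial_patch_means[band], dtype=float)
        gain, offset = np.polyfit(x, y, 1)    # least-squares line: y = gain * x + offset
        params[band] = (gain, offset)
    return params

def evaluate_on_check_patches(params, terrestrial_check, aerial_check):
    """Report RMS error of the fitted model on held-out check patches."""
    errors = {}
    for band, (gain, offset) in params.items():
        pred = gain * np.asarray(terrestrial_check[band], dtype=float) + offset
        truth = np.asarray(aerial_check[band], dtype=float)
        errors[band] = float(np.sqrt(np.mean((pred - truth) ** 2)))
    return errors
```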

Intrusion Detection Algorithm based on Motion Information in Video Sequence (비디오 시퀀스에서 움직임 정보를 이용한 침입탐지 알고리즘)

  • Kim, Alla;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology / v.14 no.2 / pp.284-288 / 2010
  • Video surveillance is widely used in establishing societal security networks. In this paper, intrusion detection based on visual information acquired by a static camera is proposed. The proposed approach uses a background model constructed by an approximated median filter (AMF) to find foreground candidates, and the detected object is obtained by analyzing motion information. Motion detection is determined by the relative size of the 2D object in RGB space, and finally the threshold value for detecting an object is set heuristically. Experimental results showed that intrusion detection performs better when the spatio-temporal candidate information changes abruptly.
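
The approximated median filter background model named in the abstract can be sketched as follows: each background pixel is nudged one gray level toward the current frame, and pixels far from the background become foreground candidates; the threshold value shown is illustrative, since the paper sets it heuristically.

```python
import numpy as np

def amf_update(background, frame, step=1):
    """One approximated-median-filter (AMF) step: nudge each background pixel one
    gray level toward the current frame, so the background converges to the
    temporal median of the scene."""
    bg = background.astype(np.int16)
    fr = frame.astype(np.int16)
    bg += step * (fr > bg)
    bg -= step * (fr < bg)
    return np.clip(bg, 0, 255).astype(np.uint8)

def foreground_mask(background, frame, threshold=30):
    """Pixels far from the background model are foreground (intrusion) candidates;
    the threshold value here is illustrative only."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:                    # collapse the RGB difference to one channel
        diff = diff.max(axis=2)
    return diff > threshold
```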

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1856-1869 / 2015
  • This paper addresses the issues of 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured feature framework. During human tracking and activity recognition, dense depth images are first captured using a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion information, and compute the centroid of each activity based on a chain-coding mechanism and centroid point extraction. For body skin joint features, we estimate human skin color to identify body parts (i.e., head, hands, and feet) and extract joint point information. These joint points are further processed in a feature extraction step that includes distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
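
One plausible reading of the "centroid distance features" is sketched below: distances from the silhouette centroid to each extracted joint point form a normalized feature vector that could feed the self-organizing map. The exact feature definition is an assumption.

```python
import numpy as np

def centroid_distance_features(joint_points):
    """Form a normalized vector of distances from the silhouette centroid to each
    extracted joint point (head, hands, feet); a hypothetical feature definition."""
    pts = np.asarray(joint_points, dtype=float)       # N x 3 joint coordinates
    centroid = pts.mean(axis=0)
    dists = np.linalg.norm(pts - centroid, axis=1)
    return dists / (dists.max() + 1e-8)               # scale-normalized feature vector

# Example: five joints (head, two hands, two feet) in metres
joints = [[0.0, 1.7, 2.5], [-0.4, 1.1, 2.5], [0.4, 1.1, 2.5],
          [-0.15, 0.0, 2.5], [0.15, 0.0, 2.5]]
print(centroid_distance_features(joints))
```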

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as a measurement for a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
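
A minimal sketch of using the CNN's pose output as a measurement in a particle filter, as the abstract describes for trajectory smoothing; the Gaussian, translation-only measurement model and its standard deviation are assumptions.

```python
import numpy as np

def particle_filter_update(particles, weights, cnn_pose, meas_std=0.5):
    """Reweight 6-DOF pose particles using the CNN's regressed pose as a measurement,
    then return the weighted mean as the smoothed estimate."""
    particles = np.asarray(particles, dtype=float)      # N x 6 (x, y, z, roll, pitch, yaw)
    cnn_pose = np.asarray(cnn_pose, dtype=float)        # 6-vector regressed by the CNN
    diff = particles[:, :3] - cnn_pose[:3]              # translation error per particle
    likelihood = np.exp(-0.5 * np.sum(diff ** 2, axis=1) / meas_std ** 2)
    weights = weights * likelihood
    weights /= weights.sum() + 1e-12                    # renormalize particle weights
    estimate = np.average(particles, axis=0, weights=weights)
    return weights, estimate
```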

All-In-One Observing Software for Small Telescope

  • Han, Jimin;Pak, Soojong;Ji, Tae-Geun;Lee, Hye-In;Byeon, Seoyeon;Ahn, Hojae;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society / v.43 no.2 / pp.57.2-57.2 / 2018
  • In astronomical observation, sequential device control and real-time data processing are important for maximizing observing efficiency. We have developed a series of automatic observing software packages (KAOS, KHU Automatic Observing Software), e.g. KAOS30 for the 30 inch telescope at the McDonald Observatory and KAOS76 for the 76 cm telescope at the KHAO. The series consists of four packages: the DAP (Data Acquisition Package) for CCD camera control, the TCP (Telescope Control Package) for telescope control, the AFP (Auto Focus Package) for focusing, and the SMP (Script Mode Package) for automation of sequences. In this poster, we introduce KAOS10, which is being developed for controlling a small telescope with an aperture of 10 cm. The hardware components are the QHY8pro CCD, the QHY5-II CMOS, the iOptron CEM 25 mount, and the Stellarvue SV102ED telescope. The devices are controlled on the ASCOM Platform. In addition to the previous packages (DAP, SMP, TCP), KAOS10 has a QLP (Quick Look Package) and an astrometry function in the TCP. The QHY8pro CCD has an RGB Bayer matrix, and the QLP transforms RGB images into BVR images in real time. The astrometry function in the TCP adjusts the telescope position by comparing the image with a star catalog. In the future, we expect KAOS10 to be used in research on transient objects such as variable stars.
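
A rough sketch of what a real-time RGB-to-BVR conversion in the QLP could look like: a per-pixel linear combination of the debayered R, G, B channels with calibrated coefficients. The coefficient values below are illustrative placeholders, not the calibration actually used by KAOS10.

```python
import numpy as np

# Illustrative color-transformation coefficients; real values must be calibrated
# against standard stars and are not given in the abstract.
RGB_TO_BVR = np.array([[0.00, -0.20, 1.20],   # B: mostly the blue channel
                       [0.00,  1.00, 0.00],   # V: the green channel
                       [1.10, -0.10, 0.00]])  # R: mostly the red channel

def debayered_rgb_to_bvr(rgb_frame):
    """Transform a debayered RGB frame into approximate B, V, R frames by a
    per-pixel linear combination of the instrumental channels."""
    h, w, _ = rgb_frame.shape
    flat = rgb_frame.reshape(-1, 3).astype(float)   # columns ordered (R, G, B)
    bvr = flat @ RGB_TO_BVR.T                       # rows of RGB_TO_BVR give B, V, R
    return bvr.reshape(h, w, 3)
```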
