• Title/Summary/Keyword: RGB camera

Search Results: 316

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and computes depth for omni-directional images slowly. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which captures the surrounding view in a single shot. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
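The fusion-point computation described above can be sketched in a few lines; the flat-floor geometry, the parameter names, and the scanner height below are illustrative assumptions, not details from the paper:

```python
import math

def fusion_point(r, theta, elev_top, scanner_height=0.3):
    """Hypothetical fusion of one 2-D laser return with the fisheye outline:
    the laser fixes the obstacle's position on the floor plane (range r at
    bearing theta), while the elevation angle of the obstacle's top edge in
    the omni-directional image fixes its height."""
    x = r * math.cos(theta)                      # obstacle plane coordinates
    y = r * math.sin(theta)
    z = scanner_height + r * math.tan(elev_top)  # estimated obstacle top height
    return x, y, z
```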

A Study on Color Management of Input and Output Device in Electronic Publishing (I) (전자출판에서 입.출력 장치의 컬러 관리에 관한 연구 (I))

  • Cho, Ga-Ram;Kim, Jae-Hae;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society / v.25 no.1 / pp.11-26 / 2007
  • In this paper, an experiment was performed in which the input device used linear multiple regression and the sRGB color space for color transformation, while the output device used the GOG, GOGO, and sRGB models. After the input-device color transformation, a 3 × 20 matrix was used in the linear multiple regression, and the scanner's color representation was better than that of a digital still camera. When the sRGB color space was used, the original copy and the output copy had a color difference of 11; it was therefore more efficient to use the linear multiple regression method than the sRGB color space. After the input-device color transformation, the additivity of the LCD monitor's R, G, and B signal values improved, so the error of the linear transformation decreased. As a result, the LCD monitor with the GOG model applied to the color transformation performed better than LCD monitors with the other models. Also, the color difference from the original target varied by more than 11 on CRT and LCD monitors when an sRGB color transformation was done under restricted conditions.
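The linear multiple regression step can be sketched as a least-squares fit of a 3 × 20 matrix; the particular set of 20 polynomial RGB terms below is an assumption chosen to match the matrix size, not the paper's published term list:

```python
import numpy as np

def poly20(rgb):
    """Expand an (N, 3) array of RGB values into 20 polynomial terms
    (hypothetical set: constant, linear, quadratic, and cubic products)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([
        np.ones_like(r), r, g, b,
        r * g, r * b, g * b, r * r, g * g, b * b,
        r ** 3, g ** 3, b ** 3,
        r * r * g, r * r * b, g * g * r, g * g * b, b * b * r, b * b * g,
        r * g * b,
    ], axis=1)                                    # shape (N, 20)

def fit_color_matrix(device_rgb, target_xyz):
    """Least-squares fit of the 3 x 20 transform M so that
    poly20(device_rgb) @ M.T approximates target_xyz."""
    X = poly20(device_rgb)                        # (N, 20) design matrix
    C, *_ = np.linalg.lstsq(X, target_xyz, rcond=None)
    return C.T                                    # (3, 20)
```

With a target that is exactly linear in RGB, the fit reproduces it to numerical precision, since the linear terms are among the 20.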

Alignment of Convergent Multi-view Depth Maps Based on the Camera Intrinsic Parameters (카메라의 내부 파라미터를 고려한 수렴형 다중 깊이 지도의 정렬)

  • Lee, Kanghoon;Park, Jong-Il;Shin, Hong-Chang;Bang, Gun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.457-459 / 2015
  • This paper proposes a method for aligning depth maps generated from the images of multiple RGB cameras arranged along a circular arc. Ideally, cameras arranged on an arc converge so that their optical axes meet at a single point, but in practice the camera parameters show that the optical axes do not converge. Moreover, because the camera parameters contain errors and the intrinsic parameters differ between cameras, horizontal and vertical misalignments arise between the camera images. To solve these problems, we first corrected the extrinsic camera parameters so that the optical axes converge at one point, and aligned the depth maps accordingly. Second, we modified the intrinsic parameters to reduce the horizontal and vertical errors between the depth maps. In general, aligned depth maps are obtained by aligning the initial RGB camera images and generating depth maps from the resulting images. However, if alignment is performed by correcting the rotation and position of the cameras on the RGB images, applying the resulting changes to the depth map values becomes complicated: fractional values are lost during the alignment computation, which affects the final depth values. Therefore, we generated the depth maps from the RGB images, warped them using the original RGB camera parameters, and then performed alignment on the warped depth map values.

Online Monitoring System based notifications on Mobile devices with Kinect V2 (키넥트와 모바일 장치 알림 기반 온라인 모니터링 시스템)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1183-1188 / 2016
  • The Kinect sensor version 2 is a camera released by Microsoft as a computer vision and natural user interface device for game consoles such as the Xbox One. It acquires color images, depth images, audio input, and skeletal data at a high frame rate. In this paper, using the depth image, we present a surveillance system for a target area within the Kinect's field of view. Using a computer vision library (Emgu CV), when an object is detected in the target area it is tracked, and the Kinect camera captures an RGB image and sends it to a database server. A mobile application on the Android platform was developed to notify the user that the Kinect has sensed unusual motion in the target region and to display the RGB image of the scene. The user receives the notification in real time and can react appropriately when valuables are in the monitored area or a restricted zone is involved.
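The depth-based detection in the target area can be sketched as simple frame differencing; the function name, ROI convention, and thresholds below are illustrative assumptions, not the paper's Emgu CV implementation:

```python
import numpy as np

def detect_motion(depth_prev, depth_cur, roi, thresh=50, min_pixels=20):
    """Flag motion inside a rectangular target area of a depth frame:
    count pixels whose depth changed by more than `thresh` (in mm).
    roi = (y0, y1, x0, x1); both thresholds are illustrative defaults."""
    y0, y1, x0, x1 = roi
    # Cast to signed ints so the subtraction of uint16 frames cannot wrap.
    diff = np.abs(depth_cur[y0:y1, x0:x1].astype(np.int32)
                  - depth_prev[y0:y1, x0:x1].astype(np.int32))
    return int((diff > thresh).sum()) >= min_pixels
```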

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong;Park, Jong-Il
    • Journal of Broadcast Engineering / v.19 no.6 / pp.942-956 / 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking is highly ambiguous because a hand has many degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, performed in consecutive order. Assuming the hand to be a plane, the proposed method uses a planar hand model, which facilitates hand model regeneration. The regeneration step modifies the hand model to fit the current user's hand and improves the robustness and accuracy of the tracking results. The proposed method runs in real time without GPU-based processing, so it can be applied to various platforms, including mobile devices such as Google Glass. The effectiveness and performance of the proposed method are verified through various experiments.

Fish Injured Rate Measurement Using Color Image Segmentation Method Based on K-Means Clustering Algorithm and Otsu's Threshold Algorithm

  • Sheng, Dong-Bo;Kim, Sang-Bong;Nguyen, Trong-Hai;Kim, Dae-Hwan;Gao, Tian-Shui;Kim, Hak-Kyeong
    • Journal of Power System Engineering / v.20 no.4 / pp.32-37 / 2016
  • This paper proposes two methods for measuring the injured rate of a fish surface using color image segmentation based on the K-means clustering algorithm and Otsu's threshold algorithm. The procedure is as follows. First, an RGB color image of the fish is obtained by a CCD color camera and converted from RGB to HSI. Second, the S channel is extracted from the HSI color space. Third, binary images are obtained by applying the K-means clustering algorithm to the HSI color space and Otsu's threshold algorithm to the S channel. Fourth, morphological operations such as dilation and erosion are applied to the binary image. Fifth, connected-component labeling is used to count pixels, and the defined injured rate is obtained from the pixel counts of the labeled images. Finally, to compare the two methods, the edges of the final binary images after morphological processing are detected and matched against the gray image of the original RGB image captured by the CCD camera. The results show that the injured-part edge detected by the K-means clustering algorithm is closer to the real injured edge than that detected by Otsu's threshold algorithm.
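The Otsu step on the S channel can be sketched directly from its definition, choosing the threshold that maximizes the between-class variance of the histogram; normalizing the channel to [0, 1] is an assumption made for the sketch:

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the intensity histogram.
    `channel` is assumed normalized to [0, 1]."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(bins))     # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)               # best separating bin
    return edges[k + 1]                     # threshold in [0, 1]
```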

Multiple Pedestrians Detection and Tracking using Color Information from a Moving Camera (이동 카메라 영상에서 컬러 정보를 이용한 다수 보행자 검출 및 추적)

  • Lim, Jong-Seok;Kim, Wook-Hyun
    • The KIPS Transactions: Part B / v.11B no.3 / pp.317-326 / 2004
  • This paper presents a new method for detecting multiple pedestrians and tracking a specific pedestrian using color information from a moving camera. We first extract motion vectors from the input image using block matching (BMA). Next, a difference image is calculated on the basis of the motion vectors and converted to a binary image. The binary image contains unnecessary noise, which is removed by the proposed noise deletion method. Pedestrians are then detected using a projection algorithm; if pedestrians are very close to each other, they are separated using RGB color information. A specific pedestrian is tracked using the RGB color information of its center region. Experimental results on our test sequences demonstrated the high efficiency of the approach, with a detection success rate of 97%, a failure rate of 3%, and excellent tracking.
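The projection step used to detect pedestrians can be sketched as a column-wise sum of the binary motion mask; the run-length grouping and the thresholds below are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def projection_segments(binary, min_count=3, min_width=2):
    """Column-wise projection of a binary motion mask: runs of columns
    whose foreground count reaches min_count are returned as candidate
    pedestrian regions [(x_start, x_end), ...)."""
    proj = binary.sum(axis=0)          # vertical projection profile
    active = proj >= min_count
    segs, start = [], None
    for x, a in enumerate(active):
        if a and start is None:
            start = x                  # run begins
        elif not a and start is not None:
            if x - start >= min_width:
                segs.append((start, x))
            start = None               # run ends
    if start is not None and len(active) - start >= min_width:
        segs.append((start, len(active)))
    return segs
```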

Efficient Mobile Writing System with Korean Input Interface Based on Face Recognition

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.25 no.6 / pp.49-56 / 2020
  • A virtual Korean keyboard inputs characters by touching fixed positions, which is very inconvenient for people who have difficulty moving their fingers. To alleviate this problem, this paper proposes an efficient framework that enables keyboard input and handwriting through video and user motion captured by the RGB camera of a mobile device. The system uses face recognition to calculate control coordinates from the input video, and provides an interface that inputs and combines Hangul using these coordinate values. The control position calculated from face recognition acts as a pointer that selects and transfers letters on the keyboard; the transferred letters are then combined to perform the Hangul keyboard function. The result is an efficient writing system based on face recognition technology that is expected to improve communication and special education environments for people with physical disabilities as well as for the general public.
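Mapping the face-derived control coordinate to a key on the virtual keyboard can be sketched as a simple grid lookup; the normalized-coordinate convention and the grid size are assumptions for illustration:

```python
def pointer_to_key(nx, ny, rows, cols):
    """Map a normalized face-pointer coordinate (nx, ny in [0, 1]) to a
    cell on a rows x cols virtual keyboard grid; values at the far edge
    clamp to the last row/column."""
    col = min(int(nx * cols), cols - 1)
    row = min(int(ny * rows), rows - 1)
    return row, col
```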

Spectral Reflectance Estimation based on Similar Training Set using Correlation Coefficient (상관 계수를 이용한 유사 모집단 기반의 분광 반사율 추정)

  • Yo, Ji-Hoon;Ha, Ho-Gun;Kim, Dae-Chul;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.142-149 / 2013
  • In general, the color of an image is represented using red, green, and blue channels in an RGB camera system. However, the information in three channels is too limited to estimate the spectral reflectance of a real scene, so the RGB camera system cannot represent color accurately. To overcome this limitation and represent color accurately, research on estimating spectral reflectance with multi-channel camera systems is actively under way. Recently, a reflectance estimation method was introduced that adaptively constructs a similar training set from the traditional training set according to the camera response, using a spectral similarity measure. In that method, however, the accuracy of the similar training set is reduced because the spectral similarity is based on average and maximum distances. In this paper, a reflectance estimation method using a spectral similarity based on the correlation coefficient is proposed to improve the accuracy of the similar training set. First, the correlation coefficient between the training set and the spectral reflectance obtained by Wiener estimation is calculated. Second, the similar training set is constructed from the traditional training set according to the correlation coefficient. Finally, Wiener estimation with the similar training set is performed to estimate the spectral reflectance. Experimental comparisons with previous methods show that the proposed method performs best.
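The two ingredients described above, classical Wiener estimation and correlation-based selection of a similar training set, can be sketched as follows; the array shapes, noise term, and selection size are illustrative assumptions:

```python
import numpy as np

def similar_training_set(train_refl, initial_est, top_k):
    """Rank training reflectances by Pearson correlation with an initial
    Wiener estimate and keep the top_k most similar spectra."""
    x = initial_est - initial_est.mean()
    T = train_refl - train_refl.mean(axis=1, keepdims=True)
    corr = (T @ x) / (np.linalg.norm(T, axis=1) * np.linalg.norm(x) + 1e-12)
    return train_refl[np.argsort(corr)[::-1][:top_k]]

def wiener_estimate(response, S, train_refl, noise_var=1e-6):
    """Classical Wiener estimation: r_hat = R S^T (S R S^T + n I)^-1 c,
    where R is the autocorrelation of the training reflectances and S is
    the camera sensitivity matrix (channels x wavelengths)."""
    R = train_refl.T @ train_refl / len(train_refl)
    W = R @ S.T @ np.linalg.inv(S @ R @ S.T + noise_var * np.eye(len(S)))
    return W @ response
```

In the paper's scheme, a first Wiener estimate over the whole training set seeds the correlation ranking, and a second Wiener estimate over the selected subset produces the final reflectance.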

Test of Fault Detection to Solar-Light Module Using UAV Based Thermal Infrared Camera (UAV 기반 열적외선 카메라를 이용한 태양광 모듈 고장진단 실험)

  • LEE, Geun-Sang;LEE, Jong-Jo
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.4 / pp.106-117 / 2016
  • Recently, solar power plants have spread widely as part of the transition to greater environmental protection and renewable energy, so regular inspection is necessary to manage solar-light modules efficiently. This study implemented a test that detects solar-light module faults using a UAV-based thermal infrared camera and GIS spatial analysis. First, images were taken with a fixed UAV and an RGB camera, and orthomosaic images were created using the Pix4D software. Solar-light module layers were constructed from the orthomosaic images, and module layer codes were assigned. Rubber covers were installed on the solar-light modules to simulate faults. The mean temperature of each module was calculated with the Zonalmean function from the temperature information of the UAV thermal camera and the module layer. Finally, modules above 37°C and those with rubber covers were extracted automatically using GIS spatial analysis and analyzed individually via each module's identifying code.
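The Zonalmean step, a per-module mean over the thermal raster, can be sketched as a labeled-array reduction; the function names, the 0-as-background convention, and the toy raster are assumptions for the sketch:

```python
import numpy as np

def zonal_mean(temp, zones):
    """Mean of `temp` within each labelled zone (0 = background),
    mirroring a GIS zonal-statistics (Zonalmean) operation."""
    labels = np.unique(zones)
    return {int(k): float(temp[zones == k].mean()) for k in labels if k != 0}

def hot_modules(temp, zones, limit=37.0):
    """Ids of modules whose mean temperature exceeds `limit` (deg C)."""
    return sorted(k for k, m in zonal_mean(temp, zones).items() if m > limit)
```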