• Title/Summary/Keyword: RGB camera

A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2000.11a / pp.207-217 / 2000
  • In this paper, we describe an image processing algorithm that recognizes the road lane and determines the spatial relationship between the AGV and other vehicles. We carried out AGV driving experiments with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The work consists of two parts. The first is an image preprocessing stage that measures the state of the lane and the vehicle; it extracts line information using an RGB ratio cutting algorithm, edge detection, and the Hough transform. The second determines the positions of other vehicles using image processing and a viewport. First, the 2-D image information from the vision sensor is converted into 3-D information using the angle and position of the CCD camera. Through these processes, once the driving conditions are known, namely the angle, the distance error, and the real positions of other vehicles, the reference steering angle can be calculated.
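The line-extraction step described in this abstract (edge detection followed by a Hough transform) can be illustrated with a minimal voting sketch; the tiny point set and bin sizes below are illustrative, not the paper's implementation:

```python
import math

def hough_line_peak(edge_points, n_theta=180):
    """Vote in (rho, theta) space and return the strongest line bin."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, t, votes  # t is the angle bin in degrees since n_theta = 180

# Edge pixels lying on the line y = x, e.g. from an RGB-ratio-thresholded frame
points = [(i, i) for i in range(10)]
rho, theta_deg, votes = hough_line_peak(points)
```

All ten points vote into the same (rho = 0, theta ≈ 135°) bin, which is how the peak identifies the lane line.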

Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar

  • Jang, In-Su;Kim, Jae Woo;You, Ju-Yeon;Kim, Jin Seo
    • ETRI Journal / v.35 no.6 / pp.969-979 / 2013
  • Various simulation applications for hair, clothing, and makeup of a 3D avatar can provide more useful information to users before they select a hairstyle, clothes, or cosmetics. To enhance their reality, the shapes, textures, and colors of the avatars should be similar to those found in the real world. For a more realistic 3D avatar color reproduction, this paper proposes a spectrum-based color reproduction algorithm and color management process with respect to the implementation of the algorithm. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of the skin samples before and after applying the makeup. To implement the model for a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
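The final rendering step in this abstract (estimated spectrum converted into RGB data) can be sketched as a reflectance-to-XYZ-to-RGB pipeline; the five-band colour-matching tables below are made-up stand-ins for illustration, and only the XYZ-to-linear-sRGB matrix is a standard constant:

```python
# Toy spectrum-to-RGB pipeline: reflectance x illuminant x CMFs -> XYZ -> linear sRGB.
ILLUM = [1.0, 1.0, 1.0, 1.0, 1.0]       # flat illuminant over 5 bands (assumption)
CMF_X = [0.33, 0.05, 0.43, 1.00, 0.28]  # stand-in colour-matching values, not CIE data
CMF_Y = [0.04, 0.32, 0.99, 0.63, 0.11]
CMF_Z = [1.77, 0.27, 0.01, 0.00, 0.00]

M = [[ 3.2406, -1.5372, -0.4986],       # XYZ -> linear sRGB (D65), standard constant
     [-0.9689,  1.8758,  0.0415],
     [ 0.0557, -0.2040,  1.0570]]

def spectrum_to_rgb(reflectance):
    k = 1.0 / sum(i * y for i, y in zip(ILLUM, CMF_Y))  # normalise so white has Y = 1
    X = k * sum(r * i * x for r, i, x in zip(reflectance, ILLUM, CMF_X))
    Y = k * sum(r * i * y for r, i, y in zip(reflectance, ILLUM, CMF_Y))
    Z = k * sum(r * i * z for r, i, z in zip(reflectance, ILLUM, CMF_Z))
    # matrix multiply, then clip to the display gamut [0, 1]
    return tuple(min(1.0, max(0.0, m[0]*X + m[1]*Y + m[2]*Z)) for m in M)

rgb = spectrum_to_rgb([1.0] * 5)  # perfectly reflecting (white) sample
```

The clipping step stands in for the gamut mapping the paper performs before display characterization.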

Estimation trial for rice production by simulation model with unmanned air vehicle (UAV) in Sendai, Japan

  • Homma, Koki;Maki, Masayasu;Sasaki, Goshi;Kato, Mizuki
    • Proceedings of the Korean Society of Crop Science Conference / 2017.06a / pp.46-46 / 2017
  • We developed a rice simulation model for remote sensing (SIMRIW-RS, Homma et al., 2007) to evaluate rice production and management on a regional scale. Here, we report a trial application to estimating rice production in farmers' fields in Sendai, Japan. The remote-sensing data for the application were obtained periodically with multispectral cameras (RGB + NIR and RedEdge) mounted on an unmanned air vehicle (UAV). The airborne images had a resolution of 8 cm, attained by flying at an altitude of 115 m. The remote-sensing data corresponded relatively well with the leaf area index (LAI) of rice and its spatial and temporal variation, although the correspondence contained some errors due to locational inaccuracy. Calibrating the simulation model on the first two remote-sensing datasets (obtained around one month after transplanting and at panicle initiation) predicted well the rice growth evaluated by the third dataset. The parameters obtained through the calibration may reflect soil fertility and will be utilized for nutritional management. Although the estimation accuracy still needs to be improved, rice yield was also estimated well. These results suggest that further data accumulation and more accurate locational identification are needed to improve the estimation accuracy.
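Although the abstract does not give formulas, RGB + NIR imagery of this kind is typically summarized with a vegetation index such as NDVI before being related to LAI; a minimal sketch (the 0.3 threshold is a common rule of thumb, not a value from the paper):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def looks_vegetated(nir, red, threshold=0.3):
    # Common rule-of-thumb cutoff; the threshold is illustrative, not from the paper.
    return ndvi(nir, red) >= threshold

value = ndvi(0.5, 0.1)  # healthy canopy: strong NIR, low red reflectance
```

Per-pixel index maps like this are what get compared against ground-measured LAI when calibrating a crop model.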

Design of Interactive Teleprompter (인터렉티브 텔레프롬프터의 설계)

  • Park, Yuni;Park, Taejung
    • The Journal of the Korea Contents Association / v.16 no.3 / pp.43-51 / 2016
  • This paper presents the concept of an "interactive teleprompter", which provides the user with interaction with oneself or with other users for live television broadcasts or smart mirrors. In such interactive applications, eye contact between the user and the regenerated image, or between the user and other persons, is important in handling psychological processes and non-verbal communication. Unfortunately, it is not straightforward to address the eye contact issue with the conventional combination of a normal display and a video camera. To address this problem, we propose an "interactive" teleprompter enhanced from conventional teleprompter devices. Our interactive teleprompter can recognize the user's gestures by means of an infrared (IR) depth sensor. This paper also presents test results for a beam splitter, which plays a critical role in the teleprompter and is designed to handle both visible light for the RGB camera and IR for the depth sensor effectively.

Comparison with PMD depth camera and Kinect camera for Multi-View contents (다시점 콘텐츠 생성을 위한 PMD 카메라 및 Kinect 비교)

  • Song, Hyok;Choi, Byeong-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.11a / pp.240-241 / 2011
  • Viewing natural 3D realistic video requires images from many viewpoints. Display technology has advanced from earlier stereoscopic devices to recent displays with a greatly increased number of viewpoints, and various techniques have accordingly been developed to generate multi-view content. Methods based on ToF cameras and on infrared patterns are the main approaches to multi-view content generation, and attempts are being made to generate multi-view content with them. Representative ToF cameras are made by PMD and SwissRanger, while Microsoft's Kinect is the representative infrared-pattern device. By comparing techniques using these products, we evaluated the multi-view content generation results and classified the relative advantages and disadvantages. PMD's ToF camera uses two or more light sources, so the hole regions in the extracted depth are small, but the resolution of the ToF image is very low, requiring an additional image processing algorithm to generate high-quality content. In contrast, Microsoft's Kinect has a relatively high depth-image resolution, which reduces the complexity of the image processing algorithm, but because the depth camera and the RGB camera are spatially separated, a compensation algorithm is required, and the image quality after multi-view conversion was found to be relatively inferior.
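The depth-to-RGB compensation mentioned at the end (needed because the Kinect's depth and RGB cameras are spatially separated) is commonly done by back-projecting each depth pixel to 3-D and re-projecting it with the colour camera's intrinsics. A simplified pinhole sketch, assuming a pure horizontal baseline and illustrative intrinsic values:

```python
def register_depth_pixel(u, v, z,
                         fx_d, fy_d, cx_d, cy_d,   # depth-camera intrinsics
                         tx,                       # depth-to-colour translation (m)
                         fx_c, fy_c, cx_c, cy_c):  # colour-camera intrinsics
    """Map a depth pixel (u, v) at range z metres into colour-image coordinates."""
    # Back-project to a 3-D point in the depth-camera frame.
    X = (u - cx_d) * z / fx_d
    Y = (v - cy_d) * z / fy_d
    # Move into the colour-camera frame (pure horizontal baseline assumed).
    X += tx
    # Re-project with the colour-camera intrinsics.
    return fx_c * X / z + cx_c, fy_c * Y / z + cy_c

# A pixel at the depth camera's principal point, 1 m away, with a 25 mm baseline:
uc, vc = register_depth_pixel(320, 240, 1.0,
                              500, 500, 320, 240,
                              0.025,
                              500, 500, 320, 240)
```

Real devices also need the rotation between the two cameras and lens distortion terms; this sketch keeps only the translation to show why the offset shrinks with distance.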

Autofocus Tracking System Based on Digital Holographic Microscopy and Electrically Tunable Lens

  • Kim, Ju Wan;Lee, Byeong Ha
    • Current Optics and Photonics / v.3 no.1 / pp.27-32 / 2019
  • We present an autofocus tracking system implemented by the digital refocusing of digital holographic microscopy (DHM) and the tunability of an electrically tunable lens (ETL). Once the defocusing distance of an image is calculated with the DHM, then the focal plane of the imaging system is optically tuned so that it always gives a well-focused image regardless of the object location. The accuracy of the focus is evaluated by calculating the contrast of refocused images. The DHM is performed in an off-axis holographic configuration, and the ETL performs the focal plane tuning. With this proposed system, we can easily track down the object drifting along the depth direction without using any physical scanning. In addition, the proposed system can simultaneously obtain the digital hologram and the optical image by using the RGB channels of a color camera. In our experiment, the digital hologram is obtained by using the red channel and the optical image is obtained by the blue channel of the same camera at the same time. This technique is expected to find a good application in the long-term imaging of various floating cells.
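The contrast-based focus evaluation mentioned in this abstract can be illustrated with a simple variance score over a stack of refocused images; this is a generic sketch, not the paper's metric:

```python
def contrast(image):
    """Intensity variance as a simple sharpness/contrast score."""
    vals = [p for row in image for p in row]
    mean = sum(vals) / len(vals)
    return sum((p - mean) ** 2 for p in vals) / len(vals)

def best_focus_index(refocus_stack):
    """Pick the digitally refocused image with the highest contrast."""
    return max(range(len(refocus_stack)), key=lambda i: contrast(refocus_stack[i]))

sharp   = [[0, 255], [255, 0]]      # high-contrast (in-focus) patch
blurred = [[128, 127], [128, 127]]  # low-contrast (defocused) patch
```

The index returned by `best_focus_index` corresponds to the defocus distance that the ETL would then be tuned to.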

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / v.8 no.1 / pp.207-212 / 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient lighting that supports people's mood rhythms in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, the indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features using advanced computer vision techniques, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) lighting devices and controls the illumination of the lighting devices relevant to the target occupants. The experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-featured open-source controller on the network, along with a networked video surveillance solution. The experimental results were verified with a custom automatic lighting control demo application, integrated with OpenCV-based computer vision methods, which predicts the human-centric features and automatically controls the lighting illumination level and colors based on the estimated features. The results obtained from the demo system are analyzed and used for the real-time development of a lighting system control strategy.
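One piece of such a pipeline, occupancy detection driving the lighting level, can be sketched with naive frame differencing; the thresholds and the 500 lx target are illustrative assumptions, not values from the paper:

```python
def occupied(prev_frame, curr_frame, pixel_thresh=30, count_thresh=5):
    """Crude occupancy test: enough pixels changed between two greyscale frames."""
    changed = sum(1 for a, b in zip(prev_frame, curr_frame)
                  if abs(a - b) > pixel_thresh)
    return changed >= count_thresh

def dimming_level(is_occupied, daylight_lux, target_lux=500):
    """Top up daylight to the target illuminance only when the space is occupied."""
    return max(0, target_lux - daylight_lux) if is_occupied else 0

empty  = [10] * 100
person = [10] * 90 + [200] * 10  # ten pixels changed by a moving occupant
```

A production system would use a trained detector rather than differencing, but the control rule, daylight harvesting gated by occupancy, has the same shape.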

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.4 / pp.295-304 / 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images, acquired with the RGB camera mounted on the UAV on September 2, 2022, were split into 512×512 patches for YOLOv5 analysis. Sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 and used to train, validate, and test the YOLOv5 model, respectively. When training with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845. With YOLOv5m, which has more parameters, sorghum panicles were detected with mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s (4 hours 35 minutes) trains faster than YOLOv5m (5 hours 15 minutes). Therefore, in terms of time cost, the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model was presented.
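The splitting of the high-resolution UAV imagery into 512×512 patches can be sketched as a simple tiling of crop origins; this is a generic sketch, not the authors' preprocessing code:

```python
def tile_origins(width, height, tile=512):
    """Top-left corners of the tile x tile crops covering an image.
    Tiles at the right/bottom edges would simply be clipped or padded."""
    return [(x, y) for y in range(0, height, tile) for x in range(0, width, tile)]

origins = tile_origins(1024, 1536)  # a 2 x 3 grid of 512 x 512 patches
```

Each origin then defines one crop fed to YOLOv5, and detections are mapped back to full-image coordinates by adding the origin offset.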

Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies / v.7 no.4 / pp.143-154 / 2004
  • Researchers seeking geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment restrict where and when researchers can collect their data. To allow better flexibility, we developed a compact, multi-spectral automatic aerial photographic system (PKNU 2). This system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images (3032×2008 pixels). Visible and infrared images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data were not very good. Moreover, the system had the drawback that the 60% stereoscopic overlap could not be achieved because each image took 12 s to store, even though the PKNU 2 system could take high-capacity photographs. Therefore, we have been developing an advanced PKNU 2 (PKNU 3) that consists of a color-infrared spectral camera that photographs the visible and near-infrared bands with a single sensor at once, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board to compress and transfer data to the computer in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before the aerial photographs were taken in order to obtain more valuable data. Corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
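The radial lens-distortion correction mentioned at the end is commonly modelled with a Brown-style polynomial; below is a sketch that inverts a two-coefficient model by fixed-point iteration (the coefficients here are illustrative, not the PKNU 3 calibration values):

```python
def distort_point(xu, yu, k1, k2):
    """Apply the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4).
    Coordinates are normalized (centred on the principal point, divided by focal length)."""
    r2 = xu * xu + yu * yu
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * scale, yu * scale

def undistort_point(xd, yd, k1, k2, iterations=10):
    """Invert the radial model by fixed-point iteration on the undistorted point."""
    xu, yu = xd, yd
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu
```

Round-tripping a point through `distort_point` and `undistort_point` recovers it to high precision for mild distortion, which is the regime calibrated aerial cameras operate in.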

  • PDF

Pose Recognition of Soccer Players for Three Dimensional Animation (방송 축구 영상으로부터 3차원 애니메이션 변환을 위한 축구 선수 동작 인식)

  • 장원철;남시욱;김재희
    • Proceedings of the IEEK Conference / 2000.11d / pp.33-36 / 2000
  • To create a more realistic soccer game derived from TV images, we are developing an image synthesis system that generates 3D image sequences from TV images. We propose methods for recognizing the team and the pose of players in TV images, covering camera calibration, team recognition, and pose recognition. To find the location of a player on the field, a field model is constructed and the player's field position is obtained by a transformation based on four feature points. To recognize a player's team, we compute the RGB mean values and standard deviations of the player region in the TV images. Finally, to recognize a player's pose, the system computes the velocity and the player's height/width ratio. Experimental results are included to evaluate the performance of the team and pose recognition.
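The team and pose cues described here (RGB mean of the player region, plus the bounding-box height/width ratio) can be sketched as follows; the reference team colours and the 2.0 ratio threshold are illustrative assumptions, not the paper's values:

```python
def channel_means(pixels):
    """Per-channel mean of a list of (R, G, B) pixels from a player region."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify_team(pixels, team_colors):
    """Assign the team whose reference colour is nearest to the region's mean."""
    m = channel_means(pixels)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(team_colors, key=lambda team: sq_dist(m, team_colors[team]))

def pose_label(height, width, standing_ratio=2.0):
    """Very coarse pose cue from the bounding-box height/width ratio."""
    return "standing" if height / width >= standing_ratio else "down"

teams = {"red": (200, 30, 30), "blue": (30, 30, 200)}
```

A nearest-mean colour classifier like this is only reliable when lighting is controlled; the paper's use of standard deviations alongside means is one way to make the match more robust.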
