• Title/Summary/Keyword: RGB camera


3D Image Processing for Recognition and Size Estimation of the Fruit of Plum(Japanese Apricot) (3D 영상을 활용한 매실 인식 및 크기 추정)

  • Jang, Eun-Chae;Park, Seong-Jin;Park, Woo-Jun;Bae, Yeonghwan;Kim, Hyuck-Joo
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.130-139 / 2021
  • In this study, the size of the fruit of the Japanese apricot (plum) was estimated through a plum recognition and size estimation program using 3D images, in order to control Eurytoma maslovskii, the pest that causes the most damage to plums, in a timely manner. In 2018, night shooting was carried out using a Kinect 2.0 camera; for night shooting in 2019, a RealSense Depth Camera D415 was used. Based on the acquired images, a plum recognition and size estimation program consisting of four stages (image preprocessing, sizeable-plum extraction, RGB and depth image matching, and plum size estimation) was implemented in MATLAB R2018a. Running the program on 10 images produced an average plum recognition rate of 61.9%, an average recognition error rate of 0.5%, and an average size measurement error rate of 3.6%. Continued development of these plum recognition and size estimation programs is expected to enable accurate fruit size monitoring in the future and the development of timely control systems for Eurytoma maslovskii.
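The size-estimation stage of the four-stage pipeline above can be sketched with the pinhole camera model. This is a minimal Python stand-in for the paper's MATLAB implementation; the function names, the use of a median over the fruit region, and the numeric values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def mean_region_depth(depth_map, mask):
    """Median depth (mm) over a detected fruit region; robust to holes.

    Depth cameras report 0 for invalid pixels, so those are excluded.
    """
    vals = depth_map[mask]
    vals = vals[vals > 0]
    return float(np.median(vals))

def estimate_diameter_mm(pixel_diameter, depth_mm, focal_length_px):
    """Back-project a fruit's pixel diameter to millimetres.

    Under a pinhole model an object of width W at distance Z projects to
    w = f * W / Z pixels, so W = w * Z / f.
    """
    return pixel_diameter * depth_mm / focal_length_px
```

For example, a fruit spanning 30 px at 500 mm with a 600 px focal length back-projects to a 25 mm diameter.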

Development of a Reliable Real-time 3D Reconstruction System for Tiny Defects on Steel Surfaces (금속 표면 미세 결함에 대한 신뢰성 있는 실시간 3차원 형상 추출 시스템 개발)

  • Jang, Yu Jin;Lee, Joo Seob
    • Journal of Institute of Control, Robotics and Systems / v.19 no.12 / pp.1061-1066 / 2013
  • In the steel industry, the detection of tiny defects on steel surfaces, including their 3D characteristics, is very important from the point of view of quality control. A multi-spectral photometric stereo method is an attractive scheme because the shape of a defect can be obtained from images acquired at the same time with a multi-channel camera. Moreover, the calculation time required by this scheme can be greatly reduced for real-time application with the aid of a GPU (Graphics Processing Unit). Although more reliable shape reconstruction of defects becomes possible as the number of available images increases, it is not an easy task to construct a camera system with more than 3 channels in the visible light range. In this paper, a new 6-channel camera system, which can distinguish the vertically and horizontally linearly polarized light of RGB light sources, was developed by adopting two 3-CCD cameras and two polarizing lenses, based on the fact that polarization is preserved on the steel surface. The photometric stereo scheme with 6 images was accelerated on a GPU, and the performance of the proposed system was validated through experiments.
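The core of photometric stereo with 6 images is a per-pixel least-squares solve: stacking the light directions into a matrix L, the observed intensities satisfy I = L (albedo * n), so the scaled normal is recovered with a pseudo-inverse. A minimal numpy sketch (the GPU acceleration and the polarization-based channel separation from the paper are not shown; all names are illustrative):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel surface normals and albedo.

    intensities: (k, h, w) images, one per light; light_dirs: (k, 3) unit
    vectors, k >= 3. Solves L @ g = I in the least-squares sense, where
    g = albedo * normal.
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                       # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)              # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With 6 lights the system is overdetermined, which is what makes the extra channels improve reliability: noise in any one channel is averaged out by the least-squares fit.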

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive for its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. This could be a weak point of our system; instead, we exploit the gap through a shift in approach. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. The upsampled CNN depth and the stereo camera depth are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method suited to the embedded system.
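The CNN upsampler and SGM fusion themselves are beyond a short sketch, but the resolution gap and the fusion step can be illustrated with a naive baseline: nearest-neighbour upsampling of the 64 x 32 ToF map and a validity-weighted merge with the stereo depth. This is a hedged stand-in, not the paper's method; function names and the averaging rule are assumptions.

```python
import numpy as np

def upsample_depth(depth, factor):
    """Nearest-neighbour upsampling of a low-resolution ToF depth map."""
    return np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)

def fuse_depths(tof, stereo):
    """Merge two depth maps of equal size (0 marks an invalid pixel).

    Where both sensors agree a pixel is valid, average them; otherwise
    take whichever measurement exists.
    """
    both = (tof > 0) & (stereo > 0)
    return np.where(both, (tof + stereo) / 2.0,
                    np.where(tof > 0, tof, stereo))
```

A learned upsampler replaces the `np.repeat` step; the point of the CNN in the paper is to fill in structure that nearest-neighbour replication cannot recover.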

Physical Function Monitoring Systems for Community-Dwelling Elderly Living Alone: A Comprehensive Review

  • Jo, Sungbae;Song, Changho
    • Physical Therapy Rehabilitation Science / v.11 no.1 / pp.49-57 / 2022
  • Objective: This study aims to conduct a comprehensive review of monitoring systems for monitoring and managing the physical function of community-dwelling elderly living alone, and to suggest future directions for unobtrusive monitoring. Design: Literature review. Methods: The importance of health-related monitoring has been emphasized due to the aging population and the novel coronavirus (COVID-19) outbreak. As the population ages, and because of changes in culture, the number of single-person households among the elderly is expected to continue to increase. Elders are staying home longer, and their physical function may decline rapidly, which can be an obstacle to successful aging. Therefore, systematic elderly management must be considered. Results: Technologies frequently used to monitor elders at home included the red, green, blue (RGB) camera, accelerometer, passive infrared (PIR) sensor, wearable devices, and depth camera. Of these, considering privacy concerns and ease of use for elders, the depth camera may be the technology best suited for adoption at home to unobtrusively monitor the physical function of elderly living alone. The depth camera has been used to evaluate physical function during rehabilitation and has proven its efficiency. Conclusions: Unobtrusive physical monitoring systems should therefore be studied and developed in the future to monitor the physical function of community-dwelling elderly living alone.

Monitoring canopy phenology in a deciduous broadleaf forest using the Phenological Eyes Network (PEN)

  • Choi, Jeong-Pil;Kang, Sin-Kyu;Choi, Gwang-Yong;Nasahara, Kenlo Nishda;Motohka, Takeshi;Lim, Jong-Hwan
    • Journal of Ecology and Environment / v.34 no.2 / pp.149-156 / 2011
  • Phenological variables derived from remote sensing are useful in determining the seasonal cycles of ecosystems in a changing climate. Satellite remote sensing imagery is useful for spatially continuous monitoring of vegetation phenology across broad regions; however, its applications are substantially constrained by atmospheric disturbances such as clouds, dust, and aerosols. In contrast, a tower-based ground remote sensing approach at the canopy level can provide continuous information on canopy phenology at finer spatial and temporal scales, regardless of atmospheric conditions. In this study, a tower-based ground remote sensing system, called the "Phenological Eyes Network (PEN)", installed at the Gwangneung Deciduous KoFlux (GDK) flux tower site in Korea was introduced, and daily phenological progressions at the canopy level were assessed using ratios of red, green, and blue (RGB) spectral reflectances obtained by the PEN system. The PEN system at the GDK site consists of an automatic-capturing digital fisheye camera and a hemispherical spectroradiometer, and monitors stand canopy phenology on an hourly basis. RGB data analyses conducted between late March and early December 2009 revealed that the 2G_RB (i.e., 2G - R - B) index was lower than the G/R (i.e., G divided by R) index during the off-growing season, owing to the effects of surface reflectance, including soil and snow effects. Comparisons between the daily PEN-obtained RGB ratios and daily moderate-resolution imaging spectroradiometer (MODIS)-derived vegetation indices demonstrate that ground remote sensing data, including the PEN data, can help to improve cloud-contaminated satellite remote sensing imagery.
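The two greenness indices compared in the abstract are simple arithmetic on the RGB channels, so they can be stated directly. A small sketch (function name and the use of channel means as inputs are illustrative assumptions):

```python
import numpy as np

def canopy_indices(r, g, b):
    """Compute the two greenness indices named in the study:
    2G_RB = 2G - R - B (excess green) and the simple G/R ratio.

    Inputs may be scalars (e.g. per-image channel means) or arrays.
    """
    r, g, b = (np.asarray(x, float) for x in (r, g, b))
    return 2 * g - r - b, g / r
```

For per-image channel means of R = 100, G = 150, B = 80, this gives 2G_RB = 120 and G/R = 1.5. Because 2G_RB subtracts the red and blue signal, it is more sensitive to bright non-vegetation surfaces such as snow and soil, consistent with the off-growing-season behaviour reported above.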

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.686-689 / 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D pose of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and a depth-cutting algorithm. Second, a convolutional neural network (CNN) classifier is used to distinguish the right hand from the left; it consists of 3 convolution layers and 2 fully connected layers and takes the extracted depth image as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted left- and right-hand depth images. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguished the right and left hands with 96.9% accuracy, and the CNN regressor estimated the 3D hand joints with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of application areas, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
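The first stage, depth cutting, amounts to keeping only the pixels inside a working depth band in front of the camera. A minimal sketch of that step (the band limits and function names are illustrative, not from the paper; the skin-detection half of the stage is omitted):

```python
import numpy as np

def depth_cut(depth, near_mm, far_mm):
    """Boolean mask of pixels whose depth lies in the hands' working range."""
    return (depth >= near_mm) & (depth <= far_mm)

def extract_hand(depth, near_mm=300, far_mm=800):
    """Zero out everything outside the depth band, keeping candidate
    hand pixels for the CNN classifier/regressor stages."""
    mask = depth_cut(depth, near_mm, far_mm)
    return np.where(mask, depth, 0), mask
```

The masked depth image, not the RGB image, is what feeds the CNN classifier and regressor in the pipeline above.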

Application of spectral image - Present and Promise -

  • Miyake, Yoichi
    • Proceedings of the Korean Information Display Society Conference / 2009.10a / pp.1158-1159 / 2009
  • Tri-stimulus values of CIE-XYZ, and the RGB values obtained by photographic film, a CCD camera, or a scanner, depend on the spectral sensitivity of the imaging device and the spectral radiant distribution of the illumination. It is important to record and reproduce the reflectance spectra of the object for truly device-independent color reproduction and highly accurate recording of the scene. In this paper, a method to record the reflectance spectra of the object is introduced and its application to spectral endoscopes is presented.
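The dependence described above is visible in the tristimulus integral itself: X, Y, Z are sums of illuminant x reflectance x observer sensitivity over wavelength, so changing the illuminant or sensitivity changes the result even for a fixed object. A minimal sketch (the 5-sample spectra in the usage below are toy values, not real CIE data):

```python
import numpy as np

def tristimulus(reflectance, illuminant, cmfs):
    """CIE tristimulus values from a sampled reflectance spectrum.

    X = k * sum(S * R * xbar), and likewise for Y, Z, with k chosen so a
    perfect white (R = 1 at every wavelength) gives Y = 100. All inputs
    must be sampled on the same wavelength grid; cmfs is a (3, n) array
    of the xbar, ybar, zbar colour-matching functions.
    """
    reflectance = np.asarray(reflectance, float)
    illuminant = np.asarray(illuminant, float)
    cmfs = np.asarray(cmfs, float)
    k = 100.0 / np.sum(illuminant * cmfs[1])
    return tuple(k * np.sum(illuminant * reflectance * bar) for bar in cmfs)
```

Recording the reflectance spectrum R directly, as the paper advocates, lets X, Y, Z be recomputed for any illuminant after the fact.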


Implementation of camera synchronization for multi-view capturing system (다시점 촬영 시스템을 위한 카메라 동기화 구현)

  • Park, Jung Tak;Park, Byung Seo;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.268-269 / 2021
  • This paper proposes a camera synchronization system for a multi-view capturing setup built with Azure Kinect devices, which can capture both RGB and depth images. The proposed system uses 8 Azure Kinect cameras, each connected by a 3.5-mm audio cable that carries the external synchronization signal. To minimize the memory bottleneck that occurs when saving images, the operation of the capturing system is divided into 16 buffers and run with parallel computing. The difference with and without synchronization is then compared using the device timestamps as the reference.
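The final comparison step, checking synchronization via device timestamps, reduces to measuring the spread of the timestamps captured for one frame set across all 8 cameras. A small illustrative sketch (the tolerance value and function names are assumptions, not from the paper):

```python
def max_timestamp_skew(timestamps_us):
    """Worst-case spread between the device timestamps of one frame set."""
    return max(timestamps_us) - min(timestamps_us)

def is_synchronized(timestamps_us, tolerance_us=100):
    """True if every camera in the set captured within the tolerance."""
    return max_timestamp_skew(timestamps_us) <= tolerance_us
```

With an external hardware sync signal the skew should stay near zero; without it, free-running cameras drift apart and the skew grows over a capture session.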


A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Journal of Korean Port Research / v.14 no.3 / pp.303-312 / 2000
  • In this paper, we describe an image processing algorithm that recognizes road lanes. The algorithm also recognizes the relationship between the AGV and other vehicles. We carried out AGV driving tests with a color CCD camera mounted on top of the vehicle to acquire the digital signal. This paper is composed of two parts. One is an image preprocessing part that measures the condition of the lane and the vehicle; it extracts line information using an RGB ratio cutting algorithm, edge detection, and the Hough transform. The other obtains the situation of other vehicles using image processing and a viewport. First, the 2-dimensional image information derived from the vision sensor is interpreted as 3-dimensional information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions (lane angle, distance error, and the real positions of other vehicles), the reference steering angle can be calculated.
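The RGB ratio cutting step can be illustrated as a per-pixel test on each channel's share of R+G+B; normalising by the total makes the cut robust to brightness changes on an outdoor road, which is the motivation for using ratios rather than raw thresholds. The ratio bands below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def rgb_ratio_mask(img, r_band=(0.30, 0.45), g_band=(0.30, 0.45)):
    """Keep pixels whose R and G shares of (R+G+B) fall inside the bands.

    img: (h, w, 3) uint8 RGB image. Near-neutral lane markings have all
    three shares close to 1/3, so they pass; strongly coloured pixels
    (e.g. pure red) have one share near 1 and are cut.
    """
    img = img.astype(float)
    total = img.sum(axis=2) + 1e-9     # avoid division by zero
    r = img[..., 0] / total
    g = img[..., 1] / total
    return ((r >= r_band[0]) & (r <= r_band[1]) &
            (g >= g_band[0]) & (g <= g_band[1]))
```

The resulting mask is what edge detection and the Hough transform would then operate on to extract line parameters.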


A Study on the License Plate Recognition Using Color Information (Color Information을 이용한 자동차 번호판 영역 추출에 관한 연구)

  • 강승규;고형화
    • Proceedings of the IEEK Conference / 2001.09a / pp.447-450 / 2001
  • A license plate recognition system can be divided into roughly three parts, the first of which is extracting the plate region from the image acquired by the camera. In this paper, exploiting the fact that the backgrounds of both private and commercial plates differ from their surroundings, we use color information for plate region extraction, unlike conventional methods. By using color information instead of edge detection or gray-level changes, the method showed strong extraction performance even on images with bent plates, images damaged by noise, and low-contrast images. The RGB image acquired from the camera is converted to the YCbCr format, the plate region is detected using the Cb and Cr components, and a verification step confirms that the extracted region is actually a license plate. Experiments showed strong performance on daytime and nighttime images, as well as on damaged and noisy ones.
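The conversion step above separates luminance from chroma, which is why thresholding Cb and Cr is robust to contrast and lighting changes: plate background colour lives almost entirely in the chroma plane. A sketch using the standard ITU-R BT.601 full-range conversion (the chroma gate ranges are illustrative placeholders, since the real ranges depend on the plate colours used):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion (per pixel)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def plate_candidate(cb, cr, cb_band=(110, 150), cr_band=(110, 150)):
    """Illustrative chroma gate: True if the pixel's Cb and Cr both fall
    in the expected band for the plate background colour."""
    return cb_band[0] <= cb <= cb_band[1] and cr_band[0] <= cr <= cr_band[1]
```

Note that a neutral gray pixel maps to Cb = Cr = 128 regardless of its brightness, which is exactly the contrast-invariance the abstract relies on.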
