• Title/Summary/Keyword: RGB sensor


A Fundamental Study on Detection of Weeds in Paddy Field using Spectrophotometric Analysis (분광특성 분석에 의한 논 잡초 검출의 기초연구)

  • 서규현;서상룡;성제훈
    • Journal of Biosystems Engineering
    • /
    • v.27 no.2
    • /
    • pp.133-142
    • /
    • 2002
  • This is a fundamental study toward developing a sensor that detects weeds in paddy fields using machine vision with a spectrophotometric technique, so that the sensor can be used to spread herbicide selectively. A set of spectral reflectance data was collected from dry and wet soil and from leaves of rice and six kinds of weed, to select wavelengths suited to classifying soil, rice, and weeds. The stepwise variable selection method of discriminant analysis was applied to the data set, and wavelengths of 680 and 802 nm were selected to distinguish plants (rice and weeds) from dry and wet soil, respectively. Wavelengths of 580 and 680 nm were selected by the same method to classify rice and weeds. The validity of the wavelengths for distinguishing plants from soil was tested by cross-validation with the built discriminant function, which classified all soil and plant samples correctly without any failure. The validity of the wavelengths for classifying rice and weeds was tested in the same way; 98% of rice and 83% of weeds were classified correctly. The feasibility of a CCD color camera for detecting weeds in paddy fields was then tested on the spectral reflectance data with the same statistical method, using the central wavelengths of the camera's RGB frames as the effective wavelengths for distinguishing plants from soil and weeds from plants. In this trial, 100% and 94% of plants in dry and wet soil, respectively, were classified correctly by the central wavelength of the R frame alone, and 95% of rice and 85% of weeds were classified correctly by the central wavelengths of the RGB frames. It was concluded that a CCD color camera has good potential for detecting weeds in paddy fields.
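
As a rough illustration of the discriminant step described above, the sketch below classifies a two-band reflectance sample (680 nm and 802 nm) with a nearest-class-mean rule, a minimal stand-in for the paper's stepwise discriminant function; the reflectance values are invented for illustration, not the paper's data.

```python
import numpy as np

# Illustrative two-band reflectance samples (not the paper's data):
# each row is [reflectance @ 680 nm, reflectance @ 802 nm].
soil = np.array([[0.25, 0.30], [0.28, 0.33], [0.22, 0.27]])
plant = np.array([[0.05, 0.55], [0.06, 0.60], [0.04, 0.50]])

def nearest_mean_classifier(samples_by_class):
    """Assign a sample to the class with the nearest mean spectrum,
    a simple stand-in for the paper's discriminant function."""
    means = {name: s.mean(axis=0) for name, s in samples_by_class.items()}
    def classify(x):
        return min(means, key=lambda n: np.linalg.norm(x - means[n]))
    return classify

classify = nearest_mean_classifier({"soil": soil, "plant": plant})
print(classify(np.array([0.05, 0.58])))  # → plant
```

Plants reflect strongly in the near-infrared (802 nm) and weakly in the red (680 nm), so even this crude rule separates the two classes cleanly on such data.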

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.541-548
    • /
    • 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced primarily from the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for the positional information of the pixels, or can be integrated with ground control points (GCPs) measured directly on the ground. However, because establishing GCPs prior to georeferencing is costly, and some areas are inaccessible, it is often necessary to derive positions without such reference information. This study provides a means to improve the georeferencing performance of multispectral camera images without ground reference points, using instead a high-resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone camera are first estimated through bundle adjustment and compared with reference values derived from the GCPs. The results showed that incorporating the images from the high-resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of direction estimation from a ground point to the sensor showed that including the RGB images can reduce the angle errors by an order of magnitude.

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.9
    • /
    • pp.363-370
    • /
    • 2022
  • Grasping a target object among clutter without collision requires machine intelligence: environment recognition, target and obstacle recognition, collision-free path planning, and the object-grasping intelligence of robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms were implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object-grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas our system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas with path planning it showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep RL.

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.598-605
    • /
    • 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. When two-dimensional data was used, the recognition rate for behavior in three-dimensional space was low, and the other methods were hindered by complicated equipment configurations and expensive additional hardware. In this paper, we propose a method of recognizing human behavior using only CCTV RGB images, without depth information or additional equipment. First, a skeleton extraction algorithm is applied to extract the points of joints and body parts. We then apply transformation equations to construct vectors, including displacement vectors and relational vectors, and train an RNN model on the continuous vector data. Applying the learned model to various data sets and checking the behavior recognition accuracy verified that performance similar to existing algorithms using 3D information can be achieved with 2D information alone.
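
The displacement and relational vectors mentioned above can be sketched as follows; the joint coordinates, the choice of reference joint, and the feature layout are all assumptions for illustration. The resulting per-frame feature vectors would form the sequence fed to the RNN.

```python
import numpy as np

# Hypothetical 2D joint coordinates: frames x joints x (x, y).
frames = np.array([
    [[0.0, 0.0], [0.1, 0.5], [0.2, 1.0]],   # frame t
    [[0.1, 0.0], [0.2, 0.6], [0.3, 1.1]],   # frame t+1
])

def displacement_vectors(frames):
    """Per-joint motion between consecutive frames."""
    return np.diff(frames, axis=0)

def relational_vectors(frame, ref_joint=0):
    """Each joint's position relative to a reference joint (e.g. the hip)."""
    return frame - frame[ref_joint]

disp = displacement_vectors(frames)   # shape (1, 3, 2)
rel = relational_vectors(frames[1])   # shape (3, 2)
# Concatenated per frame, these form the sequence fed to the RNN.
features = np.concatenate([disp[0].ravel(), rel.ravel()])
print(features.shape)  # (12,)
```

Displacement captures how joints move over time, while relational vectors capture body configuration within a frame, which is why the two are combined.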

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation: extracting the 2D location of each joint from multiple images captured simultaneously from different viewpoints, fusing each joint's 2D locations, and estimating the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D joint locations, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its accuracy and practical feasibility. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
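
The fusion of per-view 2D joint detections into a 3D location can be sketched with weighted linear (DLT) triangulation, one simple way to realize the maximum-likelihood-style fusion described above; the camera matrices and confidence weights below are toy assumptions, not the paper's setup.

```python
import numpy as np

def triangulate(proj_mats, pts2d, weights):
    """Weighted linear (DLT) triangulation of one joint from several views.
    Each 2D detection contributes two rows of the linear system; the
    weights reflect detection confidence (higher = trusted more)."""
    rows = []
    for P, (u, v), w in zip(proj_mats, pts2d, weights):
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)           # null vector of A is the joint
    X = vt[-1]
    return X[:3] / X[3]                   # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy pinhole cameras looking along +z, with a baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
pt = triangulate([P1, P2],
                 [project(P1, X_true), project(P2, X_true)],
                 [1.0, 1.0])
print(np.round(pt, 3))  # recovers X_true
```

With noise-free observations the null space of the system is exact; with noisy detections the SVD solution is the least-squares (and, for Gaussian noise with these weights, maximum-likelihood-flavored) estimate.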


Properties of Photo Detector using SOI NMOSFET (SOI NMOSFET을 이용한 Photo Detector의 특성)

  • 김종준;정두연;이종호;오환술
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers
    • /
    • v.15 no.7
    • /
    • pp.583-590
    • /
    • 2002
  • In this paper, a new Silicon-on-Insulator (SOI)-based photodetector is proposed and its basic operating principle explained. The fabrication steps of the detector are compatible with those of conventional SOI CMOS technology. With the proposed structure, RGB (Red, Green, Blue), the three primary colors of light, can be realized without using any organic color filters. The characteristics of the SOI-based detector were shown to be better than those of a bulk-based detector. To examine the response to green (G) among RGB, SOI and bulk NMOSFETs were fabricated using 1.5 μm CMOS technology and characterized. We obtained optimum optical response characteristics at V_GS = 0.35 V in an NMOSFET with a threshold voltage of 0.72 V. The drain bias should be kept below about 1.5 V to avoid problems from the floating-body effect, since the body of the SOI NMOSFET is floating. The SOI and bulk NMOSFETs showed maximum drain currents at incident-light wavelengths around 550 nm and 750 nm, respectively. Therefore, the SOI detector is more suitable as a G color detector.

Image compression using K-mean clustering algorithm

  • Munshi, Amani;Alshehri, Asma;Alharbi, Bayan;AlGhamdi, Eman;Banajjar, Esraa;Albogami, Meznah;Alshanbari, Hanan S.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.275-280
    • /
    • 2021
  • With the development of communication networks, the exchange and transmission of information has developed rapidly. Millions of images are sent via social media every day, and wireless sensor networks are now used to capture images in many applications, such as traffic lights, roads, and malls. There is therefore a need to reduce the size of these images while maintaining an acceptable degree of quality. In this paper, we use Python to apply the K-means clustering algorithm to compress RGB images. PSNR, MSE, and SSIM are used to measure image quality after compression. Compression reduced the images to nearly half the size of the originals using k = 64. In the SSIM measure, the higher the k, the greater the similarity between the compressed and original images, which, together with the significant reduction in size, is a good indicator. Our proposed compression technique, powered by the K-means clustering algorithm, is useful for compressing images and reducing their size.
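
A minimal sketch of the palette-quantization idea: K-means clusters the pixels into k colors, so the image can be stored as k palette entries plus one small index per pixel. This uses a plain K-means with deterministic initialization on a tiny synthetic image to stay self-contained (the paper uses k = 64 on real images).

```python
import numpy as np

def kmeans_palette(pixels, k, iters=20):
    """Cluster RGB pixels into k palette colors with plain K-means.
    Returns the palette (centers) and a per-pixel palette index."""
    # Deterministic init: centers from evenly spaced pixels.
    centers = pixels[:: max(1, len(pixels) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute means.
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# Synthetic 4-color "image", flattened to a pixel list.
pixels = np.vstack([np.tile(c, (50, 1)) for c in
                    ([255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0])])
centers, labels = kmeans_palette(pixels, k=4)
recon = centers[labels]                 # reconstructed image
mse = float(np.mean((pixels - recon) ** 2))
print(mse)  # 0.0 — four centers fit four colors exactly
```

With k = 4 and exactly four distinct colors the reconstruction is lossless; on real photographs the reconstruction error (and hence PSNR/SSIM) trades off against k, as the paper reports.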

Development of a Single-Arm Robotic System for Unloading Boxes in Cargo Truck (간선화물의 상자 하차를 위한 외팔 로봇 시스템 개발)

  • Jung, Eui-Jung;Park, Sungho;Kang, Jin Kyu;Son, So Eun;Cho, Gun Rae;Lee, Youngho
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.4
    • /
    • pp.417-424
    • /
    • 2022
  • In this paper, the developed truck cargo unloading automation system is introduced, and the RGB-D sensor-based box-loading recognition method and unloading plan applied to this system are presented. First, the positions of the boxes in a truck must be recognized. To do this, we apply CNN-based YOLO, which can recognize objects in RGB images in real time. The normal vector of the center of each box is then obtained from the depth image to reduce misrecognition outside the boxes, and the inner wall of the truck is removed from the image. A method of classifying the boxes into layers according to distance, using their recognized depth information, is suggested. Given the coordinates of the boxes on the nearest layer, a method of generating the optimal path to take the boxes out fastest is introduced. In addition, kinematic analysis is performed to move the conveyor to the position of the box being taken out of the truck, and also to control the robot arm that removes the boxes. Finally, the effectiveness of the developed system and algorithm is demonstrated on a test bed.
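
The depth-based layer classification described above can be sketched as a simple one-dimensional grouping of box depths; the gap threshold and depth values here are assumed for illustration, not taken from the paper.

```python
def group_into_layers(depths, gap=0.3):
    """Group detected box depths (meters) into layers: boxes whose
    depths differ by less than `gap` belong to the same layer.
    The threshold is an assumed value."""
    layers = []
    for d in sorted(depths):
        if layers and d - layers[-1][-1] < gap:
            layers[-1].append(d)   # close enough: same layer
        else:
            layers.append([d])     # start a new, deeper layer
    return layers

# Depths of detected box centers (illustrative values).
layers = group_into_layers([1.2, 1.25, 1.22, 1.9, 1.95, 2.6])
print(len(layers))  # 3
print(layers[0])    # [1.2, 1.22, 1.25] — the nearest layer, unloaded first
```

Unloading then proceeds layer by layer, with path generation applied to the boxes of the nearest layer.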

Fabrication of High-Performance Colorimetric Fiber-Type Sensors for Hydrogen Sulfide Detection (황화수소 가스 감지를 위한 고성능 변색성 섬유형 센서의 제작 및 개발)

  • Jeong, Dong Hyuk;Maeng, Bohee;Lee, Junyeop;Cho, Sung Been;An, Hee Kyung;Jung, Daewoong
    • Journal of Sensor Science and Technology
    • /
    • v.31 no.3
    • /
    • pp.168-174
    • /
    • 2022
  • Hydrogen sulfide (H2S) gas is a high-risk gas that, depending on the exposure concentration, can cause suffocation or, in severe cases, death. Various studies to detect this gas are still in progress. In this study, we demonstrate a colorimetric sensor that detects H2S gas through a direct color change. The proposed nanofiber sensor, containing the dye lead(II) acetate, which changes color on reaction with H2S gas, is fabricated by electrospinning. The performance of the sensor is evaluated by measuring RGB changes, the ΔE value, and gas selectivity. It has a ΔE value of 5.75 × 10⁻³ ΔE/s·ppm, a sensitivity up to 1.4 times that of existing H2S color-change detection sensors, which results from the large surface area of the nanofibers. The selectivity for H2S gas is confirmed to be excellent, at almost 70%.
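
A sketch of the ΔE-style evaluation: the color difference is normalized by exposure time and concentration to give a per-second, per-ppm sensitivity. Note that this computes the difference in RGB space for brevity, whereas ΔE is conventionally defined in CIE Lab, and the colors, exposure time, and concentration are assumed values, not the paper's measurements.

```python
import math

def delta_e_rgb(c1, c2):
    """Euclidean color difference between two RGB triples — a
    simplification of ΔE, which is normally computed in CIE Lab."""
    return math.dist(c1, c2)

# Hypothetical sensor colors before and after H2S exposure.
before, after = (235, 230, 225), (180, 175, 170)
dE = delta_e_rgb(before, after)

# Sensitivity as ΔE per second per ppm, with assumed exposure values.
exposure_s, conc_ppm = 600, 20
sensitivity = dE / (exposure_s * conc_ppm)
print(round(dE, 2))  # 95.26
```

Normalizing by exposure conditions is what allows sensors tested under different durations and concentrations to be compared on one figure of merit.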

A Study on Integrated Fire Alarm System for Safe Urban Transit (안전한 도시철도를 위한 통합 화재 경보 시스템 구축의 연구)

  • Chang, Il-Sik;Ahn, Tae-Ki;Jeon, Ji-Hye;Cho, Byung-Mok;Park, Goo-Man
    • Proceedings of the KSR Conference
    • /
    • 2011.10a
    • /
    • pp.768-773
    • /
    • 2011
  • Today's urban transit system is an important public transportation service that saves passengers' time and provides safety. Many studies focus on rapid, protective responses that minimize losses when a dangerous situation occurs. In this paper, we propose an early fire detection and rapid response method for urban transit systems that combines automatic fire detection on video input with a sensor system. The fire detection method consists of two parts: spark detection and smoke detection. In spark detection, the RGB color of the input video is converted into HSV color, and the frame difference is obtained in the temporal direction. Regions with high R values are considered fire-region candidates, and a stepwise fire detection rule is applied to calculate their size. In the smoke detection stage, we use a smoke sensor network to secure the credibility of the spark detection. The proposed system can be implemented at low cost. In future work, we will improve the detection algorithm and the accuracy of sensor locations in the network.
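
The spark-candidate stage can be sketched as below: a pixel is a candidate if it is both strongly red and changing between frames. For brevity this operates directly on the R channel rather than converting to HSV as the paper does, and both thresholds are assumed values.

```python
import numpy as np

def spark_candidates(prev, curr, r_thresh=200, diff_thresh=30):
    """Mark pixels that are both fire-colored and changing between
    frames. `prev`/`curr` are HxWx3 uint8 RGB arrays; both thresholds
    are assumed values, not the paper's."""
    # Temporal frame difference: largest per-channel change at each pixel.
    motion = np.abs(curr.astype(int) - prev.astype(int)).max(axis=-1) > diff_thresh
    reddish = curr[..., 0] > r_thresh      # high R value
    return motion & reddish

prev = np.zeros((4, 4, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = (250, 80, 20)   # a flickering fire-colored pixel
mask = spark_candidates(prev, curr)
print(int(mask.sum()))  # 1 — only the flickering red pixel is flagged
```

The resulting candidate mask would then be passed to the stepwise size rule, with the smoke sensor network providing independent confirmation.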
