• Title/Summary/Keyword: Vision Based Sensor

425 results found

An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol; Choi, Sun-Wook; Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closure detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closure detection and localization, because vision sensors are inexpensive and admit a variety of approaches. In scenes consisting of repeated structures, such as corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method for recognizing locations in scenes with similar structures. We extract salient regions from images using a visual attention model and compute weights from the distinctive features in each salient region. This makes it possible to emphasize unique features in a scene and thus distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved recognition performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
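The core idea the abstract describes — weighting feature matches by the saliency of the regions they come from, so distinctive features dominate when comparing similar-looking corridors — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the descriptor layout and the product weighting are assumptions.

```python
import numpy as np

def weighted_similarity(desc_a, desc_b, saliency_a, saliency_b):
    """Compare two sets of feature descriptors, weighting each match
    by the saliency of the regions the features came from."""
    # Cosine similarity between every descriptor pair
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T
    # For each feature in image A, take its best match in image B
    best = sim.max(axis=1)
    idx = sim.argmax(axis=1)
    # Weight each match by the product of region saliencies,
    # so unique regions dominate the overall score
    w = saliency_a * saliency_b[idx]
    return float((w * best).sum() / w.sum())
```

With such a score, two corridor views sharing only repeated structure (doors, tiles) contribute little if those regions carry low saliency weights.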

Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image (3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정)

  • Jung, Tae-Ki; Song, Jong-Hwa; Im, Jun-Hyuck; Lee, Byung-Hyun; Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems / v.22 no.12 / pp.1061-1067 / 2016
  • An accurate vehicle position is essential for autonomous navigation. In downtown areas, however, GPS position errors occur due to multipath caused by tall buildings. In this paper, the GPS position error is corrected using a camera sensor and a highly accurate map built with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping (IPM) and matched against a map containing 3D-Lidar intensity values. Performance was compared with the traditional approach, which converts the map into a pinhole-camera image and matches it against the input image. As a result, the longitudinal error declined by 49% and the computational complexity by 90%.
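The inverse perspective mapping step this abstract relies on — re-projecting road pixels into metric top-view coordinates under a flat-ground assumption — can be sketched with a simple pinhole model. The camera height, pitch, and intrinsics below are hypothetical parameters, not values from the paper.

```python
import numpy as np

def ipm_pixel_to_ground(u, v, f, cx, cy, h, pitch):
    """Map an image pixel to road-plane coordinates (flat-ground
    assumption): cast the pixel ray through a pinhole camera mounted
    at height h and pitched down by `pitch`, intersect it with the road."""
    # Ray in camera coordinates (x right, y down, z forward)
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    # Rotate by the downward pitch so y points straight down to the road
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    d = R @ ray
    if d[1] <= 0:          # ray points at or above the horizon
        return None
    t = h / d[1]           # scale so the ray descends exactly h metres
    return (t * d[0], t * d[2])   # (lateral offset, forward distance)
```

Applying this to every road pixel yields the top-view image that is then matched against the Lidar intensity map.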

Development of monocular video deflectometer based on inclination sensors

  • Wang, Shuo; Zhang, Shuiqiang; Li, Xiaodong; Zou, Yu; Zhang, Dongsheng
    • Smart Structures and Systems / v.24 no.5 / pp.607-616 / 2019
  • The video deflectometer based on digital image correlation is a non-contact optical measurement method that has become a useful tool for characterizing the vertical deflections of large structures. In this study, a novel imaging model is established that considers the variation of pitch angle across the full image. The new model allows deflection measurement over a wide range of working distances with high accuracy. A monocular video deflectometer has accordingly been developed with an inclination sensor, which enables dynamic determination of the orientation and rotation of the camera's optical axis. This layout is more convenient than video deflectometers based on theodolites. Experiments are presented to show the accuracy of the new imaging model and the performance of the monocular video deflectometer in outdoor applications. Finally, the equipment was applied to real-time measurement of the vertical deflection of Yingwuzhou Yangtze River Bridge at a distance of hundreds of meters. The results show good agreement with the embedded GPS outputs.
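At its simplest, a DIC-based video deflectometer converts the vertical pixel displacement of a tracked target into a metric deflection via the pinhole scale factor. The paper's contribution is a refined model that additionally accounts for pitch-angle variation across the image, which this minimal sketch deliberately omits.

```python
def pixel_to_deflection(dv_pixels, distance_m, focal_px):
    """Basic pinhole scale: vertical deflection (metres) from the
    vertical pixel displacement of a target tracked by DIC at a
    known working distance, for a camera with focal length in pixels."""
    return dv_pixels * distance_m / focal_px
```

At hundreds of metres of working distance the scale factor grows accordingly, which is why the pitch-angle correction in the paper's full model matters for accuracy.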

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun; Kang, Jungyu; Min, Kyoung-Wook; Choi, Jungdan
    • ETRI Journal / v.43 no.4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique: light detection and ranging (LiDAR)-based direct odometry that uses a spherical range image (SRI), a projection of the three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was originally developed for vision-based methods, so fast execution can be expected; applying it to LiDAR data is difficult, however, because of the data's sparsity. To solve this problem, we propose an SRI generation method with mathematical analysis, two key-point sampling methods on the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69% and a rotation error of 0.0031°/m on the KITTI training dataset, with an execution time of 17 ms. The results demonstrate precision comparable to the state of the art at remarkably higher speed than conventional techniques.
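The spherical range image at the heart of this method maps each LiDAR point to an azimuth/elevation pixel and stores its range. A minimal sketch of the projection (the image resolution and vertical field of view below are illustrative, not the paper's settings):

```python
import numpy as np

def to_spherical_range_image(points, h=64, w=1024,
                             fov_up=np.deg2rad(3.0),
                             fov_down=np.deg2rad(-25.0)):
    """Project a LiDAR point cloud (N x 3) onto a 2-D spherical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)            # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)          # elevation
    # Normalise the angles to pixel coordinates
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    img = np.zeros((h, w))
    valid = (v >= 0) & (v < h)
    # Keep the nearest return when several points land in one pixel:
    # write farthest first so nearer ranges overwrite them
    order = np.argsort(-r[valid])
    img[v[valid][order], u[valid][order]] = r[valid][order]
    return img
```

Once the sparse cloud lives on this dense 2-D grid, image-style direct alignment becomes applicable, which is what enables the method's speed.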

Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model

  • Hussain, A.; Balaji Srikaanth, P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.959-979 / 2024
  • Rice pest identification is essential in modern agriculture for the health of rice crops. As global rice consumption rises, yields and quality must be maintained. Various methodologies have been employed to identify pests, encompassing sensor-based technologies, deep learning, and remote sensing models. Visual inspection by professionals and farmers remains essential, but integrating technologies such as satellites, IoT-based sensors, and drones enhances efficiency and accuracy. A computer vision system processes images to detect pests automatically, providing real-time data for proactive and targeted pest management. With this motivation, this research presents a novel farmland fertility algorithm with a deep learning-based automated rice pest detection and classification (FFADL-ARPDC) technique. The FFADL-ARPDC approach classifies rice pests from rice plant images. Before processing, FFADL-ARPDC removes noise and enhances contrast using bilateral filtering (BF). Rice crop images are then processed with the NASNetLarge deep learning architecture to extract image features. The FFA is used for hyperparameter tuning of NASNetLarge, which aids in enhancing classification performance. Using an Elman recurrent neural network (ERNN), the model categorises 14 types of pests. The FFADL-ARPDC approach was thoroughly evaluated on a benchmark dataset available in a public repository. With an accuracy of 97.58%, the FFADL-ARPDC model exceeds existing pest detection methods.
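The bilateral-filter preprocessing stage (BF) combines a spatial Gaussian with a range Gaussian, so noise is smoothed while pest and leaf edges are preserved. A minimal NumPy version of the idea (the window radius and sigmas are illustrative, not the paper's settings):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=25.0):
    """Edge-preserving smoothing of a 2-D grayscale image: each output
    pixel is a weighted mean of its window, with weights falling off
    both with spatial distance and with intensity difference."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalise pixels whose intensity differs
            rangew = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

In practice a library implementation (e.g. OpenCV's bilateral filter) would be used; the loop version above just makes the two kernels explicit.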

Pose Calibration of Inertial Measurement Units on Joint-Constrained Rigid Bodies (관절체에 고정된 관성 센서의 위치 및 자세 보정 기법)

  • Kim, Sinyoung; Kim, Hyejin; Lee, Sung-Hee
    • Journal of the Korea Computer Graphics Society / v.19 no.4 / pp.13-22 / 2013
  • Motion capture systems are widely used in the movie, computer game, and computer animation industries because they allow realistic human motions to be created efficiently. Inertial motion capture has several advantages over the more popular vision-based systems in terms of required space and cost, but it suffers from low accuracy due to the relatively high noise levels of inertial sensors. In particular, the accelerometer used to measure the gravity direction loses accuracy when the sensor moves with non-zero linear acceleration. In this paper, we propose a method to remove the linear acceleration component from the accelerometer data in order to improve the accuracy of the gravity-direction measurement. In addition, we develop a simple method to calibrate the joint axis of the link to which an inertial sensor is attached, as well as the position of the sensor with respect to the link. The calibration allows inertial sensors to be attached at an arbitrary position and orientation with respect to a link.
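The paper's own linear-acceleration removal is not reproduced here; a common complementary-filter approach to the same problem — trusting the gyro-propagated gravity direction and only slowly correcting toward the accelerometer — looks like this (the blend factor is an assumed tuning value):

```python
import numpy as np

def update_gravity(g_prev, gyro, accel, dt, alpha=0.02):
    """Track the gravity direction in the sensor frame: rotate the
    previous estimate by the gyro rate, then nudge it toward the
    accelerometer reading. Under strong linear acceleration the gyro
    prediction dominates, suppressing the non-gravity component."""
    # A world-fixed vector seen from a rotating body obeys g' = -w x g
    g_pred = g_prev - np.cross(gyro, g_prev) * dt
    a = accel / np.linalg.norm(accel)
    g = (1.0 - alpha) * g_pred + alpha * a
    return g / np.linalg.norm(g)
```

Called once per IMU sample, this yields a gravity estimate far less corrupted by motion than the raw accelerometer direction.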

Multi-sensor Intelligent Robot (멀티센서 스마트 로보트)

  • Jang, Jong-Hwan; Kim, Yong-Ho
    • The Journal of Natural Sciences / v.5 no.1 / pp.87-93 / 1992
  • A robotically assisted field material handling system designed for loading and unloading a planar pallet with a forklift in an unstructured field environment is presented. The system uses combined acoustic/visual sensing data to determine the position and orientation of the pallet and to locate its two slots, so that the forklift can move close to a slot and engage it for transport. To reduce the complexity of the material handling operation, we developed a method based on integrating 2-D range data from a Polaroid ultrasonic sensor with 2-D visual data from an optical camera. Data from the two sources complement each other and are used in an efficient algorithm to control the system. Range data obtained from two linear scannings determine the pan and tilt angles of the pallet using a least-mean-square method. The 2-D visual data then determine the swing angle and engagement location of the pallet using edge detection and Hough transform techniques. The limitations on the pan and tilt orientations that can be determined are discussed. The system was evaluated through hardware and software implementation, and the experimental results are presented.
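The least-mean-square step for recovering the pallet's pan and tilt from the range scans amounts to fitting a plane to the range points and reading the angles off its gradients. A sketch under assumed coordinate conventions (not necessarily the paper's):

```python
import numpy as np

def pallet_pan_tilt(points):
    """Fit a plane z = a*x + b*y + c to range points (N x 3) by least
    squares, then read the pallet face's pan and tilt from the
    gradients of the fitted plane."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    pan = np.arctan(a)    # rotation seen along the horizontal scan line
    tilt = np.arctan(b)   # rotation seen along the vertical scan line
    return pan, tilt
```

The swing angle, which a plane fit cannot see, is what the visual edge-detection and Hough-transform stage then supplies.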


Pose Estimation Method Using Sensor Fusion based on Extended Kalman Filter (센서 결합을 이용한 확장 칼만 필터 기반 자세 추정 방법)

  • Yun, Inyong; Shim, Jaeryong; Kim, Joongkyu
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.2 / pp.106-114 / 2017
  • In this paper, we propose the design of an extended Kalman filter that accurately estimates the pose of an object using sensor fusion. We use a quaternion as the state variable expressing the object's attitude, and the attitude of the rigid body is computed from the accelerometer and magnetometer by applying the Gauss-Newton method. We estimate state changes using measurements obtained from the gyroscope, the quaternion, and vision information from the ARVR_SDK. To increase estimation accuracy, we designed and implemented an extended Kalman filter, which showed excellent ability to adjust for and compensate sensor errors. As a result, we experimentally demonstrate that the reliability of the attitude estimate can be significantly increased.
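The prediction half of such a filter propagates the quaternion state with the gyroscope rate via q̇ = ½ q ⊗ (0, ω). A minimal sketch of that step (the EKF covariance update and the Gauss-Newton accelerometer/magnetometer correction described in the abstract are omitted):

```python
import numpy as np

def quat_predict(q, gyro, dt):
    """One prediction step: integrate the attitude quaternion
    q = (w, x, y, z) with the body angular rate gyro = (gx, gy, gz),
    using q_dot = 0.5 * q (x) [0, omega], then renormalise."""
    w, x, y, z = q
    gx, gy, gz = gyro
    dq = 0.5 * np.array([
        -x * gx - y * gy - z * gz,
         w * gx + y * gz - z * gy,
         w * gy - x * gz + z * gx,
         w * gz + x * gy - y * gx,
    ])
    q_new = q + dq * dt
    return q_new / np.linalg.norm(q_new)
```

In the full filter, this predicted quaternion is then corrected by the accelerometer/magnetometer attitude and the vision measurement through the Kalman gain.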

A Method for Eliminating Aiming Error of Unguided Anti-Tank Rocket Using Improved Target Tracking (향상된 표적 추적 기법을 이용한 무유도 대전차 로켓의 조준 오차 제거 방법)

  • Song, Jin-Mo; Kim, Tae-Wan; Park, Tai-Sun; Do, Joo-Cheol; Bae, Jong-sue
    • Journal of the Korea Institute of Military Science and Technology / v.21 no.1 / pp.47-60 / 2018
  • In this paper, we propose a method for eliminating the shooter's aiming error for an unguided anti-tank rocket using improved target tracking. Since predicted fire is necessary to hit moving targets with unguided rockets, methods have been proposed to estimate the position and velocity of the target using the fire control system (FCS). However, such methods have the problem that the hit rate may be lowered by the shooter's aiming error. To solve this problem, we use an image-based target tracking method to correct the error caused by the shooter. We also propose a robust tracking method based on TLD (Tracking-Learning-Detection) that considers the characteristics of FCS devices. To verify the performance of the proposed algorithm, we measured the target velocity using GPS and compared it with our estimate. The results prove that our method is robust to the shooter's aiming error.
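Predicted fire reduces to an intercept computation: assuming the target holds its estimated velocity, solve for the time of flight at which rocket and target meet. A schematic version (constant rocket speed and no ballistics — both simplifications the real FCS does not make):

```python
import numpy as np

def lead_point(p, v, s):
    """Aim point for predicted fire: target at relative position p (m)
    moving with velocity v (m/s); rocket flies at constant speed s (m/s).
    Solves |p + v*t| = s*t for the intercept time t, i.e. the quadratic
    (v.v - s^2) t^2 + 2 (p.v) t + p.p = 0."""
    a = v @ v - s * s
    b = 2.0 * (p @ v)
    c = p @ p
    if abs(a) < 1e-12:                 # target speed equals rocket speed
        t = -c / b
    else:
        t = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return t, p + v * t
```

The quality of this lead depends directly on the target velocity estimate, which is why the paper's improved TLD-based tracking feeds it.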

Development of a Backpack-Based Wearable Proximity Detection System

  • Shin, Hyungsub; Chang, Seokhee; Yu, Namgyenong; Jeong, Chaeeun; Xi, Wen; Bae, Jihyun
    • Fashion & Textile Research Journal / v.24 no.5 / pp.647-654 / 2022
  • Wearable devices come in a variety of shapes and sizes and are used in numerous fields. They can be integrated into clothing, gloves, hats, glasses, and bags and applied in healthcare, the medical field, and machine interfaces. These devices keep track of individuals' biological and behavioral data to support health communication and are often used for injury prevention. People with hearing loss or impaired vision find it harder to recognize an approaching person or object; sensing devices are particularly useful for such individuals because they help prevent injuries by alerting the wearer to people or objects in the immediate vicinity. Despite the obvious preventive benefits of developing Internet of Things-based devices for people with disabilities, development has been sluggish thus far. In particular, people with hearing impairment have a much higher probability of averting danger when they can notice it in advance, yet research and development remain severely underfunded. In this study, we incorporated a wearable detection system using an infrared proximity sensor into a backpack. The system helps users recognize when someone is approaching from behind through visual and tactile notifications, even if they have difficulty hearing or seeing their surroundings. This backpack could help prevent accidents for all users, particularly those with visual or hearing impairments.
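The notification logic such a backpack needs can be as simple as thresholding the IR proximity reading into escalating visual and tactile cues. The distances and cue names below are illustrative, not the study's actual firmware:

```python
def proximity_alert(distance_cm, warn_cm=150, alert_cm=60):
    """Map an infrared proximity reading (cm behind the wearer) to a
    notification level for the backpack's visual/tactile feedback."""
    if distance_cm <= alert_cm:
        return "vibrate+led"   # someone close behind: strong tactile cue
    if distance_cm <= warn_cm:
        return "led"           # approaching: gentle visual cue
    return "idle"
```

Using two thresholds rather than one avoids constant buzzing in crowds while still giving the wearer time to react.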