• Title/Summary/Keyword: Camera localization


Quickly Map Renewal through IPM-based Image Matching with High-Definition Map (IPM 기반 정밀도로지도 매칭을 통한 지도 신속 갱신 방법)

  • Kim, Duk-Jung;Lee, Won-Jong;Kim, Gi-Chang;Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1163-1175 / 2021
  • In autonomous driving, road markings are essential for object tracking and path planning, and they provide important information for localization. This paper presents an approach to measuring and updating road surface markings in a high-definition (HD) map by matching camera imagery to the map using inverse perspective mapping (IPM). IPM removes perspective effects from the vehicle's front camera image and remaps it to a 2D bird's-eye view that can be aligned with HD map regions. Markings such as stop lines, crosswalks, dotted lines, solid lines, letters, and arrows are then recognized and compared to objects on the HD map to determine whether an update is needed. The position of a newly installed object can be obtained by referring to the measured positions of surrounding objects on the HD map. As a result, high-accuracy map updates are obtained at very low computational cost, using only a low-cost camera and GNSS/INS sensors.
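
The abstract does not publish the calibration it used; the following is a minimal sketch of the IPM step in OpenCV, where the four image-to-ground correspondences are hypothetical placeholders for values that would come from the camera's calibration.

```python
import cv2
import numpy as np

# Hypothetical correspondences between four road-plane points in the front
# camera image (pixels) and their positions in the bird's-eye-view domain;
# real values come from the camera calibration, which the paper does not list.
src = np.float32([[520, 460], [760, 460], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [500, 0], [500, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography

def to_birds_eye(frame):
    """Warp a front-camera frame into a top-down view for HD-map matching."""
    return cv2.warpPerspective(frame, H, (800, 600))  # (width, height)
```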

The navigation method of mobile robot using a omni-directional position detection system (전방향 위치검출 시스템을 이용한 이동로봇의 주행방법)

  • Ryu, Ji-Hyoung;Kim, Jee-Hong;Lee, Chang-Goo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.2 / pp.237-242 / 2009
  • Compared with fixed robots, mobile robots have the advantage of an extended workspace, but exploiting it requires sensors that detect the robot's position and locate its goal point. This article describes a navigation method for a mobile robot using an omni-directional position detection system, which supplies coarse position data to the processor with simple hardware: when the user designates a goal point, the system corrects the robot's error by comparing its heading angle and position against the goal. For this, the system uses a conic mirror and a single camera, which reduces the image processing time needed to locate the target during user-commanded navigation.
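
The abstract does not state the control law used for the correction step; a minimal sketch of one plausible goal-seeking correction, assuming a simple proportional controller on the heading error (the gain and function names are illustrative):

```python
import math

def steering_correction(robot_xy, robot_heading, goal_xy, k_p=0.8):
    """Proportional heading correction toward a user-designated goal.

    robot_xy and robot_heading would come from the conic-mirror camera
    system; the gain k_p and the control law itself are assumptions,
    since the abstract does not specify them.
    """
    bearing = math.atan2(goal_xy[1] - robot_xy[1], goal_xy[0] - robot_xy[0])
    # Wrap the heading error into [-pi, pi] before applying the gain.
    error = math.atan2(math.sin(bearing - robot_heading),
                       math.cos(bearing - robot_heading))
    return k_p * error  # angular velocity command
```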

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo;Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Estimating the pose of the car is necessary for navigating safely within the given road, and a homography is used to obtain it. The color image is converted to grayscale, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates using the homography to find the angle and position of the car, and color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
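
As a rough illustration of the homography step, the sketch below maps two hypothetical control points to world coordinates and derives the car's angle; the homography values are placeholders, not the paper's calibration.

```python
import numpy as np

# Hypothetical calibrated image-to-world homography; the paper does not
# publish its values.
H = np.array([[0.010, 0.000,  -3.20],
              [0.000, 0.012,  -4.80],
              [0.000, 0.0005,  1.00]])

def image_to_world(u, v):
    """Project an image point (u, v) onto the road plane via the homography."""
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

# Two control points found by thresholding and edge detection (pixel coords).
p_near, p_far = image_to_world(320, 400), image_to_world(330, 300)
heading_deg = np.degrees(np.arctan2(p_far[1] - p_near[1],
                                    p_far[0] - p_near[0]))
position = (p_near + p_far) / 2  # rough position estimate of the car
```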

Nonlinear model for estimating depth map of haze removal (안개제거의 깊이 맵 추정을 위한 비선형 모델)

  • Lee, Seungmin;Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE / v.24 no.2 / pp.492-496 / 2020
  • Visibility deteriorates in hazy weather, making it difficult to accurately recognize information captured by a camera. Haze removal is therefore being actively researched so that camera-based applications such as object localization/detection and lane recognition can operate normally even in hazy weather. In this paper, we propose a nonlinear model for depth map estimation, based on an extensive analysis showing that the difference between brightness and saturation in a hazy image increases nonlinearly with scene depth. Quantitative evaluation (MSE, SSIM, TMQI) shows that the proposed haze removal method based on the nonlinear model is superior to other state-of-the-art methods.
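
The abstract gives neither the exact nonlinear form nor its coefficients; the sketch below is one illustrative possibility, using a power law on the brightness-saturation gap in HSV space.

```python
import cv2
import numpy as np

def estimate_depth(bgr, a=0.12, b=0.95, gamma=0.8):
    """Depth proxy from the brightness-saturation gap of a hazy image.

    The paper models depth as a nonlinear function of this gap; its exact
    functional form and coefficients are not given in the abstract, so the
    power law below (a + b * gap**gamma) is purely illustrative.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    v, s = hsv[..., 2], hsv[..., 1]   # brightness (V) and saturation (S)
    gap = np.clip(v - s, 0.0, 1.0)
    return a + b * gap ** gamma       # larger gap -> greater estimated depth
```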

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
  • Photogrammetry and computer vision both determine three-dimensional coordinates from images taken with a camera, but the two fields are not directly compatible due to differences in camera lens distortion models and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer vision-based software, and plotting is then performed with photogrammetry-based software for mapping. In this case, one faces the problem of converting the camera lens distortion model into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion formulas, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients; the root mean square error of the y-parallax was within 0.3 pixels.
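
A toy round trip illustrating the verification procedure described above, restricted to the two radial terms (the paper's full models also include tangential/decentering terms, omitted here): distort synthetic points with the computer-vision model, then fit photogrammetry-style correction coefficients by linear least squares.

```python
import numpy as np

def distort_cv(xy, k1, k2):
    """Computer-vision (OpenCV-style) radial model: maps ideal normalized
    coordinates to distorted ones, x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def fit_photogrammetric(distorted, ideal):
    """Photogrammetry-style correction x = x_d + x_d*(K1*r^2 + K2*r^4):
    fit K1, K2 by least squares so that correcting the distorted
    coordinates recovers the ideal ones."""
    r2 = np.sum(distorted ** 2, axis=1, keepdims=True)
    A = np.hstack([(distorted * r2).reshape(-1, 1),
                   (distorted * r2 ** 2).reshape(-1, 1)])
    b = (ideal - distorted).reshape(-1, 1)
    K, *_ = np.linalg.lstsq(A, b, rcond=None)
    return K.ravel()  # K1, K2

# Round trip with synthetic points and illustrative coefficients.
pts = np.random.default_rng(0).uniform(-0.5, 0.5, (200, 2))
K1, K2 = fit_photogrammetric(distort_cv(pts, -0.2, 0.05), pts)
```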

Tracking of Walking Human Based on Position Uncertainty of Dynamic Vision Sensor of Quadcopter UAV (UAV기반 동적영상센서의 위치불확실성을 통한 보행자 추정)

  • Lee, Junghyun;Jin, Taeseok
    • Journal of Institute of Control, Robotics and Systems / v.22 no.1 / pp.24-30 / 2016
  • The accuracy of small, low-cost CCD cameras is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a quadrotor UAV can hover over a tracked human target using data from a CCD camera rather than imprecise GPS data. To realize this, quadcopter UAVs need to recognize their position and posture in both known and unknown environments, and estimating position under uncertainty is one of the most important problems for stable hovering. In this paper, we describe a method for determining the altitude of a quadcopter UAV from image information of a moving object such as a walking human. The method combines the position observed by GPS with the position estimated from images captured by a fixed camera to localize the UAV. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image coordinates of the moving object to the estimated altitude of the UAV. Since the equations are based on a geometric constraint, measurement error is always present; the proposed method therefore uses the error between the observed and estimated image coordinates to localize the UAV, applying a Kalman filter. Its performance is verified by computer simulation and experiments.
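
A minimal sketch of the Kalman filtering step, assuming a constant-velocity altitude state and collapsing the paper's geometric measurement model into a direct altitude observation; all noise values and the frame interval are illustrative.

```python
import numpy as np

dt = 0.05                                  # hypothetical frame interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity altitude model
H = np.array([[1.0, 0.0]])                 # observe altitude only
Q = np.diag([1e-4, 1e-3])                  # process noise (assumed)
R = np.array([[0.25]])                     # measurement noise (m^2, assumed)

def kalman_step(x, P, z_meas):
    """One predict/update cycle; z_meas stands in for the altitude inferred
    from the walking human's image coordinates via the geometric constraints."""
    x, P = F @ x, F @ P @ F.T + Q          # predict
    y = z_meas - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([[2.0], [0.0]]), np.eye(2)  # start at 2 m altitude, at rest
x, P = kalman_step(x, P, 2.3)               # fuse one image-derived reading
```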

Development of a real-time crop recognition system using a stereo camera

  • Baek, Seung-Min;Kim, Wan-Soo;Kim, Yong-Joo;Chung, Sun-Ok;Nam, Kyu-Chul;Lee, Dae Hyun
    • Korean Journal of Agricultural Science / v.47 no.2 / pp.315-326 / 2020
  • In this study, a real-time crop recognition system was developed for unmanned farm machinery in upland farming. The system is based on a stereo camera, and an image processing framework was proposed consisting of disparity matching, localization of the crop area, and estimation of crop height with coordinate transformations. Performance was evaluated by attaching the system to a tractor for five representative crops (cabbage, potato, sesame, radish, and soybean), at three distances to the crop (100, 150, and 200 cm) and five camera heights (42, 44, 46, 48, and 50 cm). The mean relative error (MRE) was used to compare measured and estimated heights. The MRE was lowest for Chinese cabbage at 1.70% and highest for soybean at 4.97%; crops with a more uniform height distribution are considered to yield a lower MRE. The heights of all crops were estimated with less than 5% MRE. The developed crop recognition system can be applied to various agricultural machinery, enhancing the accuracy of crop detection and its performance under various illumination conditions.
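
For reference, the standard stereo depth relation and the MRE metric used in the evaluation can be sketched as follows; the focal length and baseline are placeholders, since the camera's parameters are not given in the abstract.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Standard stereo relation depth = f * B / d (d in pixels).
    focal_px and baseline_m are hypothetical placeholders."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)  # invalid -> NaN
    return focal_px * baseline_m / d

def mean_relative_error(measured, estimated):
    """MRE (%) between measured and estimated crop heights."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - measured) / measured)
```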

Object Detection of AGV in Manufacturing Plants using Deep Learning (딥러닝 기반 제조 공장 내 AGV 객체 인식에 대한 연구)

  • Lee, Gil-Won;Lee, Hwally;Cheong, Hee-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.36-43 / 2021
  • In this research, the accuracy of the YOLO v3 algorithm for object detection during AGV (Automated Guided Vehicle) operation was investigated. First, an AGV equipped with a 2D LiDAR and a stereo camera was prepared. The AGV was driven along a route mapped with SLAM (Simultaneous Localization and Mapping) using the 2D LiDAR, while objects in front were detected through the stereo camera. To evaluate the accuracy of the YOLO v3 algorithm, its recall, AP (Average Precision), and mAP (mean Average Precision) were measured as training progressed. Experimental results show that mAP, precision, and recall improve by 10%, 6.8%, and 16.4%, respectively, when YOLO v3, fitted with 4,000 training images and 500 test images collected through online search, is additionally trained with 1,200 images collected from the stereo camera on the AGV.
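
For reference, AP and mAP can be computed from a detector's precision-recall curve as below; the all-point interpolation is an assumption, since the abstract does not state which convention was used.

```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under the precision-recall curve after enforcing a monotonically
    decreasing precision envelope (the all-point interpolation used by most
    modern detection benchmarks)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # monotone envelope
    idx = np.where(r[1:] != r[:-1])[0]         # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(ap_per_class):
    """mAP is the mean of per-class AP values, e.g. {'pallet': 0.91, ...}."""
    return float(np.mean(list(ap_per_class.values())))
```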

Development of simultaneous multi-channel data acquisition system for large-area Compton camera (LACC)

  • Junyoung Lee;Youngmo Ku;Sehoon Choi;Goeun Lee;Taehyeon Eom;Hyun Su Lee;Jae Hyeon Kim;Chan Hyeong Kim
    • Nuclear Engineering and Technology / v.55 no.10 / pp.3822-3830 / 2023
  • The large-area Compton camera (LACC), featuring significantly high detection sensitivity, was developed for high-speed localization of gamma-ray sources. Due to the high gamma-ray interaction event rate induced by this sensitivity, however, the multiplexer-based data acquisition system (DAQ) saturated rapidly, and energy and imaging resolution deteriorated at event rates above 4.7 × 10³ s⁻¹. In the present study, a new simultaneous multi-channel DAQ was developed to improve the energy and imaging resolution of the LACC even under high event rate conditions (10⁴-10⁶ s⁻¹). The performance of the DAQ was evaluated with several point sources under different event rate conditions. The results indicate that the new DAQ offers significantly better performance than the existing DAQ over the entire energy and event rate ranges. In particular, the new DAQ showed high energy resolution under very high event rates: 6.9% and 8.6% (at 662 keV) at 1.3 × 10⁵ and 1.2 × 10⁶ s⁻¹, respectively. Furthermore, the new DAQ successfully acquired Compton images at those event rates, with imaging resolutions of 13.8° and 19.3° at 8.7 × 10⁴ and 10⁶ s⁻¹, corresponding to 1.8 and 73 μSv/hr, or about 18 and 730 times the background level, respectively.

Single-Port Thoracic Surgery: A New Direction

  • Ng, Calvin S.H.
    • Journal of Chest Surgery / v.47 no.4 / pp.327-332 / 2014
  • Single-port video-assisted thoracic surgery (VATS) has slowly established itself as an alternate surgical approach for the treatment of an increasingly wide range of thoracic conditions. The potential benefits of fewer surgical incisions, better cosmesis, and less postoperative pain and paraesthesia have led to the technique's popularity worldwide. The limited single small incision through which the surgeon has to operate poses challenges that are slowly being addressed by improvements in instrument design. Of note, instruments and video-camera systems that are narrower and angulated have made single-port VATS major lung resection easier to perform and learn. In the future, we may see the development of subcostal or embryonic natural orifice translumenal endoscopic surgery access, evolution in anaesthesia strategies, and cross-discipline imaging-assisted lesion localization for single-port VATS procedures.