• Title/Abstract/Keywords: Fisheye Lens Cameras

어안렌즈 카메라로 획득한 영상에서 차량 인식을 위한 딥러닝 기반 객체 검출기 (Deep Learning based Object Detector for Vehicle Recognition on Images Acquired with Fisheye Lens Cameras)

  • 연승호;김재민
    • 한국멀티미디어학회논문지 / Vol. 22, No. 2 / pp.128-135 / 2019
  • This paper presents a deep learning-based object detection method for recognizing vehicles in images acquired by cameras installed on the ceiling of an underground parking lot. First, we present an image enhancement method that improves vehicle detection performance in dark lighting environments. Second, we present a new CNN-based multiscale classifier for detecting vehicles in images acquired by cameras with fisheye lenses. Experiments show that the presented vehicle detector outperforms conventional ones.
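The abstract does not specify the image enhancement method; as a purely hypothetical illustration, a simple gamma correction, one common way to brighten dark parking-lot images, might look like this:

```python
# Hypothetical sketch only: the paper does not name its enhancement method.
# Gamma correction with gamma < 1 brightens dark 8-bit pixel values.
def gamma_correct(pixels, gamma=0.5):
    """Apply out = 255 * (in / 255) ** gamma to a list of 8-bit values."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]
```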

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 21, No. 7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on fisheye images and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and slow to compute depth for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
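As a hedged sketch of how a laser-scanner point might be related to fisheye image coordinates, assuming the common equidistant projection r = f·θ (the paper does not state which fisheye model it uses, and the focal length and principal point below are made-up values):

```python
import math

# Illustrative only: equidistant fisheye projection of a 3-D point for a
# downward-facing camera. f, cx, cy are assumed, not the paper's calibration.
def project_equidistant(x, y, z, f=300.0, cx=640.0, cy=480.0):
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    r = f * theta                             # equidistant model: r = f * theta
    phi = math.atan2(y, x)                    # azimuth around the axis
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```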

고해상도 어안렌즈 영상에서 움직임기반의 표준 화각 ROI 검출기법 (Motion-based ROI Extraction with a Standard Angle-of-View from High Resolution Fisheye Image)

  • 류아침;한규필
    • 한국멀티미디어학회논문지 / Vol. 23, No. 3 / pp.395-401 / 2020
  • In this paper, a motion-based ROI extraction algorithm for high-resolution fisheye images is proposed for multi-view monitoring systems. Fisheye cameras have recently become widely used because of their wide angle of view, and they typically provide a lens correction function as well as various viewing modes. However, since the distortion-free angle of conventional algorithms is quite narrow due to the severe distortion, there are many unintended dead areas, and the algorithms require considerable computation time to find undistorted coordinates. The proposed algorithm therefore adopts image decimation and motion detection methods that can extract an undistorted ROI image with a standard angle of view for fast, intelligent surveillance systems. In addition, a mesh-type ROI is presented to reduce the lens correction time, so that independent ROIs can be processed in parallel to maximize processor utilization.
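A minimal sketch of the decimation-plus-motion-detection idea, with an assumed sampling factor and threshold (the paper's actual parameters and detector are not given):

```python
# Hedged sketch: sample every `factor`-th pixel (decimation) and flag motion
# where the absolute frame difference exceeds `thresh`. Values are assumptions.
def detect_motion(prev, curr, factor=2, thresh=10):
    """prev, curr: 2-D lists of grayscale values; returns (x, y) motion hits."""
    h, w = len(prev), len(prev[0])
    hits = []
    for y in range(0, h, factor):        # decimation over rows
        for x in range(0, w, factor):    # decimation over columns
            if abs(curr[y][x] - prev[y][x]) > thresh:
                hits.append((x, y))
    return hits
```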

어안렌즈 카메라를 이용한 터널 모자이크 영상 제작 (Tunnel Mosaic Images Using Fisheye Lens Camera)

  • 김기홍;송영선;김백석
    • 대한공간정보학회지 / Vol. 17, No. 1 / pp.105-111 / 2009
  • Recently, active research has sought to improve constructability and safety at construction sites by acquiring various kinds of information with state-of-the-art surveying technology. Digital images are not only easy to acquire but also yield diverse information, so their applications are expected to grow along with advances in image processing. In this study, to overcome the difficulty of photographing underground spaces such as tunnels with ordinary lenses, we proposed image acquisition using a fisheye lens and developed a program that maps the tunnel wall into a mosaic image. A mosaic image of the tunnel wall can be used at the tunnel site to detect joint planes and to detect and analyze defects in the concrete lining such as cracks, leakage, efflorescence, and spalling.

차량용 어안렌즈영상의 기하학적 왜곡 보정 (Geometric Correction of Vehicle Fish-eye Lens Images)

  • 김성희;조영주;손진우;이중렬;김명희
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp.601-605 / 2009
  • Fisheye lenses, which capture a field of view of 180° or more, are increasingly mounted on vehicles because they secure the widest viewing angle with the fewest cameras. To present realistic images to the driver and to use the camera as a sensor, geometric correction of the radial distortion through calibration is required. A vehicle fisheye lens, however, is a diagonal fisheye lens: it is equivalent to the rectangular region inscribed in the circular image produced by an ordinary circular fisheye lens, and its distortion is designed asymmetrically according to the vertical and horizontal angles of view. This paper introduces a camera model and calibration method suited to vehicle fisheye lenses, using feature points of the image. Calibration results show that the proposed method is also applicable to vehicle fisheye lenses with different angles of view.

지능형 주차 관제를 위한 실내주차장에서 실시간 차량 추적 및 영역 검출 (Realtime Vehicle Tracking and Region Detection in Indoor Parking Lot for Intelligent Parking Control)

  • 연승호;김재민
    • 한국멀티미디어학회논문지 / Vol. 19, No. 2 / pp.418-427 / 2016
  • Smart parking management requires tracking a vehicle in an indoor parking lot and detecting the place where it is parked. An advanced parking system monitors the entire parking lot with CCTV cameras, which can be used for vehicle tracking and detection. To cover a wide area with a single camera, a fisheye lens is used; as a result, the shape and size of a moving vehicle vary greatly with its distance and angle from the camera, which makes vehicle detection and tracking difficult. The vehicle headlights make detection and tracking difficult as well. This paper describes a method of real-time vehicle detection and tracking that is robust to these harsh conditions. In each image frame, we update the region of a vehicle and estimate its movement. First, we approximate the shape of a car with a quadrangle and estimate the four sides of the car using multiple histograms of oriented gradients. Second, we create a template by applying a distance transform to the car region and estimate the motion of the car with a template matching method.
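The distance-transform step can be sketched with a two-pass chamfer approximation, a standard technique; the paper's exact transform and distance metric are assumptions here:

```python
# Illustrative sketch: two-pass chamfer distance transform that replaces each
# background pixel with its city-block distance to the nearest vehicle pixel.
def distance_transform(mask):
    """mask: 2-D list, 1 = vehicle pixel, 0 = background."""
    h, w = len(mask), len(mask[0])
    INF = h + w  # upper bound on any city-block distance in the grid
    d = [[0 if mask[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass (top-left origin)
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass (bottom-right)
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Template matching would then compare this distance map against candidate placements of the car template.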

로봇 응용을 위한 협력 및 결합 비전 시스템 (Mixing Collaborative and Hybrid Vision Devices for Robotic Applications)

  • 바쟝 정샬;김성흠;최동걸;이준영;권인소
    • 로봇학회논문지 / Vol. 6, No. 3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of the new systems. The first is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera with a pan/tilt stereo-perspective camera and apply it in a real environment. Compared to previous systems, the two proposed systems offer two benefits: they maintain both high speed and high resolution with collaboratively moving cameras, and they cover an enormous search space with the hybrid configuration. Experimental results show the effectiveness of the mixed collaborative and hybrid systems.

Comparison the Mapping Accuracy of Construction Sites Using UAVs with Low-Cost Cameras

  • Jeong, Hohyun;Ahn, Hoyong;Shin, Dongyoon;Choi, Chuluong
    • 대한원격탐사학회지 / Vol. 35, No. 1 / pp.1-13 / 2019
  • The advent of a fourth industrial revolution, built on advances in digital technology, has coincided with studies using various unmanned aerial vehicles (UAVs) being performed worldwide. However, the accuracy of different sensors and their suitability for particular studies are factors that need to be carefully evaluated. In this study, we evaluated UAV photogrammetry using smart technology. To assess the performance of digital photogrammetry, the accuracy of common procedures for generating orthomosaic images and digital surface models (DSMs) was measured against terrestrial laser scanning (TLS) data. Two types of non-surveying cameras (a smartphone camera and a fisheye camera) were attached to the UAV platform. For the fisheye camera, lens distortion was corrected by considering the characteristics of the lens. The accuracy of the generated orthoimages and DSMs was comparatively analyzed using aerial and TLS data. The comparison proceeded as follows: first, orthomosaic images were used to compare check points over a given area; second, the vertical errors of each camera's DSM were compared and analyzed against the TLS data. In this study, we propose and evaluate the feasibility of UAV photogrammetry, which can acquire 3-D spatial information at low cost on a construction site.
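The vertical-error comparison against TLS can be sketched as a simple RMSE over check points; any heights used with it are illustrative, not the study's data:

```python
import math

# Minimal sketch of the accuracy metric: vertical RMSE between DSM heights
# and TLS reference heights at matched check points.
def vertical_rmse(dsm_heights, tls_heights):
    errs = [d - t for d, t in zip(dsm_heights, tls_heights)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```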

비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법 (Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens)

  • 조영주;김성희;박지영;손진우;이중렬;김명희
    • 한국시뮬레이션학회논문지 / Vol. 19, No. 1 / pp.23-31 / 2010
  • Fisheye lenses, which capture a field of view of 180° or more, are increasingly mounted on vehicles because they secure the widest viewing angle with the fewest cameras. To secure visibility through a fisheye lens and use it as an image sensor, calibration must be performed first, and to present realistic images to the driver, geometric correction of the radial distortion is required. This paper proposes a distortion-correction method that minimizes image data loss for vehicle diagonal fisheye lenses with asymmetric distortion and a field of view over 180°. Correction proceeds by defining a camera model that includes a distortion model, estimating the camera parameters through calibration, and then generating a distortion-corrected view. As the distortion model, the FOV (Field of View) model, which mimics the nonlinear distortion shape, is used. Because an asymmetric-distortion lens is designed with a horizontal angle of view larger than the vertical one to emphasize the driver's lateral visibility, the ratio of the image's major and minor axes is first equalized, and the camera parameters are then estimated with a nonlinear optimization algorithm. Finally, when generating the corrected view, backward mapping is used and the degree of correction in the horizontal and vertical directions can be controlled, which minimizes the image loss that occurs when an image with a field of view over 180° is corrected onto a 2-D plane with a pinhole camera model, and improves visual recognizability.
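The FOV model referenced in the abstract, as commonly formulated (Devernay and Faugeras), maps an undistorted radius r_u to a distorted radius r_d via r_d = (1/ω)·arctan(2·r_u·tan(ω/2)); a minimal sketch, with ω treated as an assumed lens parameter:

```python
import math

# Sketch of the FOV distortion model; omega is the lens field-of-view
# parameter and must come from calibration (values here are assumptions).
def distort(r_u, omega):
    """Undistorted radius -> distorted radius under the FOV model."""
    return math.atan(2.0 * r_u * math.tan(omega / 2.0)) / omega

def undistort(r_d, omega):
    """Inverse mapping, as used in backward mapping of the corrected view."""
    return math.tan(r_d * omega) / (2.0 * math.tan(omega / 2.0))
```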

어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피 (Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image)

  • 최윤원;최정원;임성규;이석규
    • 제어로봇시스템학회논문지 / Vol. 22, No. 3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which can be detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robot. Conventional methods generate avoidance paths by constructing an artificial force field around obstacles found in the complete map obtained through SLAM; robots can also avoid obstacles by using speed commands based on the robot model and its curved movement path. Recent research has improved these approaches by optimizing the algorithms for actual robots; however, robots that use omni-directional vision SLAM to acquire surrounding information at once have been comparatively less studied. A robot with the proposed algorithm avoids obstacles according to the avoidance path estimated from the map obtained through omni-directional vision SLAM using fisheye images, and then returns to the original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between the position estimated by the proposed algorithm and the real position recorded while avoiding obstacles.
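The Lucas-Kanade step used for obstacle detection solves a 2×2 linear system per window; a self-contained sketch on synthetic images (illustrative only, not the paper's implementation or parameters):

```python
# Hedged sketch of single-window Lucas-Kanade: accumulate the normal
# equations from image gradients and solve A [u v]^T = -b for the flow.
def lucas_kanade_window(I1, I2):
    """I1, I2: 2-D lists (consecutive frames); returns one flow vector (u, v)."""
    h, w = len(I1), len(I1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0  # spatial gradient in x
            iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0  # spatial gradient in y
            it = I2[y][x] - I1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy                       # assumes a textured window
    u = (-syy * sxt + sxy * syt) / det
    v = ( sxy * sxt - sxx * syt) / det
    return u, v
```

For a window with enough texture (non-singular gradient matrix), the recovered (u, v) approximates the sub-pixel shift between the two frames.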