• Title/Summary/Keyword: omni-directional image


An Object Tracking System Using an Omni-Directional Camera (전방위 카메라를 이용한 객체 추적 시스템)

  • Kim, Jin-Hwan;Ahn, Jae-Kyun;Kim, Chang-Su
    • Proceedings of the IEEK Conference / 2008.06a / pp.781-782 / 2008
  • An object tracking system using an omni-directional camera is proposed in this work. First, we construct a mapping table that describes the relationship between image coordinates and omni-directional angles. Then, we develop a surveillance system to detect unexpected objects automatically in omni-directional images. Finally, we generate perspective views of the detected objects using the mapping table. Simulation results demonstrate that the proposed algorithm provides efficient performance.

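
The mapping-table idea above can be sketched as follows: each pixel of the circular omni-directional image is assigned an (azimuth, elevation) pair once, so later lookups are just array indexing. The equidistant fisheye model and the field of view used here are assumptions for illustration, not details from the paper.

```python
import numpy as np

def build_mapping_table(width, height, fov=np.pi):
    """Precompute (azimuth, elevation) for every pixel of a circular
    omni-directional image, assuming an equidistant fisheye model."""
    cx, cy = width / 2.0, height / 2.0
    max_r = min(cx, cy)                  # radius of the circular image
    ys, xs = np.mgrid[0:height, 0:width]
    dx, dy = xs - cx, cy - ys            # image y axis grows downward
    r = np.hypot(dx, dy)
    azimuth = np.arctan2(dy, dx)         # angle around the optical axis
    # Center of the image -> zenith (fov/2), rim of the circle -> horizon (0)
    elevation = (1.0 - np.clip(r / max_r, 0.0, 1.0)) * (fov / 2.0)
    valid = r <= max_r                   # mask of pixels inside the circle
    return azimuth, elevation, valid
```

Perspective views of a detected object can then be rendered by looking up the angles of its pixels instead of recomputing the trigonometry per frame.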

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and slow to compute depth information for omni-directional images. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculate fusion points from the planar coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which captures the surrounding view in a single shot. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
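
One way such a fusion point might be computed, under a deliberately simplified geometry (co-located sensors and a flat floor, which is an assumption of this sketch rather than the paper's calibration): the laser fixes the obstacle's position in the floor plane, and the elevation of its top edge in the fisheye image fixes its height.

```python
import math

def fuse_point(laser_range, laser_bearing, top_elevation):
    """Fuse one 2D laser return with the fisheye outline: the laser gives
    the obstacle's position in the floor plane; the elevation angle of its
    top edge in the image gives its height above that plane."""
    x = laser_range * math.cos(laser_bearing)   # floor-plane position
    y = laser_range * math.sin(laser_bearing)
    z = laser_range * math.tan(top_elevation)   # height of the top edge
    return x, y, z
```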

Blind Digital Watermarking Methods for Omni-directional Panorama Images using Feature Points (특징점을 이용한 전방위 파노라마 영상의 블라인드 디지털 워터마킹 방법)

  • Kang, I-Seul;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.22 no.6 / pp.785-799 / 2017
  • Omni-directional panorama images, one of the most widely used image media in recent years, are attracting much attention. Since these images carry very high added value, their intellectual property must be protected. In this paper, we propose a blind digital watermarking method for such images. We assume that the owners of the original images may differ, insert different watermark data into each original image, and extract the watermark from the projected image, which is the form in which an omni-directional panorama image is served. Therefore, the main target attack in this paper is the image distortion that occurs in the omni-directional panorama generation process. The method uses SIFT feature points in non-stitched areas and inserts watermark data around each feature point. We propose two variants, using two-dimensional DWT coefficients and spatial-domain data, respectively, as the carriers for the watermark; both insert the watermark data by the QIM method. Experiments show that both methods are robust against the distortion generated in the panorama creation process and, additionally, sufficiently robust against JPEG compression attacks.
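
The QIM embedding both variants share is simple to sketch: a coefficient (a DWT coefficient or a spatial sample) is quantized onto one of two interleaved lattices chosen by the watermark bit, and extraction picks the nearer lattice. The step size and values below are illustrative, not the paper's.

```python
import numpy as np

def qim_embed(coeff, bit, step):
    """Quantize the coefficient onto one of two interleaved lattices,
    offset by +step/4 for bit 1 and -step/4 for bit 0."""
    offset = step / 4.0 if bit else -step / 4.0
    return np.round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Decide the embedded bit by which lattice the coefficient is nearer."""
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return int(d1 < d0)
```

A larger step survives stronger distortion (stitching, JPEG) at the cost of a more visible embedding, which is the usual QIM trade-off.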

Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera (회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱)

  • Oh, So-Jung;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.211-221 / 2012
  • To utilize omni-directional images acquired by a rotating line camera for indoor spatial information services, the images must be precisely registered with respect to an indoor coordinate system. In this study, we therefore develop a georeferencing method to estimate the exterior orientation parameters of an omni-directional image, that is, the position and attitude of the camera at acquisition time. First, we derive the collinearity equations for the omni-directional image by geometrically modeling the rotating line camera. We then estimate the exterior orientation parameters using the collinearity equations with indoor control points. Experimental results on real data indicate that the exterior orientation parameters are estimated with precisions of 1.4 mm in position and 0.05° in attitude. The residuals are within 3 pixels horizontally and 10 pixels vertically. In particular, the residuals in the vertical direction retain systematic errors, mainly due to lens distortion, which should be eliminated through a camera calibration process. Using omni-directional images georeferenced precisely with the proposed method, we can generate high-resolution indoor 3D models and build sophisticated augmented reality services on top of them.
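
An idealized version of the rotating-line-camera projection can be written down directly: the column encodes azimuth around the camera and the row encodes elevation. The paper's actual collinearity equations also model the camera attitude and interior orientation; this sketch assumes a level camera and a hypothetical vertical field of view.

```python
import math

def project_omni(point, cam_pos, width, height, v_fov=math.pi / 2):
    """Project a 3D indoor point into an idealized rotating-line panorama:
    column <-> azimuth, row <-> elevation, for a level camera at cam_pos."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    azimuth = math.atan2(dy, dx) % (2 * math.pi)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    col = azimuth / (2 * math.pi) * width        # a full turn spans the width
    row = (0.5 - elevation / v_fov) * height     # horizon maps to mid-height
    return col, row
```

Georeferencing inverts this relation: given control points with known indoor coordinates and their measured (col, row) positions, the camera position (and, in the paper, attitude) is solved by least squares.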

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through a fisheye lens mounted on a robot. Omni-directional image sensors suffer from distortion because they use a fisheye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing. The core of the proposed algorithm is as follows. First, we capture instantaneous 360° panoramic images around the robot through a fisheye lens mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the mapping algorithm is confirmed by comparing maps obtained with the proposed algorithm against real maps.
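
The basic Lucas-Kanade least-squares step behind the motion vectors of step three looks like this in isolation, on plain grayscale arrays rather than fisheye frames:

```python
import numpy as np

def lucas_kanade_step(prev, curr, x, y, win=2):
    """Estimate the motion (u, v) of a small window centered at (x, y)
    between two grayscale frames via the Lucas-Kanade least-squares step."""
    Ix = np.gradient(prev, axis=1)        # spatial gradients of the first frame
    Iy = np.gradient(prev, axis=0)
    It = curr - prev                      # temporal derivative
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A @ (u, v) = b
    return flow
```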

Control of an Omni-directional Mobile Robot Based on Camera Image (카메라 영상기반 전방향 이동 로봇의 제어)

  • Kim, Bong Kyu;Ryoo, Jung Rae
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.84-89 / 2014
  • In this paper, an image-based visual servo control strategy for tracking a target object is applied to a camera-mounted omni-directional mobile robot. To obtain the target angular velocity of each wheel from the image coordinates of the target object, a mathematical image Jacobian matrix is generally built from a camera model and the mobile robot kinematics. Unlike the well-known image Jacobian approach, a simple rule-based control strategy is proposed to generate target angular velocities of the wheels from the size of the target object captured in the camera image. The camera image is divided into several regions, and a pre-defined rule corresponding to the region in which the target is located is applied to generate the target angular velocities of the wheels. The proposed algorithm is easy to implement in that no mathematical description of the image Jacobian is required and a small number of rules suffice for target tracking. Experimental results are presented with a description of the overall experimental system.
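
A minimal version of such a rule table might look like the following; the region boundaries, speeds, and the size threshold for "close enough" are invented values for illustration, not the paper's:

```python
def rule_based_command(cx, size, width=320, near_size=5000):
    """Map the target's horizontal image position and apparent size to a
    (forward, turn) command via simple rules instead of an image Jacobian."""
    if cx < width / 3:
        turn = 0.5          # target in left region -> rotate left
    elif cx > 2 * width / 3:
        turn = -0.5         # target in right region -> rotate right
    else:
        turn = 0.0          # target centered -> no rotation
    forward = 0.0 if size >= near_size else 0.3  # stop when close enough
    return forward, turn
```

The appeal is exactly what the abstract claims: no camera calibration or Jacobian derivation, just a handful of region-to-command rules.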

Active omni-directional range sensor for mobile robot navigation (이동 로봇의 자율주행을 위한 전방향 능동거리 센서)

  • 정인수;조형석
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1996.10b / pp.824-827 / 1996
  • Most autonomous mobile robots sense only what lies in front of them. As a result, they may collide with objects approaching from the side or behind. To overcome this problem, we have built an active omni-directional range sensor that obtains omni-directional depth data using a laser conic plane and a conic mirror. During navigation, the proposed sensor system forms a laser conic plane by rotating the laser point source at high speed and acquires a two-dimensional depth map in real time from a single image capture. Experimental results show that the proposed sensor system has strong potential for mobile robot navigation in uncertain environments.

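
The triangulation behind such a sensor can be sketched for a simplified geometry in which the camera looks straight down the cone axis: the radius at which the laser circle appears in the image determines the viewing angle, and the known height of the laser plane below the camera gives the range. The actual sensor's conic-mirror geometry differs; this is only the underlying principle.

```python
import math

def range_from_image_radius(pixel_r, focal_px, laser_height):
    """Triangulate range from the image radius of the laser circle:
    pixel_r is the stripe's radius in pixels, focal_px the focal length
    in pixels, laser_height the laser plane's offset from the camera."""
    theta = math.atan2(pixel_r, focal_px)   # viewing angle off the axis
    return laser_height * math.tan(theta)   # horizontal range to the stripe
```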

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong;Ock, Seung-Ho;Kim, San;Kim, Dong-Hwan
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.2019-2023 / 2005
  • In this paper, an autonomous omni-directional cleaning robot that recognizes obstacles and a battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information together with appropriate algorithms. Three omni-directional wheels allow the robot to move in any direction, enabling faster maneuvering than a simple track-type robot. The robot transfers commands and image data through Bluetooth wireless modules so that it can be operated remotely. The robot vision combined with sensor data lets the robot behave autonomously. Autonomous search for the battery charger is implemented using map building based on camera and sensor information, which overcomes the error caused by wheel slip.

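
The inverse kinematics that lets a three-omni-wheel base move in any direction is standard and easy to state for a symmetric 120° layout (the paper's exact wheel geometry and radius may differ; the values here are illustrative):

```python
import math

def omni_wheel_speeds(vx, vy, omega, radius=0.15):
    """Map the desired body velocity (vx, vy) and yaw rate omega to the
    tangential speed of each wheel, for three omni wheels mounted every
    120 degrees at distance `radius` from the robot center."""
    angles = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)   # wheel directions
    return [-math.sin(a) * vx + math.cos(a) * vy + radius * omega
            for a in angles]
```

Pure rotation drives all wheels equally, while for pure translation the three speeds sum to zero; this decoupling is what makes holonomic motion possible with the layout.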

Multi-views face detection in Omni-directional camera for non-intrusive iris recognition (비강압적 홍채 인식을 위한 전 방향 카메라에서의 다각도 얼굴 검출)

  • 이현수;배광혁;김재희;박강령
    • Proceedings of the IEEK Conference / 2003.11b / pp.115-118 / 2003
  • This paper describes a system for detecting multi-view faces and estimating their poses in an omni-directional camera environment for non-intrusive iris recognition. The system has two parts. First, the moving region is identified using difference-image information, and this region is analyzed with face-color information to find the face candidate region. Second, PCA (Principal Component Analysis) is applied to detect multi-view faces and estimate face pose.

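
The first stage, difference-image motion detection, reduces to thresholding the absolute frame difference; the threshold below is an arbitrary illustrative value:

```python
import numpy as np

def moving_region_mask(prev, curr, thresh=25):
    """Binary mask of moving pixels: threshold the absolute difference of
    two grayscale frames (uint8 arrays, promoted to avoid wrap-around)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh
```

The resulting region would then be filtered by face color and handed to the PCA stage for face detection and pose estimation.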

The navigation method of mobile robot using a omni-directional position detection system (전방향 위치검출 시스템을 이용한 이동로봇의 주행방법)

  • Ryu, Ji-Hyoung;Kim, Jee-Hong;Lee, Chang-Goo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.2 / pp.237-242 / 2009
  • Compared with fixed robots, mobile robots have the advantage of an extended workspace. However, exploiting this advantage requires sensors that detect the mobile robot's position and guide it to its goal point. This article describes a navigation method for a mobile robot using an omni-directional position detection system. The system supplies concise position data to a processor using simple devices: a conic mirror and a single camera. When the user designates a goal point, the system corrects the error by comparing the robot's heading angle and position with the goal. As a result, the system reduces the image processing time needed to search for the target during user-commanded navigation.
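
With a conic mirror and a single camera, the bearing of a goal marker is simply the angle of its pixel around the image center, which might be computed as below (the robot's front is assumed to lie along the image +x axis, an assumption of this sketch):

```python
import math

def heading_error(target_px, center_px):
    """Bearing of the goal marker relative to the robot: with a conic
    mirror, the angle of the marker's pixel around the image center is
    the direction to the target, in radians in (-pi, pi]."""
    dx = target_px[0] - center_px[0]
    dy = center_px[1] - target_px[1]    # image y axis grows downward
    return math.atan2(dy, dx)
```

The robot would then rotate by this angle while approaching the goal, driving the error toward zero.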