• Title/Summary/Keyword: omni-directional image

Search Results: 54

Tele-presence System using Homography-based Camera Tracking Method (호모그래피기반의 카메라 추적기술을 이용한 텔레프레즌스 시스템)

  • Kim, Tae-Hyub;Choi, Yoon-Seok;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.3 / pp.27-33 / 2012
  • Tele-presence and tele-operation techniques build an immersive scene and a control environment for a distant user. This paper presents a novel tele-presence system that uses camera tracking based on planar homography. First, the user wears an HMD (head-mounted display) equipped with a camera, and his/her head motion is estimated. From the panoramic image captured by the omni-directional camera mounted on the mobile robot, a viewing image for the user is generated and displayed on the HMD. The homography of a 3D plane with markers is used to obtain the user's head motion. For the performance evaluation, the camera tracking results of ARToolkit and of the homography-based method are compared with the actually measured camera positions.
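The core of the tracking step above is that a 3x3 planar homography maps marker points on the 3D plane to their pixel positions. A minimal sketch of that mapping (with a hypothetical, hand-picked H; the paper estimates H from marker correspondences and decomposes it into head rotation and translation):

```python
# Hypothetical example: apply a planar homography H to a marker corner.
# In the paper, H is estimated from marker correspondences and then
# decomposed to recover the user's head pose; here we only show the mapping.

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography (homogeneous coordinates)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Identity-plus-translation homography: shifts the plane by (10, 5) pixels.
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 5.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (100.0, 50.0)))  # -> (110.0, 55.0)
```

A general H also encodes rotation and perspective, which is why dividing by the third homogeneous coordinate `w` is required.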

Real-time Human Detection under Omni-directional Camera based on CNN with Unified Detection and AGMM for Visual Surveillance

  • Nguyen, Thanh Binh;Nguyen, Van Tuan;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1345-1360 / 2016
  • In this paper, we propose a new real-time human detection method for omni-directional cameras for visual surveillance, based on a CNN with unified detection and an AGMM. Among CNN-based state-of-the-art object detection methods, the YOLO model offers very fast detection but lower accuracy. The proposed method adapts the unified detection CNN of the YOLO model so that it is strengthened by additional foreground context information obtained from a pre-stage AGMM. The computational time added by the AGMM processing is compensated by the speed-up gained from using 2-D input data, consisting of grey-level image data and foreground context information, instead of 3-D color input data. Various experiments show that the proposed method is more accurate and more robust to environmental changes than the YOLO-based human detection method, with similar processing speed. Thus, it can be successfully employed for embedded surveillance applications.
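The AGMM pre-stage maintains adaptive per-pixel background statistics and flags deviating pixels as foreground. A much-simplified sketch of that idea, with a single running Gaussian per pixel (the real adaptive GMM keeps several modes per pixel):

```python
# Simplified single-Gaussian background model for one pixel; the paper's
# AGMM is more elaborate. Foreground flags like this one form the context
# channel that is stacked with the grey image as the 2-D CNN input.

def update_background(mean, var, pixel, alpha=0.05, k=2.5):
    """Return (is_foreground, new_mean, new_var) for one pixel sample."""
    dist2 = (pixel - mean) ** 2
    foreground = dist2 > (k * k) * var        # outside k sigma: foreground
    new_mean = (1 - alpha) * mean + alpha * pixel
    new_var = (1 - alpha) * var + alpha * dist2
    return foreground, new_mean, max(new_var, 1e-6)

mean, var = 100.0, 25.0                       # learned background statistics
fg, mean, var = update_background(mean, var, 200.0)  # sudden bright pixel
print(fg)  # -> True: flagged as foreground
```

The learning rate `alpha` and threshold `k` are illustrative values; in practice they trade off adaptation speed against false positives.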

Determination of 3D Object Coordinates from Overlapping Omni-directional Images Acquired by a Mobile Mapping System (모바일매핑시스템으로 취득한 중첩 전방위 영상으로부터 3차원 객체좌표의 결정)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.3 / pp.305-315 / 2010
  • This research aims to develop a method to determine the 3D coordinates of an object point from overlapping omni-directional images acquired by a ground mobile mapping system, and to assess their accuracy. In the proposed method, we first define an individual coordinate system on each sensor and on the object space, and determine the geometric relationships between the systems. Based on these systems and their relationships, we derive a straight line of candidate object points for a point in an omni-directional image, and determine the 3D coordinates of the object point by intersecting the pair of straight lines derived from a pair of matched points. For the accuracy assessment and analysis, we compared the object coordinates determined by the proposed method with those measured by GPS and a total station. According to the experimental results, with an appropriate baseline length and mutual positions between cameras and objects, the relative coordinates of the object point can be determined with an accuracy of several centimeters. The accuracy of the absolute coordinates ranges from several centimeters to 1 m due to systematic errors. In the future, we plan to improve the accuracy of the absolute coordinates by determining the relationship between the camera and GPS/INS coordinate systems more precisely and by calibrating the omni-directional camera.
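Intersecting the two derived rays is the classic triangulation step. Because noisy rays rarely meet exactly, a common formulation (a sketch under that assumption, not the paper's exact implementation) takes the midpoint of the shortest segment between them:

```python
# Midpoint triangulation of two 3D rays p_i + t_i * d_i.
# With noisy matches the rays are skew, so we return the midpoint of
# the shortest connecting segment as the estimated object point.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add_scaled(p, d, t): return [pi + t * di for pi, di in zip(p, d)]

def triangulate(p1, d1, p2, d2):
    r = sub(p2, p1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b                  # zero only for parallel rays
    t1 = (e * c - b * f) / denom
    t2 = (b * e - a * f) / denom
    q1 = add_scaled(p1, d1, t1)            # closest point on ray 1
    q2 = add_scaled(p2, d2, t2)            # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Rays from two camera positions toward the same object: they meet at (1,0,1).
p = triangulate([0, 0, 0], [1, 0, 1], [2, 0, 0], [-1, 0, 1])
print(p)  # -> [1.0, 0.0, 1.0]
```

The length of `sub(q1, q2)` also gives a useful residual: a large gap signals a bad match or calibration error, which relates to the baseline-dependent accuracy the abstract reports.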

A Study of Effective Method to Update the Database for Road Traffic Facilities Using Digital Image Processing and Pattern Recognition (수치영상처리 및 패턴 인식에 의한 도로교통시설물 DB의 효율적 갱신방안 연구)

  • Choi, Joon-Seog;Kang, Joon-Mook
    • Journal of Korean Society for Geospatial Information Science / v.20 no.2 / pp.31-37 / 2012
  • Because of road construction and expansion, the road traffic facilities DB requires more updates each year, and with the growing number of drivers and cars, safety signs require continuous management and additional installation. To update the safety sign database promptly, we developed an automatic recognition function for safety signs and analyzed coordinate accuracy. The purpose of this study is to propose methods to update road traffic facilities efficiently. For this purpose, an omni-directional camera was calibrated for the acquisition of 3-dimensional coordinates and integrated with a GPS/IMU/DMI system, and image processing was applied. Based on this experiment, we propose an effective method to update the road traffic facilities database for digital maps.

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multi-robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, to enlarge the camera's viewing angle. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the robot formation based on the moving directions and velocities of the robots, which are obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.
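Once a neighbour robot's region has been tracked across two frames (SURF plus Lucas-Kanade optical flow in the paper), its motion reduces to a displacement between region centroids. A toy sketch, with made-up pixel coordinates, of turning that displacement into the heading and speed a formation controller would consume:

```python
import math

# Hypothetical example: derive a tracked robot's speed and heading from
# its region centroid in two consecutive frames taken dt seconds apart.
# This stands in for the paper's optical-flow output, not replaces it.

def motion_from_centroids(c_prev, c_curr, dt):
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    speed = math.hypot(dx, dy) / dt            # pixels per second
    heading = math.degrees(math.atan2(dy, dx)) # direction of travel
    return speed, heading

speed, heading = motion_from_centroids((100, 100), (103, 104), dt=0.1)
print(round(speed, 1), round(heading, 1))  # -> 50.0 53.1
```

In a real fisheye image the pixel displacement would first be corrected for the lens distortion before being interpreted as metric motion.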

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.6 no.2 / pp.45-50 / 2006
  • In order to generate a photo-realistic synthesized image, we should reconstruct the light environment by 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights in the real image, both global and local, which are used to illuminate the synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and the light sources in the scene are identified automatically from the correspondences between images, without a priori camera calibration. The light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, achieves photo-realistic image synthesis. Animators and lighting experts in the film and animation industry are expected to benefit highly from it.
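An HDR radiance map combines several exposures of the same scene so that both dark and bright regions are represented. A heavily simplified, linear-response sketch of the merge (real pipelines, e.g. Debevec-style recovery, also estimate the camera response curve, which is omitted here):

```python
# Toy HDR merge for one pixel: radiance is a weighted average of
# (pixel_value / exposure_time), where a "hat" weight trusts mid-range
# values over near-saturated or near-black ones. Linear camera response
# is assumed for simplicity.

def hat_weight(z, z_min=0.0, z_max=255.0):
    mid = 0.5 * (z_min + z_max)
    return z - z_min if z <= mid else z_max - z

def radiance(samples):
    """samples: list of (pixel_value, exposure_time) for one pixel."""
    num = sum(hat_weight(z) * (z / t) for z, t in samples)
    den = sum(hat_weight(z) for z, t in samples)
    return num / den

# The same scene radiance seen at two exposure times: estimates agree.
print(round(radiance([(50.0, 0.5), (200.0, 2.0)]), 1))  # -> 100.0
```

Applying this per pixel over the fisheye exposure stack yields the omni-directional radiance map from which bright light-source regions can be segmented.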


A Study on the Production Efficiency of Movie Filming Environment Using 360° VR (360VR을 활용한 영화촬영 환경을 위한 제작 효율성 연구)

  • Lee, Young-suk;Kim, Jungwhan
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.2036-2043 / 2016
  • 360° Virtual Reality (VR) live-action movies are filmed by attaching multiple cameras to a rig to shoot images omni-directionally. Especially for a live-action film that requires a variety of scenes, the director of photography and the staff usually have to operate the rigged cameras directly all around the scene and edit the footage during the post-production stage, so the entire process can incur much time and high cost. However, it would be possible to acquire high-quality omni-directional images with fewer staff if the camera rig could be controlled remotely to allow more flexible camera movement. Thus, a 360° VR filming system with a remote-controlled camera rig is proposed in this study. With this system, movie producers will be able to create films that provide greater immersion.

Implementation of Omni-directional Image Viewer Program for Effective Monitoring (효과적인 감시를 위한 전방위 영상 기반 뷰어 프로그램 구현)

  • Jeon, So-Yeon;Kim, Cheong-Hwa;Park, Goo-Man
    • Journal of Broadcast Engineering / v.23 no.6 / pp.939-946 / 2018
  • In this paper, we implement a viewer program that enables effective monitoring with omni-directional images. The program consists of four modes: Normal mode, ROI (Region of Interest) mode, Tracking mode, and Auto-rotation mode, and the results for each mode are displayed simultaneously. In Normal mode, the wide-angle image is rendered as a spherical image that supports pan, tilt, and zoom. In ROI mode, a selected area is displayed enlarged. In Auto-rotation mode, the object position is mapped to the rotation angle of the spherical image so that the object followed in Tracking mode does not deviate from the view. Parallel programming is used to process the multiple modes, improving the processing speed. Compared with surveillance systems that have a limited angle of view, this has the advantage that various angles can be observed.
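Rendering pan/tilt views from a 360° image rests on a mapping between panorama pixels and angles on the sphere. A minimal sketch, assuming the common equirectangular layout (the paper does not state its projection):

```python
# Assumed equirectangular panorama: longitude spans the image width,
# latitude spans the height. This converts a pixel to the pan/tilt pair
# a viewer would use to orient the spherical view (e.g. toward a tracked
# object in Auto-rotation mode).

def pixel_to_pan_tilt(u, v, width, height):
    """Map equirectangular pixel (u, v) to (pan, tilt) in degrees."""
    pan = (u / width) * 360.0 - 180.0    # -180..180 around the sphere
    tilt = 90.0 - (v / height) * 180.0   # +90 up, -90 down
    return pan, tilt

print(pixel_to_pan_tilt(1920, 960, 3840, 1920))  # -> (0.0, 0.0) image centre
```

The inverse mapping, angles back to pixels, is what the Normal-mode renderer samples when drawing the perspective window.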

A Study on Automatic Detection of Speed Bump by using Mathematical Morphology Image Filters while Driving (수학적 형태학 처리를 통한 주행 중 과속 방지턱 자동 탐지 방안)

  • Joo, Yong Jin;Hahm, Chang Hahk
    • Journal of Korean Society for Geospatial Information Science / v.21 no.3 / pp.55-62 / 2013
  • This paper aims to detect speed bumps using an omni-directional camera and to suggest a real-time update scheme for speed bumps through a vision-based approach. In order to detect speed bumps from a sequence of camera images, noise must be removed, and spots whose shape and pattern match a speed bump must be detected first. Since a speed bump has a regular pattern of white and yellow areas, we extracted speed bumps on the road by applying erosion and dilation morphological operations and by using the HSV color model. By collecting panoramic images from the camera, we are able to detect the target object and calculate its distance through GPS log data. Finally, we evaluated the accuracy of the obtained results and of the detection algorithm by implementing SLAMS (Simultaneous Localization and Mapping System).
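The morphological step works on a binary mask (here assumed to come from the HSV thresholding of the white/yellow stripes): erosion keeps a pixel only if its whole 3x3 neighbourhood is set, and dilation sets any pixel with at least one set neighbour, so erode-then-dilate (opening) removes small noise while preserving solid regions. A toy sketch:

```python
# Binary 3x3 erosion and dilation on a small 2D grid (lists of 0/1).
# Opening = dilate(erode(mask)): isolated noise pixels vanish, while
# larger stripe-shaped regions such as a speed bump survive.

def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

# A lone noise pixel disappears under opening.
noise = [[0] * 5 for _ in range(5)]
noise[2][2] = 1
print(sum(map(sum, dilate(erode(noise)))))  # -> 0: noise removed
```

Running the same opening on a mask containing a filled rectangle leaves the rectangle essentially intact, which is the asymmetry the detection step relies on.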

Design and Implementation of Automatic Detection Method of Corners of Grid Pattern from Distortion Corrected Image (왜곡보정 영상에서의 그리드 패턴 코너의 자동 검출 방법의 설계 및 구현)

  • Cheon, Sweung-Hwan;Jang, Jong-Wook;Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.11 / pp.2645-2652 / 2013
  • For a variety of vision systems, such as car omni-directional surveillance systems and robot vision systems, multiple cameras are installed and used. In order to detect the corners of a grid pattern in AVM (Around View Monitoring) systems, the non-linear radial distortion of the image obtained from a wide-angle camera must first be corrected, and the grid corners must then be detected in the distortion-corrected image. Although there are corner detection methods for AVM systems such as sub-pixel refinement and the Hough transformation, it is difficult to achieve automatic detection with the sub-pixel method and accuracy with the Hough transformation. Therefore, by designing, implementing, and evaluating the automatic detection method proposed in this paper, which accurately detects corners in the distortion-corrected image, we showed that it can be applied to AVM systems.
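The underlying observation is that, after distortion correction, the grid's lines are straight again, so its corners lie at the crossings of horizontal and vertical lines. A deliberately simplified sketch on a synthetic binary grid (not the paper's algorithm, which works on real images and refines candidates):

```python
# Toy grid-corner detection: on an ideal binary grid image, a corner is
# a pixel whose entire row and entire column are filled line pixels.
# Real detectors relax "entire" to "mostly" and refine to sub-pixel.

def grid_corners(img):
    h, w = len(img), len(img[0])
    full_rows = [y for y in range(h) if sum(img[y]) == w]
    full_cols = [x for x in range(w) if sum(img[y][x] for y in range(h)) == h]
    return [(x, y) for y in full_rows for x in full_cols]

# 5x5 image with one horizontal line (row 2) and one vertical line (col 3).
img = [[0] * 5 for _ in range(5)]
for x in range(5):
    img[2][x] = 1
for y in range(5):
    img[y][3] = 1
print(grid_corners(img))  # -> [(3, 2)]
```

Residual distortion bends the lines and breaks this row/column assumption, which is why correcting the radial distortion must precede the corner search.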