• Title/Summary/Keyword: Multiple camera


View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of Multimedia Information System / v.1 no.1 / pp.1-10 / 2014
  • In this paper, we propose a new view synthesis technique for coding of multi-view color and depth data in arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. We can therefore enhance the visual quality of the views reconstructed from multiple LDIs compared with that from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
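As a rough illustration of the clustering step described above, the sketch below groups 3-D camera positions with plain k-means and picks the camera nearest each centroid as an LDI reference view. Both the k-means criterion and the nearest-to-centroid reference choice are assumptions for illustration; the abstract does not specify the paper's clustering algorithm.

```python
import numpy as np

def cluster_cameras(positions, k, iters=20, seed=0):
    """Group 3-D camera positions into k clusters (plain k-means).

    The camera nearest each cluster centre can then serve as the
    reference view for one layered depth image (LDI).
    """
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        # Assign every camera to its nearest cluster centre.
        d = np.linalg.norm(positions[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned cameras.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = positions[labels == j].mean(axis=0)
    # Pick the actual camera closest to each centre as the LDI reference.
    refs = [int(np.linalg.norm(positions - c, axis=1).argmin()) for c in centers]
    return labels, refs
```

With two well-separated camera groups, the two returned reference indices fall one in each group.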


Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to solve such issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
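A minimal sketch of the depth-from-LiDAR step above: a panoramic pixel is lifted to a viewing ray and scaled by the LiDAR range to obtain a metric 3-D point, sidestepping monocular scale ambiguity. The equirectangular camera model used here is an assumption; PanoSLAM's exact panoramic parametrization may differ.

```python
import math

def pixel_to_ray(u, v, width, height):
    """Convert an equirectangular (panoramic) pixel to a unit direction.

    Assumed model: u spans azimuth [-pi, pi), v spans elevation
    [pi/2, -pi/2] from top to bottom of the panorama.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi      # azimuth
    lat = math.pi / 2.0 - (v / height) * math.pi     # elevation
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def lift_with_lidar_depth(u, v, width, height, depth):
    """Scale the viewing ray by a LiDAR range to get a metric 3-D point,
    avoiding the scale ambiguity of monocular triangulation."""
    x, y, z = pixel_to_ray(u, v, width, height)
    return (x * depth, y * depth, z * depth)
```

For instance, the centre pixel of the panorama maps to the forward axis, so a 3 m LiDAR return there lifts to a point 3 m straight ahead.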

Locally Initiating Line-Based Object Association in Large Scale Multiple Cameras Environment

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin;Cho, We-Duke
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.3 / pp.358-379 / 2010
  • Multiple object association is an important capability in visual surveillance systems with multiple cameras. In this paper, we introduce locally initiating line-based object association with the parallel projection camera model, which is applicable to situations without a common (ground) plane. The parallel projection camera model supports camera movement (i.e., panning, tilting, and zooming) by using a simple table-based compensation for non-ideal camera parameters. We propose a threshold-distance-based homographic line generation algorithm that accounts for uncertain parameters such as transformation error, height uncertainty of objects, and synchronization issues between cameras. Thus, the proposed algorithm associates multiple objects on demand in surveillance systems where the camera configuration changes dynamically. We verify the proposed method with actual image frames. Finally, we discuss a strategy to improve association performance by using temporal and spatial redundancy.
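A hedged sketch of threshold-based association across two views: transfer a point from camera A into camera B via a homography and accept the nearest candidate only if it lies within a distance threshold, which absorbs transformation error, object-height uncertainty, and synchronization error as the abstract describes. This point-transfer rule is a simplification of the paper's line-based algorithm.

```python
import numpy as np

def associate_by_homography(pt_a, candidates_b, H, threshold):
    """Transfer pt_a from camera A into camera B via homography H and
    return the index of the nearest candidate within the threshold,
    or None if no candidate is close enough.
    """
    p = np.array([pt_a[0], pt_a[1], 1.0])
    q = H @ p
    q = q[:2] / q[2]                      # transferred (inhomogeneous) point
    dists = [np.linalg.norm(q - np.asarray(c, float)) for c in candidates_b]
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None
```

A loose threshold tolerates the listed uncertainties; a tight one rejects ambiguous matches instead of guessing.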

Real-Time Detection of Moving Objects from Shaking Camera Based on the Multiple Background Model and Temporal Median Background Model (다중 배경모델과 순시적 중앙값 배경모델을 이용한 불안정 상태 카메라로부터의 실시간 이동물체 검출)

  • Kim, Tae-Ho;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.16 no.3 / pp.269-276 / 2010
  • In this paper, we present a method for detecting moving objects based on two background models: the MBM (Multiple Background Model) and the TMBM (Temporal Median Background Model). These models help interpret the multi-layered environment in images taken by a shaking camera. Because both background models are pixel-based, they are corrupted by camera movement. Therefore, a correlation coefficient is used to calculate the similarity between consecutive images and measure the camera motion vector, which indicates the camera movement. To calculate the correlation coefficient, we choose selected regions in the previous image and corresponding search areas in the current image, and obtain a displacement vector for each region through the correlation process. Since every selected region has its own displacement vector, the global maximum of the histogram of displacement vectors gives the camera motion vector between consecutive images. The MBM classifies the intensity distribution of each pixel, tracked continuously through the camera motion vector, into multiple clusters. However, the MBM is weakly sensitive to temporal intensity variation, so we use the TMBM to compensate for this weakness. In video-based experiments, we verify that the presented algorithm needs around 49 ms to generate the two background models and detect moving objects.
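The camera-motion estimate described above can be sketched as follows: for each selected region in the previous frame, search a window in the current frame for the displacement with the highest correlation coefficient, then take the mode (the histogram's global maximum) of the per-region displacements as the camera motion vector. Patch and search-window geometry here are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def camera_motion_vector(prev, curr, patch, search):
    """Estimate the global camera motion (dy, dx) between frames by
    voting: each region's best-correlated displacement casts one vote,
    and the most common displacement wins."""
    h, w = prev.shape
    votes = {}
    for y in range(search, h - patch - search, patch):
        for x in range(search, w - patch - search, patch):
            region = prev[y:y + patch, x:x + patch]
            best, best_d = -2.0, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[y + dy:y + dy + patch, x + dx:x + dx + patch]
                    c = ncc(region, cand)
                    if c > best:
                        best, best_d = c, (dy, dx)
            votes[best_d] = votes.get(best_d, 0) + 1
    return max(votes, key=votes.get)   # histogram's global maximum
```

On a frame pair related by a pure shift, every region votes for the same displacement, so the mode recovers the shift exactly.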

Multiple Camera-based Person Correspondence using Color Distribution and Context Information of Human Body (색상 분포 및 인체의 상황정보를 활용한 다중카메라 기반의 사람 대응)

  • Chae, Hyun-Uk;Seo, Dong-Wook;Kang, Suk-Ju;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.15 no.9 / pp.939-945 / 2009
  • In this paper, we propose a method that matches people across structured spaces observed by multiple cameras. Correspondence plays an important role in using a multiple-camera system. The proposed method consists of three main steps. First, moving objects are detected by background subtraction using a multiple background model; temporal differencing is simultaneously used to reduce noise from temporal change. When more than two people are detected, the detected regions are divided into separate labels, each representing an individual person. Second, each detected region is segmented into features for correspondence by a criterion based on the color distribution and context information of the human body. The segmented region is represented as a set of blobs, each described by a Gaussian probability distribution; that is, a person model is generated from the blobs as a Gaussian mixture model (GMM). Finally, the GMM of each person from one camera is matched with the models of people from different cameras by maximum likelihood, identifying the same person in different views. Experiments were performed on three scenarios, and the performance was verified with qualitative and quantitative results.
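The final matching step can be sketched with a toy one-dimensional colour feature: evaluate the likelihood of one person's observed samples under each candidate GMM from another camera and pick the maximum. The 1-D feature and the component parameters below are illustrative assumptions; the paper builds its GMMs from body blobs in colour space.

```python
import math

def gauss_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_log_likelihood(samples, gmm):
    """Log-likelihood of colour samples under a person's GMM.
    gmm is a list of (weight, mean, variance) components."""
    ll = 0.0
    for x in samples:
        p = sum(w * gauss_pdf(x, m, v) for w, m, v in gmm)
        ll += math.log(max(p, 1e-300))
    return ll

def match_person(samples, models):
    """Pick the person model (from another camera) that maximizes the
    likelihood of the observed samples."""
    scores = [gmm_log_likelihood(samples, g) for g in models]
    return scores.index(max(scores))
```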

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems are widely used in 3D surface profiling. Typically, such a system is composed of a camera and a laser that projects a line onto the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the camera's coordinate system; therefore, 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given tasks, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot and gives 3D reconstruction in both the camera and laser frames. This is achieved with a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
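Once the stripe plane is calibrated, structured-light reconstruction reduces to intersecting each pixel's viewing ray with that plane. A minimal sketch in the camera frame, with the calibration step itself omitted:

```python
import numpy as np

def reconstruct_stripe_point(pixel_ray, plane):
    """Intersect a camera viewing ray with the calibrated laser plane.

    plane = (n, d) with n . X + d = 0 in the camera frame; pixel_ray is
    the direction of the pixel's ray through the optical centre. This is
    the standard structured-light triangulation the abstract refers to.
    """
    n, d = np.asarray(plane[0], float), float(plane[1])
    r = np.asarray(pixel_ray, float)
    t = -d / (n @ r)          # ray X = t * r, constrained to the plane
    return t * r
```

For a laser plane z = 2 in front of the camera, the ray through the principal point intersects it at (0, 0, 2).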

Compressed Sensing-based Multiple-target Tracking Algorithm for Ad Hoc Camera Sensor Networks

  • Lu, Xu;Cheng, Lianglun;Liu, Jun;Chen, Rongjun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1287-1300 / 2018
  • Target-tracking algorithms based on ad hoc camera sensor networks (ACSNs) utilize the distributed observation capability of nodes to achieve accurate target tracking. A compressed sensing-based multiple-target tracking algorithm (CSMTTA) for ACSNs is proposed in this work based on a study of the camera node observation projection model and the compressed sensing model. The proposed algorithm includes reconstruction of observed signals and evaluation of target locations. It reconstructs observed signals by solving an L1-norm convex optimization problem and forecasts node groups to evaluate a target's location from the target's motion features. Simulation results show that CSMTTA can accurately recover the compressed observation information under sparse sampling, achieve high target-tracking accuracy, and accomplish the distributed tracking task for multiple mobile targets.

Indoor Positioning System Based on Camera Sensor Network for Mobile Robot Localization in Indoor Environments (실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템)

  • Ji, Yonghoon;Yamashita, Atsushi;Asama, Hajime
    • Journal of Institute of Control, Robotics and Systems / v.22 no.11 / pp.952-959 / 2016
  • This paper proposes a novel indoor positioning system (IPS) that uses a calibrated camera sensor network and dense 3D map information. The IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, the proposed IPS can provide accurate position information when objects (e.g., a mobile robot or pedestrians) are detected in multiple camera views. We evaluate the proposed IPS in a real environment with moving objects observed by a wireless camera sensor network. The results demonstrate that the proposed IPS provides accurate position information for moving objects, which can improve localization performance for mobile robot operation.
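A minimal sketch of the bird's-eye mapping: each camera's pixels are projected onto the ground plane with a calibrated homography, and detections of the same object from multiple views are fused. Both the homography values and the simple averaging fusion below are illustrative assumptions, not the paper's calibrated setup.

```python
import numpy as np

def image_to_ground(H_ground, u, v):
    """Map an image pixel onto the ground plane with a calibrated
    homography (image plane -> ground plane, metric units).
    Stitching such per-camera mappings yields a bird's-eye image."""
    p = H_ground @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def fuse_detections(ground_points):
    """Fuse one object's ground positions seen from multiple cameras
    (simple average; the paper's exact fusion rule is not specified)."""
    pts = np.asarray(ground_points, float)
    return pts.mean(axis=0)
```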

Analytic simulator and image generator of multiple-scattering Compton camera for prompt gamma ray imaging

  • Kim, Soo Mee
    • Biomedical Engineering Letters / v.8 no.4 / pp.383-392 / 2018
  • For prompt gamma ray imaging in biomedical applications and environmental radiation monitoring, we propose a multiple-scattering Compton camera (MSCC). The MSCC consists of three or more semiconductor layers with good energy resolution and has the potential for simultaneous detection and differentiation of multiple radio-isotopes based on the measured energies, as well as three-dimensional (3D) imaging of the radio-isotope distribution. In this study, we developed an analytic simulator and a 3D image generator for an MSCC, including physical models of the radiation source emission and detection processes, which can be used for geometry and performance prediction prior to the construction of a real system. The analytic simulator records coincidence detections of successive interactions in the multiple detector layers. In these successive interactions, the emission direction of the incident gamma ray, the scattering angle, and the changed traveling path after each Compton scattering interaction were determined by a conical-surface uniform random number generator (RNG) and by a Klein-Nishina RNG. The 3D image generator has two functions: recovering the initial source energy spectrum and the 3D spatial distribution of the source. We evaluated the analytic simulator and image generator with two point radiation sources of different energies (Cs-137 and Co-60) and an MSCC comprising three detector layers. The recovered initial energies of the incident radiations were well differentiated from the generated MSCC events, and correspondingly we could obtain a multi-tracer image combining the two differentiated images. The developed analytic simulator emulates the randomness of the detection process of a multiple-scattering Compton camera, including inherent degradation factors of the detectors such as limited spatial and energy resolution.
The Doppler-broadening effect, owing to the momentum distribution of electrons in Compton scattering, was not considered in the detection process, because most isotopes of interest for biomedical and environmental applications have high energies that are less sensitive to Doppler broadening. The analytic simulator and image generator for the MSCC can be utilized to determine optimal geometrical parameters, such as the distances between detectors and the detector size, which affect the imaging performance of the Compton camera, prior to the development of a real system.
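The scattering step the simulator randomizes follows standard Compton kinematics and the Klein-Nishina cross-section. Below is a sketch of sampling the scattering angle by rejection; the paper's exact RNG scheme is not detailed in the abstract, so this is an illustrative implementation.

```python
import math
import random

MEC2 = 511.0  # electron rest energy, keV

def scattered_energy(e_in, cos_theta):
    """Compton kinematics: photon energy after scattering by angle theta."""
    return e_in / (1.0 + (e_in / MEC2) * (1.0 - cos_theta))

def klein_nishina(e_in, cos_theta):
    """Unnormalized Klein-Nishina differential cross-section."""
    r = scattered_energy(e_in, cos_theta) / e_in
    sin2 = 1.0 - cos_theta ** 2
    return 0.5 * r * r * (r + 1.0 / r - sin2)

def sample_scatter_angle(e_in, rng=random):
    """Rejection-sample cos(theta) from the Klein-Nishina distribution,
    mirroring the RNG-driven scattering step of the analytic simulator."""
    peak = klein_nishina(e_in, 1.0)   # KN is maximal at forward scattering
    while True:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, peak) <= klein_nishina(e_in, c):
            return c
```

For the Cs-137 line at 662 keV, these kinematics give about 288 keV at 90 degrees and the familiar ~184 keV backscatter energy.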

Effect of Noise Reduction by Installation of a Point to Point Speed Camera (실측자료를 통한 구간단속카메라의 소음저감효과 분석)

  • Son, Jin Hee;Chun, Hyung-Joon;Choung, Tae Ryaug;Park, Young Min;Kim, Deuk Sung
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.27 no.1 / pp.57-64 / 2017
  • This study examined the noise reduction effect of installing a 'point to point speed camera' to control vehicle speed. A multiple regression analysis was performed to determine how the relationship between the noise level and parameters such as measured traffic volume, heavy-vehicle rate, and weighted average speed changed with and without the 'point to point speed camera'. The analysis shows that the noise reduction effect increased at lower traffic volumes and decreased at higher traffic volumes. The noise reduction effect of the camera also differed between measurement points; the cause of this difference was determined to be camera positions inadequate for noise reduction. Further study is required to improve the noise reduction effect of the 'point to point speed camera', such as adjusting the camera position.
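The multiple regression described above can be sketched as ordinary least squares on the three predictors. The variable names and the synthetic data in the usage below are hypothetical; the study's actual coefficients are not given in the abstract.

```python
import numpy as np

def fit_noise_model(traffic, heavy_rate, speed, noise):
    """Fit noise = b0 + b1*traffic + b2*heavy_rate + b3*speed by
    ordinary least squares, the multiple-regression setup the study
    describes; returns the coefficient vector (b0, b1, b2, b3)."""
    X = np.column_stack([np.ones(len(traffic)), traffic, heavy_rate, speed])
    beta, *_ = np.linalg.lstsq(X, np.asarray(noise, float), rcond=None)
    return beta
```

Fitting separate models for the with-camera and without-camera measurements and comparing the coefficients is one way to quantify how the camera changes the noise-traffic relationship.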