• Title/Summary/Keyword: multiple cameras

Procedural Geometry Calibration and Color Correction ToolKit for Multiple Cameras (절차적 멀티카메라 기하 및 색상 정보 보정 툴킷)

  • Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.4, pp.615-618, 2021
  • Recently, 3D reconstruction of real objects with multiple cameras has been widely used for many services such as VR/AR, motion capture, and plenoptic video generation. Accurate 3D reconstruction requires geometry and color matching between the cameras. However, previous calibration and correction methods for geometry (internal and external parameters) and color (intensity) are difficult for non-experts to perform manually. In this paper, we propose a toolkit for procedural geometry calibration and color correction among cameras of different positions and types. Our toolkit provides an easy user interface and proved effective in setting up multiple cameras for reconstruction.
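
The toolkit itself is not included in this listing, so the following is only a minimal sketch of the two steps the abstract names, assuming OpenCV, a chessboard target visible to each camera, and a simple per-channel gain model for color matching; the function names and parameters are illustrative, not the authors' implementation.

    import cv2
    import numpy as np

    def calibrate_intrinsics(gray_views, board_size=(9, 6), square_mm=25.0):
        """Estimate one camera's intrinsics from several chessboard views."""
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
        obj_pts, img_pts = [], []
        for gray in gray_views:
            found, corners = cv2.findChessboardCorners(gray, board_size)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
        _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                               gray_views[0].shape[::-1], None, None)
        return K, dist  # intrinsic matrix and distortion coefficients

    def color_gains(reference_bgr, target_bgr):
        """Per-channel gains matching a target camera's mean color to a reference camera."""
        return reference_bgr.mean(axis=(0, 1)) / (target_bgr.mean(axis=(0, 1)) + 1e-6)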

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.11, pp.4103-4117, 2014
  • Image processing and computer vision algorithms are attracting increasing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of more flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the different vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
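
As a rough illustration of the EKF cycle the abstract refers to (not the parallel implementation or the camera measurement model of [1]), a minimal constant-velocity EKF for a planar robot pose, fusing one simplified pose measurement per camera, might look like the sketch below; the state layout and noise values are assumptions.

    import numpy as np

    class PoseEKF:
        def __init__(self):
            self.x = np.zeros(3)           # state: [x, y, heading]
            self.P = np.eye(3)             # state covariance
            self.Q = np.eye(3) * 0.01      # process noise
            self.R = np.eye(3) * 0.05      # per-camera measurement noise

        def predict(self, v, w, dt):
            """Propagate the pose with linear velocity v and angular velocity w."""
            x, y, th = self.x
            self.x = np.array([x + v * dt * np.cos(th),
                               y + v * dt * np.sin(th),
                               th + w * dt])
            F = np.array([[1, 0, -v * dt * np.sin(th)],
                          [0, 1,  v * dt * np.cos(th)],
                          [0, 0, 1]])
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z):
            """Fuse one camera's pose measurement; call once per camera each frame."""
            H = np.eye(3)                  # identity measurement model (simplification)
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - H @ self.x)
            self.P = (np.eye(3) - K @ H) @ self.P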

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.10a, pp.422-424, 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for deep-learning-based visual object recognition with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, so that the AV can navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network chosen to address this problem must also suit the capacity of the hardware. The detected and classified objects are localized on the basis of the 3D point cloud environment: the LiDAR point cloud data are first parsed, and a 3D Euclidean clustering method is applied, which localizes the objects accurately. We evaluated the method on our own dataset, collected with a VLP-16 LiDAR and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
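
The localization step named in the abstract relies on 3D Euclidean clustering of the LiDAR point cloud; a minimal sketch of that idea, assuming a NumPy array of XYZ points and SciPy's KD-tree (the authors' implementation and thresholds are not given in this listing), could be:

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points, radius=0.5, min_size=10):
        """Group points whose neighbors lie within `radius` metres of each other."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            queue, cluster = [seed], [seed]
            while queue:
                idx = queue.pop()
                for nb in tree.query_ball_point(points[idx], radius):
                    if nb in unvisited:
                        unvisited.remove(nb)
                        queue.append(nb)
                        cluster.append(nb)
            if len(cluster) >= min_size:
                clusters.append(np.array(cluster))
        return clusters  # each entry is an index array; its centroid localizes one object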

Locally Initiating Line-Based Object Association in Large Scale Multiple Cameras Environment

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin;Cho, We-Duke
    • KSII Transactions on Internet and Information Systems (TIIS), v.4 no.3, pp.358-379, 2010
  • Multiple object association is an important capability in visual surveillance systems with multiple cameras. In this paper, we introduce locally initiating line-based object association with the parallel projection camera model, which is applicable even when no common (ground) plane is available. The parallel projection camera model supports camera movement (i.e., panning, tilting, and zooming) by using a simple table-based compensation for non-ideal camera parameters. We propose a threshold-distance-based homographic line generation algorithm that takes into account uncertain parameters such as transformation error, the height uncertainty of objects, and synchronization issues between cameras. Thus, the proposed algorithm associates multiple objects on demand in surveillance systems where camera movement changes dynamically. We verify the proposed method with actual image frames. Finally, we discuss a strategy to improve the association performance by using temporal and spatial redundancy.
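
A much-simplified sketch of threshold-distance association between two views, assuming a known 3x3 homography H between the image planes, is shown below; a point-based test stands in for the paper's homographic-line test, and the parallel projection model is not reproduced.

    import numpy as np

    def transfer_point(H, pt):
        """Map an image point from camera A into camera B with homography H."""
        p = H @ np.array([pt[0], pt[1], 1.0])
        return p[:2] / p[2]

    def associate(H, foot_point_a, candidates_b, threshold_px=30.0):
        """Indices of camera-B detections within the threshold distance of the
        transferred camera-A foot point."""
        q = transfer_point(H, foot_point_a)
        d = np.linalg.norm(np.asarray(candidates_b, dtype=float) - q, axis=1)
        return np.where(d < threshold_px)[0]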

High accuracy online 3D-reconstruction by multiple cameras

  • Oota, Yoshikazu;Pan, Yaodong;Furuta, Katuhisa
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, 2005.06a, pp.1749-1752, 2005
  • For the online, highly accurate reconstruction of an object from visual information, linear reconstruction from multiple images is a popular method. Basically, this method needs many cameras or many shots taken from different viewpoints. It requires comparatively little computation, however, and is therefore well suited to real-time applications compared with other popular methods. In this paper, an online reconstruction system using more than three cameras is treated. An evaluation method for the cameras' positions and number is derived for the linear reconstruction method. To decrease errors caused by lens skew, the positional error between corresponding points is taken into consideration in the evaluation. The proposed evaluation method enables estimation of an adequate number of cameras and of feasible view locations. Additionally, a repeated search along epipolar lines enables estimation of hidden points. Comparison with the results of an average error analysis confirmed that the proposed methods work effectively.
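
The linear reconstruction the abstract refers to is commonly realized as direct linear transformation (DLT) triangulation; a minimal N-view sketch, assuming known 3x4 projection matrices and matched image points (the paper's camera-placement evaluation is not reproduced), is:

    import numpy as np

    def triangulate(P_list, pts):
        """Least-squares 3D point from N views via SVD of the stacked DLT system.
        P_list: list of 3x4 projection matrices; pts: matching (u, v) observations."""
        A = []
        for P, (u, v) in zip(P_list, pts):
            A.append(u * P[2] - P[0])
            A.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        X = Vt[-1]
        return X[:3] / X[3]   # dehomogenize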

Multiple Camera Collaboration Strategies for Dynamic Object Association

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS), v.4 no.6, pp.1169-1193, 2010
  • In this paper, we present and compare two different multiple-camera collaboration strategies for reducing false associations when finding the correspondence of objects. Collaboration matrices are defined with the required minimum separation for an effective collaboration, because homographic lines for object association are ineffective with insufficient separation. The first strategy uses the collaboration matrices to select, out of many cameras, the best pair with the maximum separation, so that they can collaborate efficiently on the object association. The association information in the selected cameras is propagated to the unselected cameras through global information constructed from the associated targets. While the first strategy requires a long operation time to achieve a high association rate due to the limited view of the best pair, it reduces the computational cost of using homographic lines. The second strategy initiates the collaboration process of object association for all camera pairings regardless of separation. In each collaboration process, only targets crossed by a homographic line transformed from the other collaborating camera generate homographic lines. While the repeated association processes improve the association performance, the number of homographic line transformations increases exponentially. The proposed methods are evaluated with real video sequences and compared in terms of computational cost and association performance. The simulation results demonstrate that the proposed methods effectively reduce the false association rate compared with basic pair-wise collaboration.
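
A minimal sketch of the first strategy's pair selection, assuming the collaboration matrix is reduced to an N x N matrix of pairwise camera separations (the matrix construction in the paper is not reproduced here), could be:

    import numpy as np

    def best_pair(separation):
        """separation: N x N symmetric matrix of pairwise camera separations."""
        sep = np.array(separation, dtype=float)
        np.fill_diagonal(sep, -np.inf)       # a camera cannot pair with itself
        i, j = np.unravel_index(np.argmax(sep), sep.shape)
        return (int(i), int(j)), sep[i, j]   # best camera pair and its separation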

A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak;Um, Gimun;Nang, Jongho
    • Journal of KIISE, v.44 no.12, pp.1319-1332, 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shots with the best view from multiple cameras has become an actively researched topic. Existing approaches have assumed that the background in the video is fixed. This paper instead proposes a best view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion with multiple cameras. Then, each frame from all cameras was analyzed to establish rules for selecting the best view. The frames selected by our system were compared with those human viewers indicated as most desirable. For the evaluation, we asked each of 20 non-specialists to pick the best and worst views. The set of views most often chosen as best by the viewers coincided with 54.5% of the frames selected by our proposed method. On the other hand, the set of views most often chosen as worst coincided with only 9% of the best-view shots selected by our method, demonstrating the efficacy of the proposed method.
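
The paper derives its own selection rules; purely as a hypothetical stand-in for what frame-level scoring of a tracked player could look like (bounding-box size and centeredness are illustrative criteria, not the authors' rules), consider:

    import numpy as np

    def view_score(frame_shape, player_box):
        """Higher is better: prefer a large, centered player region (illustrative only)."""
        h, w = frame_shape[:2]
        x1, y1, x2, y2 = player_box
        area = (x2 - x1) * (y2 - y1) / float(w * h)
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        offset = np.hypot(cx - w / 2.0, cy - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
        return area - 0.5 * offset

    def select_best_view(frames, boxes):
        """Pick the camera index whose frame scores highest at this time step."""
        scores = [view_score(f.shape, b) for f, b in zip(frames, boxes)]
        return int(np.argmax(scores))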

Real Time Object Tracking Method using Multiple Cameras (다중 카메라를 이용한 실시간 객체 추적 방법)

  • Jang, In-Tae;Kim, Dong-Woo;Song, Young-Jun;Kwon, Hyeok-Bong;Ahn, Jae-Hyeong
    • Journal of Korea Society of Industrial Information Systems, v.17 no.4, pp.51-59, 2012
  • Recently, research on object tracking using image processing has been active in the field of security and surveillance. Existing security and surveillance systems with multiple cameras have operated each camera independently, so tracking was difficult when the tracked object moved into another camera's monitored area. In this paper, we propose a method that automatically switches camera control by following the moving direction of the object across multiple cameras. The proposed method detects the object and tracks it using its color information and direction information; the color information is obtained from the hue channel, and the direction information is obtained from optical flow. The optical flow is computed not over the entire image but only over the object region, which reduces the computational complexity and makes real-time tracking possible. In addition, tracking objects automatically resolves the inconvenience of operating a security surveillance system built from existing cameras.
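
A minimal sketch of the two cues the abstract names, assuming OpenCV: a hue histogram describes the object's color, and sparse optical flow restricted to the object's bounding box gives its moving direction; the camera hand-over logic itself is not shown, and box coordinates are assumed to be integers.

    import cv2
    import numpy as np

    def hue_histogram(bgr_roi):
        """Normalized hue histogram of the object region (its color signature)."""
        hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        return cv2.normalize(hist, hist).flatten()

    def motion_direction(prev_gray, gray, box):
        """Mean optical-flow vector computed only inside the object's bounding box."""
        x1, y1, x2, y2 = box
        p0 = cv2.goodFeaturesToTrack(prev_gray[y1:y2, x1:x2], 50, 0.01, 5)
        if p0 is None:
            return np.zeros(2)
        p0 += np.array([[x1, y1]], dtype=np.float32)   # back to full-image coordinates
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good = st.flatten() == 1
        if not good.any():
            return np.zeros(2)
        return (p1[good] - p0[good]).reshape(-1, 2).mean(axis=0)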

A Study on Detecting Moving Objects using Multiple Fisheye Cameras (다중 어안 카메라를 이용한 움직이는 물체 검출 연구)

  • Bae, Kwang-Hyuk;Suhr, Jae-Kyu;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.4, pp.32-40, 2008
  • Since vision-based surveillance systems use conventional cameras with a narrow field of view, it is difficult to apply them in environments where the ceiling is low and the monitored area is wide. Increasing the number of cameras to overcome this problem raises the cost and complicates camera set-up. To address these problems, we propose a new surveillance system based on multiple fisheye cameras, each with a 180-degree field of view. The proposed method handles occlusions using the homography relation between the multiple fisheye cameras. In the experiment, four fisheye cameras were set up over a 17 × 14 m area at a height of 2.5 m, and five people wandered and crossed one another within this area. The detection rate of the proposed system was 83.0%, while that of a single camera was 46.1%.
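
As a simplified stand-in for the pipeline the abstract outlines (per-camera moving-object detection whose detections are then related across views by a homography), the following sketch uses OpenCV's MOG2 background subtractor; the fisheye-specific modelling and the paper's occlusion reasoning are not reproduced.

    import cv2
    import numpy as np

    subtractors = {}   # one background model per camera id

    def moving_blobs(cam_id, frame, min_area=500):
        """Bounding boxes of moving regions in one camera's frame."""
        sub = subtractors.setdefault(cam_id, cv2.createBackgroundSubtractorMOG2())
        mask = sub.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

    def to_reference(H, boxes):
        """Map each box's bottom-centre point into the reference view with homography H."""
        if not boxes:
            return np.empty((0, 2))
        pts = np.array([[x + w / 2.0, y + h, 1.0] for (x, y, w, h) in boxes]).T
        q = H @ pts
        return (q[:2] / q[2]).T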

Accident Detection System for Construction Sites Using Multiple Cameras and Object Detection (다중 카메라와 객체 탐지를 활용한 건설 현장 사고 감지 시스템)

  • Min hyung Kim;Min sung Kam;Ho sung Ryu;Jun hyeok Park;Min soo Jeon;Hyeong woo Choi;Jun-Ki Min
    • The Journal of the Convergence on Culture Technology, v.9 no.5, pp.605-611, 2023
  • Accidents at construction sites have a very high fatality rate because they tend to produce severely injured patients. Reducing this mortality rate requires a quick response, and systems that detect accidents using AI technology and cameras have been devised to respond quickly. However, because existing accident detection systems use only a single camera, they have blind spots and cannot detect every accident at a construction site. Therefore, in this paper we present a system that minimizes detection blind spots by using multiple cameras. Our implemented system extracts feature points from the images of multiple cameras with the YOLO-pose library and feeds the extracted feature points into a Long Short-Term Memory-based recurrent neural network to detect accidents. Our experimental results confirm that the proposed system shows high accuracy while minimizing detection blind spots by using multiple cameras.
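
A minimal PyTorch sketch of the classification stage described above, assuming 17 COCO-style keypoints per frame from a YOLO-pose detector and one binary accident score per clip; the layer sizes and sequence length are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class AccidentLSTM(nn.Module):
        def __init__(self, n_keypoints=17, hidden=64):
            super().__init__()
            # one (x, y) pair per keypoint per frame
            self.lstm = nn.LSTM(input_size=n_keypoints * 2, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, keypoint_seq):            # (batch, frames, n_keypoints * 2)
            _, (h_n, _) = self.lstm(keypoint_seq)
            return torch.sigmoid(self.head(h_n[-1]))   # accident probability per clip

    # Example: one 2-second clip at 15 fps from a single camera.
    model = AccidentLSTM()
    clip = torch.randn(1, 30, 34)
    print(model(clip))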