• Title/Summary/Keyword: multi camera


The Improved Joint Bayesian Method for Person Re-identification Across Different Camera

  • Hou, Ligang;Guo, Yingqiang;Cao, Jiangtao
    • Journal of Information Processing Systems
    • /
    • v.15 no.4
    • /
    • pp.785-796
    • /
    • 2019
  • Due to viewpoint, illumination, personal gait, and other background variations, person re-identification across cameras has been a challenging task in video surveillance. To address this problem, a novel method called Joint Bayesian across different cameras for person re-identification (JBR) is proposed. Motivated by the superior measurement ability of Joint Bayesian, a set of Joint Bayesian matrices is learned from different camera pairs. Combined with the global Joint Bayesian matrix, the proposed method unites the characteristics of multi-camera shooting and person re-identification, improving the precision of the similarity computed between two individuals by learning the transition between two cameras. The proposed method is evaluated on two large-scale re-ID datasets, Market-1501 and DukeMTMC-reID. The Rank-1 accuracy increases significantly, by about 3% and 4%, and the mAP improves by about 1% and 4%, respectively.
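The Joint Bayesian verification score underlying the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the per-camera-pair dictionary, the global fallback, and the blending weight `alpha` are assumptions introduced here to show how pairwise and global matrices might be combined.

```python
import numpy as np

def joint_bayesian_score(x1, x2, A, G):
    """Joint Bayesian log-likelihood ratio:
    r(x1, x2) = x1'A x1 + x2'A x2 - 2 x1'G x2."""
    return x1 @ A @ x1 + x2 @ A @ x2 - 2.0 * x1 @ G @ x2

def cross_camera_score(x1, cam1, x2, cam2, pairwise, global_AG, alpha=0.5):
    """Blend the score from a camera-pair-specific (A, G), if one was
    learned, with the score from the global (A, G). Hypothetical scheme."""
    A, G = pairwise.get((cam1, cam2), global_AG)
    Ag, Gg = global_AG
    return (alpha * joint_bayesian_score(x1, x2, A, G)
            + (1.0 - alpha) * joint_bayesian_score(x1, x2, Ag, Gg))
```

A higher score indicates the two feature vectors are more likely to belong to the same person.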

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering
    • /
    • v.9 no.3
    • /
    • pp.185-195
    • /
    • 2004
  • This paper presents a multi-depth map fusion method for 3D scene reconstruction that fuses depth maps obtained from stereo matching and from a depth camera. Traditional stereo matching techniques, which estimate disparities between two images, often produce inaccurate depth maps because of occlusions and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy and covers only a limited depth range. To obtain better depth estimates than either conventional technique alone, we propose a fusion method that merges the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps from stereo matching of 3-view images; a third depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
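The per-pixel selection step described above can be sketched as follows. The paper does not specify its selection rule, so this sketch assumes a simple confidence-based policy: take the most confident stereo estimate at each pixel and fall back to the depth-camera value where stereo confidence is low.

```python
import numpy as np

def fuse_depth_maps(stereo_depths, stereo_confs, sensor_depth, conf_thresh=0.5):
    """Per-pixel fusion of several stereo depth maps with a depth-camera map.
    stereo_depths, stereo_confs: lists of (H, W) arrays; sensor_depth: (H, W)."""
    depths = np.stack(stereo_depths)           # (N, H, W)
    confs = np.stack(stereo_confs)             # (N, H, W)
    best = np.argmax(confs, axis=0)            # index of best stereo map per pixel
    rows, cols = np.indices(sensor_depth.shape)
    fused = depths[best, rows, cols]
    best_conf = confs[best, rows, cols]
    # where even the best stereo estimate is unreliable, use the sensor value
    return np.where(best_conf < conf_thresh, sensor_depth, fused)
```

The threshold and the winner-take-all policy are placeholders; a real system would tune or learn them.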

Research about Multi-spectral Photographing System (PKNU No.2) Development (다중영상촬영을 위한 PKNU 2호 개발에 관한 연구)

  • 최철웅;김호용;전성우
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.291-305
    • /
    • 2003
  • Deploying geological and environmental information-gathering systems is costly, especially when such systems obtain remote sensing and photographic data through commercial satellites and aircraft. Besides the expensive equipment required, adverse weather conditions can further restrict a researcher's ability to collect data anywhere and anytime. To mitigate this problem, we have developed a compact, multi-spectral automatic aerial photographic system. The system's multi-spectral camera covers the visible (RGB) and near-infrared (NIR) bands (3032×2008 pixels). It also includes a thermal infrared camera with automatic balance control, and can be managed by a palm-top computer. Other features include a camera gimbal system, a GPS receiver, and a weather sensor. We evaluated the efficiency of this system in several field tests at the following locations: the Kyongsang-bukdo beach, the Nakdong River (at the Mulkeum-Namji and Koryung-Gumi sites), and the Kyungahn River. Its demonstrated ability in aerial photography, weather data acquisition, and GPS data acquisition shows its flexibility as a tool for environmental monitoring.

Three-axis Spring Element Modeling of Ball Bearing Applied to EO/IR Camera and Structural Response Analysis of EO/IR Camera (EO/IR 카메라에 적용된 볼 베어링의 3축 스프링 요소 모델 및 EO/IR 카메라의 구조 응답해석)

  • Cho, Hee-Keun;Rhee, Ju-Hun;Lee, Jun-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.39 no.12
    • /
    • pp.1160-1165
    • /
    • 2011
  • This study focuses on the structural dynamic responses, i.e., the vibration analysis results, of a high-accuracy multi-axial observation camera installed and operated on UAVs (Unmanned Aerial Vehicles), helicopters, etc. The authors newly suggest a modeling technique for the ball bearing applied to the camera using three-axis spring elements. The vibration analysis results agreed well with the random vibration test results. The vibration response characteristics of the multi-axial camera were also analyzed and evaluated through time-history analysis of the random vibration. These results can be applied to the FE modeling of ball bearings used in space cameras.
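As a rough illustration of the three-axis spring element idea, a two-node element with independent stiffnesses along the three translational axes assembles into a 6×6 stiffness matrix. The stiffness values and the diagonal (uncoupled-axis) assumption here are illustrative only; the paper's bearing model may couple the axes.

```python
import numpy as np

def three_axis_spring_stiffness(kx, ky, kz):
    """Element stiffness matrix for a two-node, three-axis spring element:
    each translational DOF of node i couples to the same DOF of node j."""
    k = np.diag([kx, ky, kz])
    return np.block([[k, -k],
                     [-k, k]])
```

The matrix is symmetric and singular, as expected: a rigid-body translation of both nodes produces no elastic force.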

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon;Oh, Hyon-Dong;Huh, Sung-Sik;Park, Bong-Gyun;Ahn, Jong-Sun;Shim, Hyun-Chul;Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.37 no.12
    • /
    • pp.1209-1216
    • /
    • 2009
  • This paper presents the pose estimation of a small UAV using visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. Computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper concludes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
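The filtering step above uses an extended Kalman filter; a linear Kalman update is sketched here instead as a simplification, since the EKF reduces to it when the measurement model is linear (e.g. triangulated marker positions observed directly). The matrices below are generic placeholders, not the paper's state-space model.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state, P: covariance, z: camera measurement."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the camera measurement
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the EKF case, F and H would be Jacobians of the nonlinear motion and camera-projection models evaluated at the current estimate.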

Development of a Reliable Real-time 3D Reconstruction System for Tiny Defects on Steel Surfaces (금속 표면 미세 결함에 대한 신뢰성 있는 실시간 3차원 형상 추출 시스템 개발)

  • Jang, Yu Jin;Lee, Joo Seob
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.12
    • /
    • pp.1061-1066
    • /
    • 2013
  • In the steel industry, detecting tiny defects on steel surfaces, including their 3D characteristics, is very important from the point of view of quality control. A multi-spectral photometric stereo method is an attractive scheme because the shape of a defect can be obtained from images acquired simultaneously with a multi-channel camera, and the calculation time can be greatly reduced for real-time application with the aid of a GPU (Graphics Processing Unit). Although more reliable shape reconstruction becomes possible as the number of available images increases, it is not easy to construct a camera system with more than 3 channels in the visible light range. In this paper, a new 6-channel camera system, which distinguishes the vertically/horizontally linearly polarized lights of RGB light sources, was developed by adopting two 3-CCD cameras and two polarizing lenses, based on the fact that polarization is preserved on the steel surface. The photometric stereo scheme with 6 images was accelerated using a GPU, and the performance of the proposed system was validated through experiments.
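The core photometric stereo computation with K ≥ 3 images can be sketched as a per-pixel least-squares solve. This is the textbook Lambertian formulation, assuming known light directions; the paper's GPU-accelerated 6-channel pipeline would add the polarization channel handling omitted here.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from K images
    under K known directional lights, solving I = L @ (rho * n).
    images: (K, H, W) intensities; light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    L = np.asarray(light_dirs)                    # (K, 3)
    I = images.reshape(K, -1)                     # (K, H*W)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, H*W): albedo-scaled normals
    rho = np.linalg.norm(G, axis=0)               # albedo per pixel
    n = G / np.maximum(rho, 1e-12)                # unit normals
    return n.reshape(3, H, W), rho.reshape(H, W)
```

With 6 images the system is overdetermined, which is exactly why more channels yield a more reliable reconstruction: least squares averages out per-channel noise.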

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik;Ahn, Seong-Je;Park, Gwang-Yeong;Cha, Jae-Sang;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11A
    • /
    • pp.1066-1072
    • /
    • 2010
  • In this paper we propose a multi-event-handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking another, the camera cannot handle two objects simultaneously; likewise, if the object moves out of the scene during tracking, the camera loses it. In the proposed method, a nearby camera takes over in each case, tracing the new object or re-detecting the lost one. The nearby camera receives the object's location from the original camera and establishes a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
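The handoff logic described above can be sketched as a small coordinator that passes an object's last known location to the nearest idle camera. The data structures and the nearest-idle policy are assumptions for illustration; the paper does not specify its assignment rule.

```python
def assign_camera(cameras, event_pos):
    """cameras: dict name -> {'pos': (x, y), 'busy': bool}.
    Return the nearest idle camera for a new event, or None if all are busy."""
    free = [((c['pos'][0] - event_pos[0]) ** 2 + (c['pos'][1] - event_pos[1]) ** 2,
             name)
            for name, c in cameras.items() if not c['busy']]
    return min(free)[1] if free else None

def handoff(cameras, obj_pos):
    """Hand an object (new, or lost by its tracker) to the nearest idle
    camera, marking it busy and recording the target location."""
    target = assign_camera(cameras, obj_pos)
    if target is not None:
        cameras[target]['busy'] = True
        cameras[target]['target'] = obj_pos
    return target
```

A real system would also convert the location between the two cameras' coordinate frames before issuing the PTZ command.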

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo;Kang, Ji-Won;Lee, Sol;Park, Jung-Tak;Choi, Jang-Hwan;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.247-257
    • /
    • 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera system using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera system requires consistent feature points, and obtaining a high-accuracy calibration requires accurate ones. We use the human skeleton, which can be easily obtained with state-of-the-art pose estimation algorithms, as the source of feature points. Specifically, we propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through pose estimation as feature points. Since the human body captured by each camera may be only partially visible, the skeleton predicted from the acquired images may be incomplete. After efficiently integrating many incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain each camera's transformation matrix. To increase calibration accuracy, multiple skeletons are used for optimization through temporal iteration. We demonstrate through experiments that a multi-view camera system can be calibrated using many incomplete skeletons.
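Once corresponding 3D joints are available in two cameras' frames, the transformation matrix between them reduces to a rigid alignment problem. A standard closed-form solution (Kabsch/Procrustes, via SVD) is sketched below; the paper's full pipeline with skeleton integration and temporal optimization is not reproduced.

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||src @ R.T + t - dst||
    over N corresponding 3D joints. src, dst: (N, 3) arrays."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Running this per camera against the integrated reference skeleton yields the camera transformation matrices the abstract refers to.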

Experimental Framework for Controller Design of a Rotorcraft Unmanned Aerial Vehicle Using Multi-Camera System

  • Oh, Hyon-Dong;Won, Dae-Yeon;Huh, Sung-Sik;Shim, David Hyun-Chul;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.2
    • /
    • pp.69-79
    • /
    • 2010
  • This paper describes an experimental framework for the control system design and validation of a rotorcraft unmanned aerial vehicle (UAV). Our approach follows the general procedure of nonlinear modeling, linear controller design, nonlinear simulation, and flight testing, but uses an indoor multi-camera system, which provides full 6-degree-of-freedom (DOF) navigation information with high accuracy, to overcome the limitations of outdoor flight experiments. In addition, a 3-DOF flying mill is used to validate the attitude control performance, considering the characteristics of the multi-rotor type rotorcraft UAV. Our framework is applied to the mathematical modeling and control system design of a quad-rotor UAV selected as the test-bed vehicle, and the controller design using the classical proportional-integral-derivative (PID) control method is explained. The experimental results showed that the proposed approach is a successful tool for developing controllers of new rotorcraft UAVs with reduced cost and time.
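The classical PID law mentioned above has a standard discrete form; the sketch below is a generic single-axis controller, not the paper's tuned design, and the gains in the usage would come from the linearized quad-rotor model.

```python
class PID:
    """Classical proportional-integral-derivative controller for one axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement, dt):
        """Return the control command for one time step of length dt."""
        err = setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

One such controller per attitude axis (roll, pitch, yaw), fed by the multi-camera navigation estimates, matches the loop structure the abstract describes.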