• Title/Summary/Keyword: Multiple camera


Development of a High Resolution Cinematic Particle Image Velocimetry and Its Application to Measurement of Unsteady Complex Turbulent Flows (고분해능 Cinematic PIV 시스템의 개발과 비정상 복잡 난류유동측정에의 응용)

  • Kim, Kyung-Chun;Park, Kyung-Hyun
    • Proceedings of the KSME Conference
    • /
    • 2001.06e
    • /
    • pp.536-541
    • /
    • 2001
  • A high resolution digital cinematic Particle Image Velocimetry (PIV) system has been developed. The system consists of a high-speed CCD camera, a continuous Ar-ion laser, and a computer with a camera controller. To improve the spatial resolution, we adopt a recursive technique for velocity interrogation. First, we obtain a velocity vector for a larger interrogation window size based on the conventional two-frame cross-correlation PIV analysis using the FFT algorithm. Based on the known velocity information, more spatially resolved velocity vectors are obtained in the next iteration step with smaller interrogation windows. The correct velocity vector at the first step is found to be critical, so we apply a Multiple Correlation Validation (MCV) technique in order to decrease spurious vectors. The MCV technique turns out to improve the SNR (signal-to-noise ratio) of the correlation table. The developed cinematic PIV method has been applied to the measurement of the unsteady flow characteristics of a Rushton turbine mixer. A total of 3,245 instantaneous velocity vectors were successfully obtained with 4 ms time resolution. The acquired spatial resolution corresponds to the performance of a conventional high resolution digital PIV system using a 1K × 1K CCD camera.

  • PDF
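
The recursive interrogation described in this entry reduces to FFT-based two-frame cross-correlation repeated with progressively smaller windows. The following is a minimal, illustrative Python sketch of that idea only (a single interrogation point, integer-pixel peaks, no sub-pixel fitting, and the MCV validation omitted); it is not the authors' implementation, and all function names and parameter values are hypothetical.

```python
import numpy as np

def cross_correlate(win_a, win_b):
    """FFT-based cross-correlation of two interrogation windows."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    corr = np.fft.irfft2(fa.conj() * fb, s=win_a.shape)
    return np.fft.fftshift(corr)

def displacement(win_a, win_b):
    """Integer-pixel displacement of the correlation peak from the window centre."""
    corr = cross_correlate(win_a, win_b)
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    return peak - np.array(corr.shape) // 2

def recursive_interrogation(frame_a, frame_b, window=64, levels=2):
    """Coarse-to-fine interrogation of one point: each pass halves the window
    and offsets the second frame by the displacement found at the coarser pass."""
    dy = dx = 0
    for _ in range(levels):
        shifted = np.roll(np.roll(frame_b, -dy, axis=0), -dx, axis=1)
        h, w = frame_a.shape
        y0, x0 = (h - window) // 2, (w - window) // 2
        d = displacement(frame_a[y0:y0 + window, x0:x0 + window],
                         shifted[y0:y0 + window, x0:x0 + window])
        dy, dx = dy + int(d[0]), dx + int(d[1])
        window //= 2
    return dy, dx
```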

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.13-18
    • /
    • 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene's depth in real time, the output depth images contain noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. Therefore, it is essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
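
As an illustration of how TOF depth can guide stereo matching, the sketch below converts a corrected, warped depth map into an initial disparity and restricts block matching to a narrow band around it. This is a toy example, not the paper's method; the calibration parameters (focal_px, baseline_mm), the SAD cost, and the search radius are placeholder assumptions.

```python
import numpy as np

def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Convert a (corrected, warped) TOF depth map into an initial disparity map."""
    with np.errstate(divide="ignore", invalid="ignore"):
        disp = focal_px * baseline_mm / depth_mm.astype(float)
    disp[~np.isfinite(disp)] = 0.0
    return disp

def guided_block_match(left, right, init_disp, radius=2, block=5):
    """Block matching (SAD cost) whose search range is restricted to a narrow
    band around the TOF-derived disparity instead of the full epipolar line."""
    h, w = left.shape
    half = block // 2
    out = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            d0 = int(round(init_disp[y, x]))
            best_cost, best_d = np.inf, d0
            for d in range(max(0, d0 - radius), d0 + radius + 1):
                if x - d - half < 0:
                    continue
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            out[y, x] = best_d
    return out
```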

Viewpoint Invariant Person Re-Identification for Global Multi-Object Tracking with Non-Overlapping Cameras

  • Gwak, Jeonghwan;Park, Geunpyo;Jeon, Moongu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2075-2092
    • /
    • 2017
  • Person re-identification is the task of matching pedestrians observed from non-overlapping camera views. It has important applications in video surveillance such as person retrieval, person tracking, and activity analysis. However, it is a very challenging problem due to illumination, pose, and viewpoint variations between non-overlapping camera views. In this work, we propose a viewpoint-invariant method for matching pedestrian images using the orientation of the pedestrian. First, the proposed method divides a pedestrian image into patches and assigns an angle to each patch using the orientation of the pedestrian, under the assumption that the human body has a cylindrical shape. The differences between angles are then used to compute the similarity between patches. We applied the proposed method to real-time global multi-object tracking across multiple disjoint cameras with non-overlapping fields of view. The re-identification algorithm constructs global trajectories by connecting local trajectories obtained by different local trackers. The effectiveness of the viewpoint-invariant method for person re-identification was validated on the VIPeR dataset. In addition, we demonstrated the effectiveness of the proposed approach for inter-camera multiple object tracking on the MCT dataset with ground-truth data for local tracking.
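
A rough sketch of the cylindrical-body idea: each patch column receives a surface angle derived from the pedestrian's estimated orientation, and patch pairs whose angles disagree are down-weighted when the similarity is computed. The feature representation, the Gaussian weighting in the angle difference, and the parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def patch_angles(num_columns, orientation_deg):
    """Assign a surface angle to each horizontal patch column, assuming the
    body is a cylinder whose heading is `orientation_deg`; the visible half
    of the cylinder spans roughly -90..+90 degrees around that heading."""
    offsets = np.linspace(-90.0, 90.0, num_columns)
    return (orientation_deg + offsets) % 360.0

def angular_weight(angles_a, angles_b, sigma=30.0):
    """Down-weight patch pairs whose surface angles disagree."""
    diff = np.abs((angles_a - angles_b + 180.0) % 360.0 - 180.0)
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

def image_similarity(feats_a, feats_b, angles_a, angles_b):
    """Angle-weighted similarity between two pedestrian images, with one
    feature vector per patch column (shape: columns x feature_dim)."""
    w = angular_weight(angles_a, angles_b)
    dist = np.linalg.norm(feats_a - feats_b, axis=-1)
    return float(np.sum(w / (1.0 + dist)) / np.sum(w))
```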

Development of a High Resolution Digital Cinematic Particle Image Velocimetry (고해상도 Cinematic PIV의 개발)

  • Park, Gyeong-Hyeon;Kim, Gyeong-Cheon
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.25 no.11
    • /
    • pp.1535-1542
    • /
    • 2001
  • A high resolution digital cinematic Particle Image Velocimetry (PIV) system has been developed. The system consists of a high-speed CCD camera, a continuous Ar-ion laser, and a computer with a camera controller. To improve the spatial resolution, we adopt a recursive technique for velocity interrogation. First, we obtain a velocity vector for a larger interrogation window size based on the conventional two-frame cross-correlation PIV analysis using the FFT algorithm. Based on the known velocity information, more spatially resolved velocity vectors are obtained in the next iteration step with smaller interrogation windows. Since the correct velocity vector at the first step is critical, a Multiple Correlation Validation (MCV) technique is applied to decrease spurious vectors. The MCV technique turns out to improve the SNR (signal-to-noise ratio) of the correlation table. The developed cinematic PIV method has been applied to the measurement of the unsteady flow characteristics of a Rushton turbine mixer. A total of 3,245 instantaneous velocity vectors were successfully obtained with 4 ms time resolution. The acquired spatial resolution corresponds to that of a conventional high resolution digital PIV system using a 1K × 1K CCD camera.
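
The recursive interrogation itself is sketched after the conference version of this paper above. For the validation step, one common way to suppress spurious correlation peaks is to combine the correlation tables of neighbouring interrogation windows element-wise; the sketch below illustrates only that general idea, which may differ from the authors' MCV scheme, and the SNR measure is a crude placeholder.

```python
import numpy as np

def combined_correlation(corr_center, corr_neighbors):
    """Element-wise product of the centre window's correlation table with the
    positive part of its neighbours' tables: a true displacement peak shared by
    adjacent windows is reinforced, while random noise peaks are suppressed."""
    combined = np.clip(corr_center, 0, None).copy()
    for c in corr_neighbors:
        combined *= np.clip(c, 0, None)
    return combined

def peak_to_mean_ratio(corr):
    """Crude SNR measure of a correlation table: highest peak over table mean."""
    return float(corr.max() / max(corr.mean(), 1e-12))
```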

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, or sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and computes depth information for omni-directional images slowly. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.
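
To make the fusion step concrete, the sketch below turns a 2D laser scan into floor-plane obstacle points, attaches heights taken from the obstacle outline in the fisheye image, and shows one possible projection of a plane point into a downward-facing fisheye image under an equidistant lens model. The camera model, calibration parameters, and the way heights are obtained are illustrative assumptions, not the paper's calibration or fusion procedure.

```python
import numpy as np

def laser_to_plane(ranges, bearings):
    """2D laser scan (range, bearing) -> obstacle points on the floor plane."""
    return np.stack([ranges * np.cos(bearings), ranges * np.sin(bearings)], axis=1)

def plane_to_fisheye_pixel(x, y, cam_height, f_px, cx, cy):
    """Project a floor-plane point into a downward-facing fisheye image using
    an equidistant model (r = f * theta); calibration values are placeholders."""
    theta = np.arctan2(np.hypot(x, y), cam_height)   # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f_px * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def fuse_points(plane_pts, outline_heights):
    """Attach a height taken from the obstacle outline in the fisheye image to
    each laser point, giving 3D fusion points for the map."""
    z = np.asarray(outline_heights, dtype=float).reshape(-1, 1)
    return np.hstack([plane_pts, z])
```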

Sector Based Multiple Camera Collaboration for Active Tracking Applications

  • Hong, Sangjin;Kim, Kyungrog;Moon, Nammee
    • Journal of Information Processing Systems
    • /
    • v.13 no.5
    • /
    • pp.1299-1319
    • /
    • 2017
  • This paper presents a scalable multiple-camera collaboration strategy for active tracking applications in large areas. The proposed approach is based on a distributed mechanism but emulates a master-slave mechanism. The master and slave cameras are not designated in advance but are adaptively determined depending on the object dynamics and density distribution. Moreover, the number of cameras emulating the master is not fixed. The collaboration among the cameras utilizes global and local sectors in which the visual correspondences among different cameras are determined. The proposed method combines the local information to construct the global information for emulating the master-slave operations. Based on the global information, load balancing of the active tracking operations is performed to maximize the active tracking coverage of highly dynamic objects. The dynamics of all objects visible in the local camera views are estimated for effective coverage scheduling of the cameras. The active tracking synchronization timing is chosen to maximize the overall monitoring time for general surveillance operations while minimizing active tracking misses. The real-time simulation result demonstrates the effectiveness of the proposed method.
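
One simple way to picture the sector-based load balancing is a greedy assignment of objects, keyed by their global sector, to the least-loaded camera that covers that sector. The sketch below illustrates only that scheduling idea, not the paper's collaboration protocol; the sector and camera identifiers are made up for the example.

```python
from collections import defaultdict

def assign_active_tracking(objects, camera_sectors):
    """Greedy load balancing over global sectors.
    objects        : list of (object_id, sector_id) pairs
    camera_sectors : dict camera_id -> set of sector_ids the camera can cover
    Returns a dict camera_id -> list of object_ids (per-camera schedule)."""
    schedule = defaultdict(list)
    for obj_id, sector in objects:
        candidates = [c for c, sectors in camera_sectors.items() if sector in sectors]
        if not candidates:
            continue  # object lies outside every camera's active-tracking coverage
        least_loaded = min(candidates, key=lambda c: len(schedule[c]))
        schedule[least_loaded].append(obj_id)
    return dict(schedule)

# Example: three cameras with overlapping sector coverage, five objects.
cameras = {"cam1": {1, 2}, "cam2": {2, 3}, "cam3": {3, 4}}
objects = [("o1", 1), ("o2", 2), ("o3", 2), ("o4", 3), ("o5", 4)]
print(assign_active_tracking(objects, cameras))
```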

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik;Ahn, Seong-Je;Park, Gwang-Yeong;Cha, Jae-Sang;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11A
    • /
    • pp.1066-1072
    • /
    • 2010
  • In this paper, we propose a multi-event handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an existing one, that camera cannot handle the two objects simultaneously. In a second case, where the object moves out of the scene during tracking, the camera loses the object. In the proposed method, a nearby camera takes over in each case, tracking the new object or re-detecting the lost one. The nearby camera receives the object's location information from the original camera and establishes a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
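
The camera-to-camera handoff can be illustrated with a small sketch: when the camera assigned to an event is already busy, the nearest idle PTZ camera receives the object's location and takes over. The class and function names are hypothetical, and the selection rule (nearest idle camera on a shared ground plane) is a simplification of the proposed event linking.

```python
import math

class PTZCamera:
    def __init__(self, cam_id, position):
        self.cam_id = cam_id
        self.position = position   # (x, y) in a shared ground-plane frame
        self.target = None         # object currently being tracked, if any

    def is_busy(self):
        return self.target is not None

def hand_off(cameras, obj_id, location):
    """If the camera responsible for a new event is busy, pass the object and
    its location to the nearest idle camera so tracking continues there."""
    idle = [c for c in cameras if not c.is_busy()]
    if not idle:
        return None                # all cameras occupied; event cannot be linked
    receiver = min(idle, key=lambda c: math.dist(c.position, location))
    receiver.target = (obj_id, location)   # location handed over from the busy camera
    return receiver.cam_id
```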

Multiple objects focusing based on image segmentation using radius of PSF (점확산함수 반지름을 사용한 영상분할 기반 다중객체 자동초점)

  • 김기만;황성현;신정호;백준기
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.7-10
    • /
    • 2003
  • This paper proposes a multiple-object focusing algorithm. Given multiple objects at different distances from a camera, we assume that one object is well focused and the others are out of focus. The proposed auto-focusing algorithm is summarized as follows: (i) detects edges in an input image, (ii) estimates the radius of the PSF (Point Spread Function) across each edge, (iii) gathers edge points having the same radius of PSF, (iv) segments the image into regions with the same radius of PSF, and (v) restores each segmented region using the corresponding PSF.

  • PDF
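
Steps (ii) and (iii) can be approximated as below: a per-pixel blur estimate at edges (here, a crude two-scale gradient ratio standing in for the PSF-radius estimate) is quantized so that edge points with similar blur fall into the same group. The estimator, thresholds, and group count are illustrative assumptions; the segmentation and per-region restoration steps are omitted.

```python
import numpy as np
from scipy import ndimage

def edge_blur_estimate(image, sigma_fine=1.0, sigma_coarse=3.0, eps=1e-6):
    """Rough per-pixel blur proxy at edges: the ratio of gradient magnitudes at
    a coarse and a fine Gaussian scale grows with the local blur, standing in
    here for the PSF-radius estimate of step (ii)."""
    img = image.astype(float)
    g_fine = ndimage.gaussian_gradient_magnitude(img, sigma_fine)
    g_coarse = ndimage.gaussian_gradient_magnitude(img, sigma_coarse)
    edges = g_fine > 0.1 * g_fine.max()
    ratio = np.where(edges, g_coarse / (g_fine + eps), 0.0)
    return ratio, edges

def group_edges_by_blur(ratio, edges, num_groups=4):
    """Quantize the blur estimate so edge points with a similar value end up in
    the same group (step (iii)); roughly one group per object/depth layer."""
    values = ratio[edges]
    bins = np.quantile(values, np.linspace(0.0, 1.0, num_groups + 1))
    labels = np.zeros(ratio.shape, dtype=int)
    labels[edges] = np.digitize(values, bins[1:-1]) + 1   # labels 1..num_groups
    return labels
```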

Multiple Moving Person Tracking Based on the IMPRESARIO Simulator

  • Kim, Hyun-Deok;Jin, Tae-Seok
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.3
    • /
    • pp.331-336
    • /
    • 2008
  • In this paper, we propose a real-time people tracking system with multiple CCD cameras for security inside a building. To achieve this goal, we present a method for 3D walking-human tracking based on the IMPRESARIO framework, incorporating cascaded classifiers into hypothesis evaluation. The efficiency of adaptive selection of cascaded classifiers is also presented. The camera is mounted on the ceiling of the laboratory so that the image data of passing people are fully overlapped. The implemented system recognizes people moving in various directions. To track people even when their images are partially overlapped, the proposed system estimates and tracks a bounding box enclosing each person in the tracking region. The approximated convex hull of each individual in the tracking area is obtained to provide more accurate tracking information. We have shown the improvement in reliability of the likelihood calculation by using cascaded classifiers. Experimental results show that the proposed method can smoothly and effectively detect and track walking humans through environments such as dense forests.
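
For the bounding-box and convex-hull part of the tracker, a minimal sketch over a labeled foreground mask is given below; the cascaded-classifier hypothesis evaluation and the IMPRESARIO integration are not reproduced, and the blob-size threshold is an arbitrary placeholder.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def person_regions(foreground_mask, min_pixels=200):
    """From a binary foreground mask, return each person's bounding box and the
    approximated convex hull of the blob (the quantities used for tracking)."""
    labels, count = ndimage.label(foreground_mask)
    regions = []
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size < min_pixels:
            continue                          # discard small noise blobs
        bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
        pts = np.column_stack([xs, ys])
        hull = pts[ConvexHull(pts).vertices]  # hull vertices in CCW order
        regions.append({"bbox": bbox, "hull": hull})
    return regions
```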

Multiple Pedestrians Detection and Tracking using Histogram and Color Information from a Moving Camera (이동 카메라 영상에서 히스토그램과 컬러 정보를 이용한 다수 보행자 검출 및 추적)

  • 임종석;곽현욱;김욱현
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.193-202
    • /
    • 2004
  • This paper presents a novel histogram- and color-information-based algorithm for detecting and tracking multiple pedestrians from a moving camera. In the proposed method, an RGB color histogram is used to detect adjacent pedestrians, and the RGB mean value is used to track the detected pedestrians. Therefore, our algorithm can detect contiguous or partially occluded pedestrians and track them even when a pedestrian's shape changes. The experimental results on our test sequences demonstrate the high efficiency of our method.
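
A small sketch of the two ingredients named in the abstract: a normalized RGB histogram, compared with a Bhattacharyya-style distance, for separating adjacent pedestrians, and the RGB mean for frame-to-frame matching. The bin count and distance measures are illustrative choices, not necessarily the paper's exact ones.

```python
import numpy as np

def rgb_histogram(patch, bins=8):
    """Joint RGB histogram of an image patch (H x W x 3, 8-bit), normalized."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1.0)

def histogram_distance(h1, h2):
    """Bhattacharyya-style distance between two normalized histograms."""
    return 1.0 - float(np.sqrt(h1 * h2).sum())

def match_by_mean_color(prev_mean_rgb, candidate_patches):
    """Frame-to-frame tracking step: pick the candidate patch whose RGB mean is
    closest to the pedestrian's mean color from the previous frame."""
    means = [p.reshape(-1, 3).mean(axis=0) for p in candidate_patches]
    dists = [np.linalg.norm(np.asarray(prev_mean_rgb) - m) for m in means]
    return int(np.argmin(dists))
```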