• Title/Summary/Keyword: multiple cameras


A leak detection and 3D source localization method on a plant piping system by using multiple cameras

  • Kim, Se-Oh;Park, Jae-Seok;Park, Jong Won
    • Nuclear Engineering and Technology
    • /
    • v.51 no.1
    • /
    • pp.155-162
    • /
    • 2019
  • To reduce the secondary damage caused by leakage accidents in plant piping systems, constant surveillance is necessary. To ensure leaks are promptly addressed, the surveillance system should detect not only the leak itself but also its location. Recently, camera-based methods have been studied for detecting leakage and estimating its location; however, existing methods only determine whether a leak exists, or provide at most the two-dimensional coordinates of the leak. In this paper, a method using multiple cameras to detect leakage and estimate the three-dimensional coordinates of the leak location is presented. Each camera detects leakage using MADI (Moving Average Differential Image) and histogram analysis, and the two-dimensional leak location is estimated from the detected leakage area. The three-dimensional location is then derived from these two-dimensional estimates: the (x, z) coordinates of the leak are computed on a horizontal section (the XZ plane) of the monitoring area, and the y-coordinate is computed from a vertical section of each camera view. The proposed method could accurately estimate the three-dimensional location of a leak using multiple cameras. (A minimal sketch of the per-camera detection step follows this entry.)
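
The abstract does not give implementation details of MADI, so the following is a minimal sketch of one plausible reading: keep a running average of grayscale frames, threshold the absolute difference against it, and report the centroid of a sufficiently large changed region as the 2D leak candidate. The function name `detect_leak_madi` and the parameters `alpha`, `diff_thresh`, and `area_thresh` are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def detect_leak_madi(frames, alpha=0.05, diff_thresh=25, area_thresh=500):
    """Flag frames whose moving-average differential image contains a large
    changed region (a rough stand-in for leak-induced motion such as steam).
    `frames` is a list of single-channel (grayscale) images."""
    avg = frames[0].astype(np.float32)
    detections = []
    for frame in frames[1:]:
        gray = frame.astype(np.float32)
        # Moving Average Differential Image: |current frame - running average|
        diff = cv2.absdiff(gray, avg)
        cv2.accumulateWeighted(gray, avg, alpha)   # update the running average
        mask = (diff > diff_thresh).astype(np.uint8)
        # Crude histogram-style check: how many pixels changed strongly?
        if int(mask.sum()) > area_thresh:
            ys, xs = np.nonzero(mask)
            detections.append((float(xs.mean()), float(ys.mean())))  # 2D location
        else:
            detections.append(None)
    return detections
```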

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Taemin, Hwang;Jieun, Kim;Minjoon, Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.559-575
    • /
    • 2023
  • 3D human pose estimation is widely applied in fields such as action recognition, sports analysis, and human-computer interaction, and it has made significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its own view and sends it to the central server. The central server then synchronizes the received 2D pose data based on the timestamps and reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time. (A sketch of the synchronization and triangulation steps follows this entry.)
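
As a rough sketch of the server-side steps described above, the snippet below pairs per-camera 2D pose packets by nearest timestamp and triangulates each joint with a standard linear (DLT) method. The tolerance `tol`, the packet layout, and the helper names are assumptions for illustration; the framework's actual synchronization and reconstruction details are not specified in the abstract.

```python
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """Linear (DLT) triangulation of one joint seen by several cameras.
    proj_mats: list of 3x4 projection matrices; points_2d: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates

def sync_by_timestamp(streams, tol=0.02):
    """Group per-camera 2D pose packets whose timestamps fall within `tol` seconds.
    streams: dict camera_id -> list of (timestamp, joints), sorted by time."""
    ref_id = next(iter(streams))
    groups = []
    for t_ref, joints_ref in streams[ref_id]:
        group = {ref_id: joints_ref}
        for cam, packets in streams.items():
            if cam == ref_id:
                continue
            # pick the packet closest in time to the reference camera's packet
            t, joints = min(packets, key=lambda p: abs(p[0] - t_ref))
            if abs(t - t_ref) <= tol:
                group[cam] = joints
        groups.append((t_ref, group))
    return groups
```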

Sector Based Multiple Camera Collaboration for Active Tracking Applications

  • Hong, Sangjin;Kim, Kyungrog;Moon, Nammee
    • Journal of Information Processing Systems
    • /
    • v.13 no.5
    • /
    • pp.1299-1319
    • /
    • 2017
  • This paper presents a scalable multiple-camera collaboration strategy for active tracking applications in large areas. The proposed approach is based on a distributed mechanism but emulates master-slave operation: master and slave cameras are not designated in advance but are determined adaptively according to object dynamics and density distribution, and the number of cameras acting as masters is not fixed. Collaboration among the cameras uses global and local sectors in which visual correspondences among the different cameras are determined, and local information is combined into global information to emulate the master-slave operations. Based on this global information, the active tracking workload is load-balanced to maximize coverage of highly dynamic objects. The dynamics of all objects visible in the local camera views are estimated for effective coverage scheduling of the cameras, and the active tracking synchronization timing is chosen to maximize the overall monitoring time for general surveillance operations while minimizing active tracking misses. Real-time simulation results demonstrate the effectiveness of the proposed method. (A toy sketch of density-driven master selection follows this entry.)
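
The paper's sector mechanism is only outlined in the abstract, so the toy sketch below merely illustrates the idea of density-driven master selection: count tracked objects per sector and let the camera covering the most heavily loaded sectors act as the temporary master. The data layout (`object_positions`, `camera_sectors`) is assumed for illustration and is not the paper's representation.

```python
from collections import defaultdict

def assign_master_camera(object_positions, camera_sectors):
    """Toy illustration of density-driven master selection: the camera whose
    covered sectors currently hold the most objects temporarily acts as 'master'.
    object_positions: dict object_id -> sector_id
    camera_sectors:   dict camera_id -> iterable of sector_ids it covers"""
    sector_load = defaultdict(int)
    for _obj_id, sector in object_positions.items():
        sector_load[sector] += 1
    camera_load = {
        cam: sum(sector_load[s] for s in sectors)
        for cam, sectors in camera_sectors.items()
    }
    master = max(camera_load, key=camera_load.get)
    return master, camera_load
```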

Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement (사람의 움직임 추적에 근거한 다중 카메라의 시공간 위상 학습)

  • Nam, Yun-Young;Ryu, Jung-Hun;Choi, Yoo-Joo;Cho, We-Duke
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.7
    • /
    • pp.488-498
    • /
    • 2007
  • This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs) in a Ubiquitous Smart Space (USS). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully across multiple camera views, we used the Merge-Split (MS) approach to handle object occlusion in a single camera and a grid-based approach to extract accurate object features. In addition, we considered the appearance of people and the transition times between entry and exit zones to track objects across the blind regions between cameras with non-overlapping FOVs. The main contribution of this paper is estimating the transition times between the various entry and exit zones and representing the camera topology as an undirected weighted graph using the transition probabilities. (A minimal topology-learning sketch follows this entry.)
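
A minimal sketch of the topology-learning idea: accumulate observed crossings between exit and entry zones, then turn the counts into an undirected weighted graph whose edge weights are transition probabilities, keeping mean transit times alongside. The zone names and the exact weighting are assumptions; the paper's estimation procedure may differ.

```python
from collections import defaultdict

def build_topology_graph(transitions):
    """Build an undirected weighted graph between exit/entry zones from observed
    (zone_a, zone_b, transit_seconds) crossings of blind regions.
    Edge weight = transition probability; the mean transit time is kept as well."""
    counts = defaultdict(int)
    times = defaultdict(list)
    for a, b, dt in transitions:
        edge = tuple(sorted((a, b)))      # undirected edge
        counts[edge] += 1
        times[edge].append(dt)
    total = sum(counts.values())
    return {
        edge: {
            "probability": n / total,
            "mean_transit_s": sum(times[edge]) / n,
        }
        for edge, n in counts.items()
    }

# Example: people crossing between camera zones (hypothetical observations)
obs = [("cam1_exit", "cam2_entry", 4.2), ("cam1_exit", "cam2_entry", 5.1),
       ("cam2_exit", "cam3_entry", 7.8)]
print(build_topology_graph(obs))
```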

Summarization of Soccer Video based on Multiple Cameras Using Dynamic Bayesian Network (동적 베이지안 네트워크를 이용한 다중 카메라기반 축구 비디오 요약)

  • Min, Jun-Ki;Park, Han-Saem;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.567-571
    • /
    • 2009
  • Sports broadcasting systems use multiple video cameras to offer exciting and dynamic scenes to TV audiences. However, because the traditional broadcasting workflow edits the multiple views into a single static video stream, it is difficult to provide intelligent broadcasting services that summarize or retrieve specific scenes or events according to user preference. In this paper, we propose a summarization and retrieval system for soccer videos based on multiple cameras. It extracts highlights such as shots on goal, crossings, fouls, and set pieces using a dynamic Bayesian network over the players' primitive behaviors annotated on the videos, and selects a proper view for each highlight according to its type. The proposed system therefore offers highlight summarization and preferred-view selection, and can provide personalized broadcasting services by considering the user's preference. (A simplified view-selection sketch follows this entry.)
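
The snippet below is not the paper's dynamic Bayesian network; it only sketches the downstream step of assembling a personalized summary once highlight events have been detected, mapping each highlight type to a preferred camera view. The event names follow the abstract, while the view labels, the `HIGHLIGHT_VIEW` table, and the record format are assumptions.

```python
# Hypothetical mapping from highlight type to preferred camera view
HIGHLIGHT_VIEW = {
    "shot_on_goal": "goal_cam",
    "crossing": "side_cam",
    "foul": "close_up_cam",
    "set_piece": "wide_cam",
}

def summarize(highlights, user_pref=None):
    """Assemble a summary: keep highlights matching the user's preferred event
    types (or all of them, if no preference is given) and attach a view to each.
    highlights: list of (start_frame, end_frame, event_type)."""
    summary = []
    for start, end, event in highlights:
        if user_pref and event not in user_pref:
            continue
        summary.append({"start": start, "end": end, "event": event,
                        "view": HIGHLIGHT_VIEW.get(event, "wide_cam")})
    return summary

clips = summarize([(120, 180, "shot_on_goal"), (400, 430, "foul")],
                  user_pref={"shot_on_goal"})
print(clips)
```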


Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.1
    • /
    • pp.39-46
    • /
    • 2006
  • We propose a global color model-based method for tracking the motions of multiple humans using a networked multiple-camera system in the Intelligent Space, a human-robot coexistent system. The Intelligent Space is a space in which many intelligent devices, such as computers and sensors (for example, color CCD cameras), are distributed; human beings can be a part of it as well. One of its main goals is to assist humans and provide various services for them, which requires the space to perform a range of human-related tasks, including identifying and tracking multiple objects seamlessly. In an environment where many camera modules are distributed over a network, identifying an object is essential for tracking it, because different cameras may be needed as the object moves through the space and the Intelligent Space must determine the appropriate one. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the Intelligent Space. We first discuss how object color information is obtained and how the color appearance model is constructed from these data, and then describe the global color model built from the local color information. The learning process within the global model and the experimental results are also presented. (A sketch of the local-to-global color model follows this entry.)
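
A minimal sketch, assuming OpenCV, of how a local color observation could feed a global appearance model: each camera computes a Hue-Saturation histogram of the person patch, the server blends it into a running global histogram, and identities are matched by histogram distance. The blending rate and the Bhattacharyya-distance matching are illustrative choices, not the paper's exact learning rule.

```python
import cv2
import numpy as np

def local_color_hist(bgr_patch, bins=(16, 16)):
    """Hue-Saturation histogram of one camera's view of a person (local model)."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1)
    return hist

def update_global_model(global_hist, local_hist, rate=0.1):
    """Blend a new local observation into the running global color model."""
    if global_hist is None:
        return local_hist.copy()
    return (1.0 - rate) * global_hist + rate * local_hist

def identify(global_models, local_hist):
    """Return the identity whose global model best matches the observation."""
    scores = {pid: cv2.compareHist(g, local_hist, cv2.HISTCMP_BHATTACHARYYA)
              for pid, g in global_models.items()}
    return min(scores, key=scores.get)   # smaller Bhattacharyya distance = closer
```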

Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.1
    • /
    • pp.39-45
    • /
    • 2019
  • We propose an active color model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an IoT space, a human-robot coexistent system. An IoT space is a space in which many intelligent devices, such as computers and sensors (for example, color CCD cameras), are distributed; human beings can be a part of it as well. One of its main goals is to assist humans and provide various services for them, which requires the space to perform a range of human-related tasks, including identifying and tracking multiple objects seamlessly. In an environment where many camera modules are distributed over a network, identifying an object is essential for tracking it, because different cameras may be needed as the object moves through the space and the IoT space must determine the appropriate one. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the IoT space. We first discuss how object color information is obtained and how the color appearance model is constructed from these data, and then describe the global color model built from the local color information. The learning process within the global model and the experimental results are also presented. (A sketch of the camera hand-off decision this entails follows this entry.)
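
Complementing the color-model sketch under the previous entry, the snippet below illustrates the hand-off decision mentioned here: given an object's estimated position, pick the camera whose coverage contains it. The circular coverage model and the `cameras` dictionary layout are simplifying assumptions made for illustration.

```python
import math

def best_camera(object_xy, cameras):
    """Pick the camera that should continue tracking an object as it moves:
    among cameras whose (assumed circular) coverage contains the object,
    choose the closest one.
    cameras: dict camera_id -> {"pos": (x, y), "range": radius_in_m}."""
    ox, oy = object_xy
    candidates = []
    for cam_id, cam in cameras.items():
        cx, cy = cam["pos"]
        dist = math.hypot(ox - cx, oy - cy)
        if dist <= cam["range"]:
            candidates.append((dist, cam_id))
    if not candidates:
        return None                      # object is currently in a blind region
    return min(candidates)[1]

cams = {"cam_a": {"pos": (0, 0), "range": 5.0},
        "cam_b": {"pos": (8, 0), "range": 5.0}}
print(best_camera((6.5, 0.5), cams))     # -> "cam_b"
```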

A Surveillance System Combining Model-based Multiple Person Tracking and Non-overlapping Cameras (모델기반 다중 사람추적과 다수의 비겹침 카메라를 결합한 감시시스템)

  • Lee, Youn-Mi;Lee, Kyoung-Mi
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.4
    • /
    • pp.241-253
    • /
    • 2006
  • In modern societies, monitoring systems are required to automatically detect and track people across several cameras scattered over a wide area. Combining multiple cameras with non-overlapping views and a tracking technique, we propose a method that automatically tracks target persons in one camera and transfers the tracking information to the other networked cameras through a server, so that the target persons are tracked continuously across the cameras. We use a person model to detect and distinguish the corresponding person and to transfer the person's tracking information. The movement of the tracked persons is defined on the FOV lines of the networked cameras, and each tracked person has one of six statuses. The proposed system was tested in several indoor scenarios and achieved an average tracking rate of 91.2% and an average status rate of 96%. (An illustrative status-transition sketch follows this entry.)
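
The abstract states that each tracked person has six statuses but does not name them, so the enum below uses assumed labels purely to illustrate how a server-side tracker might move a person between statuses as cameras report observations around the FOV lines. It is an illustrative state machine, not the paper's definition.

```python
from enum import Enum, auto

class TrackStatus(Enum):
    """Illustrative track statuses for hand-off across non-overlapping cameras.
    The paper defines six statuses; these labels are assumptions."""
    ENTERED = auto()        # person appeared inside a camera view
    TRACKED = auto()        # actively tracked in the current camera
    NEAR_FOV_LINE = auto()  # approaching the boundary (FOV line) of the view
    EXITED = auto()         # left the view, expected in a neighbouring camera
    REACQUIRED = auto()     # matched again by another camera via the person model
    LOST = auto()           # not re-detected within the expected transit time

def next_status(status, in_view, near_boundary, matched_elsewhere, timed_out):
    """Tiny transition rule set illustrating how a server might update a
    person's status as cameras report observations."""
    if status in (TrackStatus.ENTERED, TrackStatus.TRACKED, TrackStatus.NEAR_FOV_LINE):
        if not in_view:
            return TrackStatus.EXITED
        return TrackStatus.NEAR_FOV_LINE if near_boundary else TrackStatus.TRACKED
    if status == TrackStatus.EXITED:
        if matched_elsewhere:
            return TrackStatus.REACQUIRED
        return TrackStatus.LOST if timed_out else TrackStatus.EXITED
    return status
```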

High Resolution 360 degree Video Generation System using Multiple Cameras (다수의 카메라를 이용한 고해상도 360도 동영상 생성 시스템)

  • Jeong, Jinwook;Jun, Kyungkoo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1329-1336
    • /
    • 2016
  • This paper develops a 360 degree video system using multiple off-the-shelf webcams and a set of embedded boards. Existing 360 degree cameras have the shortcoming that they do not support real-time video generation, since recorded videos must be copied to computers or smartphones, which then perform the stitching. Another shortcoming is that wide FoV (Field of View) cameras cannot provide sufficiently high resolution, and the resulting images are visually distorted, bending straight lines. By employing an array of 65 degree FoV webcams, we were able to generate videos on the spot and achieve over 6K resolution with much less distortion. We describe the configuration and algorithms of the proposed system and present performance evaluation results for an early-stage prototype. (A generic stitching sketch follows this entry.)
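
As a generic stand-in for the stitching stage (the paper's own pipeline and calibration are not described in the abstract), the sketch below grabs one frame per webcam and stitches the frames with OpenCV's built-in panorama stitcher. The camera indices and the per-frame stitcher creation are illustrative; a real-time system would calibrate once and reuse the estimated transforms.

```python
import cv2

def stitch_panorama_frame(frames):
    """Stitch one set of simultaneously captured frames (one per webcam)
    into a single wide panorama using OpenCV's generic stitcher."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return pano

# Example: grab one frame from each webcam and stitch them (4 cameras assumed)
caps = [cv2.VideoCapture(i) for i in range(4)]
frames = []
for cap in caps:
    ok, frame = cap.read()
    if ok:
        frames.append(frame)
if len(frames) >= 2:
    panorama = stitch_panorama_frame(frames)
    cv2.imwrite("panorama.jpg", panorama)
```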

An Efficient Real-Time Image Reconstruction Scheme using Network in Multiple View and Multiple Cluster Environments (다시점 및 다중클러스터 환경에서 네트워크를 이용한 효율적인 실시간 영상 합성 기법)

  • You, Kang-Soo;Lim, Eun-Cheon;Sim, Chun-Bo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.11
    • /
    • pp.2251-2259
    • /
    • 2009
  • We propose an algorithm and system that generates 3D stereo images by composing 2D images from 4 network-based clusters, where each cluster consists of 4 cameras. The proposed scheme uses a network-based client-server architecture for load balancing, since a large amount of data must be processed in real time across the multiple-cluster environment. In addition, we make use of JPEG compression and a RAM disk method for better performance. Our scheme first converts the input images from the 4 channels (16 cameras) into binary images, and then generates 3D stereo images after applying edge detection algorithms such as the Sobel and Prewitt operators, which are used to obtain disparities from the images of the 16 cameras. In terms of performance, the proposed scheme takes about 0.05 sec. to transfer an image from client to server and about 0.84 sec. to generate 3D stereo images after composing the 2D images from the 16 cameras. We confirm that our scheme efficiently generates 3D stereo images in multiple-view and multiple-cluster environments in real time. (An edge-detection sketch follows this entry.)
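
A small sketch of the edge-detection preprocessing named in the abstract: Sobel via OpenCV's built-in operator and Prewitt via explicit 3x3 kernels, producing gradient-magnitude maps that could feed a later disparity step. The function name and kernel sizes are illustrative; the paper's binarization and composition stages are not reproduced here.

```python
import cv2
import numpy as np

def edge_maps(gray):
    """Compute Sobel and Prewitt edge-magnitude maps for one grayscale view,
    as a preprocessing step before disparity estimation across camera views."""
    g = gray.astype(np.float32)
    # Sobel: built into OpenCV
    sx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    sy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    sobel_mag = cv2.magnitude(sx, sy)
    # Prewitt: applied via explicit 3x3 kernels
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    px = cv2.filter2D(g, cv2.CV_32F, kx)
    py = cv2.filter2D(g, cv2.CV_32F, ky)
    prewitt_mag = cv2.magnitude(px, py)
    return sobel_mag, prewitt_mag
```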