• Title/Summary/Keyword: Multiple camera

Search results: 531

Multiple Pedestrians Detection using Motion Information and Support Vector Machine from a Moving Camera Image (이동 카메라 영상에서 움직임 정보와 Support Vector Machine을 이용한 다수 보행자 검출)

  • Lim, Jong-Seok;Park, Hyo-Jin;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.250-257
    • /
    • 2011
  • In this paper, we propose a method for detecting multiple pedestrians from a moving camera image using motion information and an SVM (Support Vector Machine). First, we detect moving pedestrians from both the difference image and the projection histogram, after compensating for camera ego-motion using corresponding feature sets. The difference image is simple to compute, but it cannot detect motionless pedestrians. To address this problem, we detect motionless pedestrians using an SVM, which works particularly well on binary classification problems such as pedestrian detection. However, the SVM fails when pedestrians are adjacent to each other or move their arms and legs excessively. Therefore, we propose a method that combines motion information and the SVM to detect motionless and adjacent pedestrians, as well as pedestrians making excessive movements. Experimental results on various test video sequences demonstrate the efficiency of our approach, with an average detection rate of 94% and a false positive rate of 2.8%.
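A minimal sketch of the frame-differencing stage the abstract describes: threshold the difference of two successive grayscale frames, then use a column projection histogram to localize moving pedestrians. Names and thresholds are illustrative, not the paper's code; a real pipeline would add ego-motion compensation and the SVM stage for motionless people.

```python
def difference_mask(prev, curr, thresh=20):
    """Binary motion mask from two grayscale frames (lists of rows)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def column_projection(mask):
    """Vertical projection histogram: count of motion pixels per column."""
    return [sum(col) for col in zip(*mask)]

def candidate_columns(proj, min_count=1):
    """Columns whose projection exceeds min_count are pedestrian candidates."""
    return [i for i, v in enumerate(proj) if v >= min_count]
```

A detector would then run the classifier only on windows around the candidate columns, which is what makes the differencing stage a cheap first filter.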

A Biomimetic Object-Approach Detection Sensor using Multiple Images (다중 영상을 이용한 생체모방형 물체 접근 감지 센서)

  • Choi, Myoung Hoon;Kim, Min;Jeong, Jae-Hoon;Park, Won-Hyeon;Lee, Dong Heon;Byun, Gi-Sik;Kim, Gwan-Hyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.91-93
    • /
    • 2016
  • Extracting three-dimensional information from 2D images is generally done either with two cameras, in a binocular setup known as "stereo vision," or with a monocular camera, for which it is a much more difficult step. In today's CCTV and automatic object-tracking systems, a stereo camera that mimics the human eyes can reveal site or work conditions more clearly, maximizing the efficiency of avoidance/control operations and multi-tasking. An object-tracking system based on a single 2D image cannot recognize the distance to a target, whereas the parallax of a stereo image allows the distance to be displayed to the observer and the object to be controlled more effectively.

  • PDF
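The stereo-parallax idea in the abstract reduces to the standard pinhole relation Z = f·B/d (depth from focal length, baseline, and disparity). A one-function sketch, with illustrative parameter values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal pixel shift of the object between views
    Returns depth in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

This is why a monocular 2D tracker cannot recover distance: with one view there is no disparity to plug in.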

A Study on the Production of Perspective Images using Drone (드론을 이용한 다시점 투영 이미지 제작 연구)

  • Choi, Ki-chang;Kwon, Soon-chul;Lee, Seung-hyun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.953-958
    • /
    • 2022
  • A holographic stereogram can provide depth perception without visual fatigue and dizziness because it uses multiple images acquired from multiple viewpoints. To produce a holographic stereogram, it is necessary to obtain perspective images of a live object and record them on film using a digital hologram printer. When acquiring perspective images, a distortion-free hologram can be produced only if the distance between the camera and the target is kept constant. If the target is small, keeping a constant camera-to-object distance is possible, but for a large target this is difficult. In this study, we photograph a large object using the POI (Point of Interest) function, one of the smart flight modes of a drone, to produce the perspective images required for hologram production. Afterwards, problems such as unexpected shaking and changes in the camera-to-object distance are corrected in post-production. As a result, we produce the required perspective images.
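The distance correction mentioned for post-production can be sketched as per-frame scale factors: apparent object size is inversely proportional to camera-to-object distance, so a frame shot at distance d_i is rescaled by d_i/d_ref to match the reference frame. This is an assumed simplification of the paper's correction, not its actual procedure.

```python
def scale_correction(distances_m, ref_index=0):
    """Per-frame scale factors that normalize apparent object size.

    Apparent size ~ 1/distance, so scaling frame i by d_i / d_ref
    makes the object the same size as in the reference frame.
    """
    d_ref = distances_m[ref_index]
    return [d / d_ref for d in distances_m]
```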

Segmentation of Moving Multiple Vehicles using Logic Operations (논리연산을 이용한 주행차량 영상분할)

  • Choi Kiho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.1 no.1
    • /
    • pp.10-16
    • /
    • 2002
  • In this paper, a novel algorithm for segmenting multiple moving vehicles in video sequences using logic operations is proposed. For the case of multiple vehicles in a scene, the proposed algorithm begins with a robust double-edge image derived from the difference between two successive frames using the exclusive-OR operation. After extracting only the edges of the moving vehicles using a Laplacian filter, the AND operation, and a dilation operation, the image is segmented into individual moving-vehicle images. The features of the moving vehicles can be extracted directly from the segmented images. The proposed algorithm requires no separate preprocessing steps, so it avoids the noise that normally arises in preprocessing of the original images, and it is further simplified by the use of logic operations. The algorithm is evaluated on an outdoor video sequence of multiple moving vehicles, 90,000 frames at 30 fps from a low-end video camera, and produces promising results.

  • PDF
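The logic operations named in the abstract (XOR of successive frames for the double-edge image, AND to keep common edges, dilation to close gaps) can be sketched on binary frames. A toy implementation, assuming binary images as lists of rows:

```python
def xor_frames(a, b):
    """Exclusive-OR of two binary frames -> double-edge motion image."""
    return [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def and_frames(a, b):
    """AND of two binary frames -> pixels present in both."""
    return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def dilate(img):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x]
                      for y in range(max(0, r - 1), min(h, r + 2))
                      for x in range(max(0, c - 1), min(w, c + 2))) else 0
             for c in range(w)] for r in range(h)]
```

Chaining these per frame pair is the whole "no separate preprocessing" pipeline: the logic operations themselves do the cleanup.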

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.713-723
    • /
    • 2017
  • Since a panoramic image can overcome the limitation of a camera's viewing angle and provide a wide field of view, it has been actively studied in the fields of computer vision and stereo cameras. To generate a panoramic image, stitching images taken by multiple ordinary cameras is widely used instead of a wide-angle camera, whose images are distorted, because stitching can reduce image distortion. The image-stitching technique creates descriptors for feature points extracted from multiple images, compares the similarity of the feature points, and links the images together into one image. Each feature point carries several hundred dimensions of information, and data-processing time increases as more images are stitched. In particular, when a panorama is generated from images of one object photographed by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step for efficient stitching of images obtained from many unspecified cameras for one object or environment. Data-processing time is reduced by pre-grouping the images based on camera sensor information, which reduces the number of images to be stitched at one time; stitching is then done hierarchically to create one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
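The pre-grouping step can be illustrated by bucketing images on one piece of camera sensor metadata, e.g. compass heading (yaw), so that only images with similar orientation are matched against each other first. The bucket width and metadata field are assumptions for illustration; the paper's grouping criteria may differ.

```python
from collections import defaultdict

def group_by_yaw(images, bucket_deg=30.0):
    """Group (name, yaw_deg) pairs into heading buckets.

    Feature matching then runs within each bucket, shrinking the
    quadratic matching cost before the hierarchical stitch merges groups.
    """
    groups = defaultdict(list)
    for name, yaw in images:
        groups[int((yaw % 360.0) // bucket_deg)].append(name)
    return dict(groups)
```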

Developing an Occupants Count Methodology in Buildings Using Virtual Lines of Interest in a Multi-Camera Network (다중 카메라 네트워크 가상의 관심선(Line of Interest)을 활용한 건물 내 재실자 인원 계수 방법론 개발)

  • Chun, Hwikyung;Park, Chanhyuk;Chi, Seokho;Roh, Myungil;Susilawati, Connie
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.5
    • /
    • pp.667-674
    • /
    • 2023
  • In the event of a disaster within a building, the prompt and efficient evacuation and rescue of occupants becomes the foremost priority for minimizing casualties. For such rescue operations, it is essential to ascertain the distribution of individuals within the building. Nevertheless, responders primarily depend on accounts from pertinent individuals such as building proprietors or security staff, alongside basic data such as floor dimensions and maximum capacity. Consequently, accurately determining the number of occupants is of paramount significance for reducing uncertainty on site and facilitating effective rescue activities during the golden hour. This research introduces a methodology that employs computer vision algorithms to count the occupants at distinct building locations based on images captured by multiple installed CCTV cameras. The counting methodology consists of three stages: (1) establishing virtual Lines of Interest (LOI) for each camera to construct a multi-camera network environment, (2) detecting and tracking people within the monitored area using deep learning, and (3) aggregating counts across the multi-camera network. The proposed methodology was validated through experiments in a five-story building, achieving an average accuracy of 89.9%, an average MAE of 0.178, and an RMSE of 0.339, and the advantages of using multiple cameras for occupant counting are explained. This paper demonstrates the potential of the proposed methodology for more effective and timely disaster management through common surveillance systems by providing prompt occupancy information.
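The Line-of-Interest stage reduces to a geometric test: a tracked person's position changes sides of a virtual line segment between two frames, which is detected with a cross-product sign test. A minimal sketch, assuming one track of 2D positions per person (not the paper's implementation):

```python
def side_of_line(p, a, b):
    """Signed cross product: which side of the directed line a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_crossings(track, a, b):
    """Count entries and exits of one track across the LOI from a to b."""
    entries = exits = 0
    for p, q in zip(track, track[1:]):
        s0, s1 = side_of_line(p, a, b), side_of_line(q, a, b)
        if s0 < 0 <= s1:
            entries += 1
        elif s0 >= 0 > s1:
            exits += 1
    return entries, exits
```

Per-camera (entries - exits) totals are then summed across the multi-camera network to get the zone occupancy.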

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions: Part B
    • /
    • v.16B no.5
    • /
    • pp.347-358
    • /
    • 2009
  • In this thesis, we develop a vision-based hand-motion recognition system using a camera with two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. From this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, enhancing the resolving power for linear gesture patterns. We have tested the proposed approach in terms of trace-angle accuracy and the dimensions of the working volume.
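Fitting a motion plane to a 3D trajectory and projecting points onto it is a standard least-squares construction: the plane normal is the direction of least variance of the points. A sketch of that construction (the paper's own estimator may differ):

```python
import numpy as np

def fit_motion_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal).

    The normal is the right singular vector with the smallest singular
    value of the centered point cloud, i.e. the least-variance direction.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def project_to_plane(p, centroid, normal):
    """Orthogonal projection of a 3D point onto the fitted plane."""
    p = np.asarray(p, dtype=float)
    return p - np.dot(p - centroid, normal) * normal
```

Projecting the fingertip trajectory this way removes the out-of-plane jitter, which is what sharpens the linear gesture patterns.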

Performance Simulation of Various Feature-Initialization Algorithms for Forward-Viewing Mono-Camera-Based SLAM (전방 모노카메라 기반 SLAM 을 위한 다양한 특징점 초기화 알고리즘의 성능 시뮬레이션)

  • Lee, Hun;Kim, Chul Hong;Lee, Tae-Jae;Cho, Dong-Il Dan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.833-838
    • /
    • 2016
  • This paper presents a performance evaluation of various feature-initialization algorithms for forward-viewing mono-camera based simultaneous localization and mapping (SLAM), specifically in indoor environments. For mono-camera based SLAM, the position of a feature point cannot be known from a single view; therefore, it must be estimated by a feature-initialization method using measurements from multiple viewpoints. The accuracy of the feature-initialization method directly affects the accuracy of the SLAM system. In this study, four different feature-initialization algorithms are evaluated in simulations: linear triangulation, depth-parameterized linear triangulation, weighted nearest-point triangulation, and particle-filter-based depth estimation. In the simulation, the virtual feature positions are estimated while a virtual robot carrying a virtual forward-viewing mono-camera moves forward. The results show that the linear triangulation method provides the best results in terms of feature-position estimation accuracy and computational speed.
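Linear triangulation, the best-performing method in this evaluation, is the classic DLT construction: stack one pair of equations per view from x × (P X) = 0 and take the null vector. A two-view sketch with illustrative projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation.

    P1, P2 -- 3x4 projection matrices; x1, x2 -- pixel observations (x, y).
    Returns the 3D point minimizing the algebraic error, via the SVD
    null vector of the stacked constraint matrix.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With a forward-moving camera the baseline between views is small, which is exactly why initialization accuracy was worth comparing in this paper.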

Generating a Stereoscopic Image from a Monoscopic Camera (단안 카메라를 이용한 입체영상 생성)

  • Lee, Dong-Woo;Lee, Kwan-Wook;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.17-25
    • /
    • 2012
  • In this paper, we propose a method for producing a stereoscopic image from multiple images captured by a monoscopic camera. By translating the camera in the horizontal direction, left and right images are chosen among N captured images. For this, image edges are extracted and a rotational angle is estimated from edge orientation; a translational vector is likewise estimated from the correlation of projected image data. Two optimal images are then chosen and compensated using the rotational angle and the translational vector in order to make a satisfactory stereoscopic image. The proposed method was tested on thirty-two test image sets. A subjective visual-fatigue test was carried out to validate the 3D quality of the stereoscopic images; in terms of visual fatigue, the 3D satisfaction ratio reached approximately 84%.

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency and productivity. For close-proximity human-robot collaboration in construction sites, robots need to be aware of the context, especially construction worker's behavior, in real-time to avoid collision with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusions, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface material. To address these issues, this study proposes a novel method of 3D human pose estimation by extracting 2D location of each joint from multiple images captured at the same time from different viewpoints, fusing each joint's 2D locations, and estimating the 3D joint location. For higher accuracy, the probabilistic representation is used to extract the 2D location of the joints, considering each joint location extracted from images as a noisy partial observation. Then, this study estimates the 3D human pose by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of estimation and the feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.

  • PDF
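The probabilistic fusion step in the abstract treats each view's 2D joint detection as a noisy observation and maximizes the likelihood. Under independent isotropic Gaussian noise, the maximum-likelihood fusion of 2D observations is the inverse-variance weighted mean, sketched below; the paper additionally lifts the fused estimates to 3D, which this toy omits.

```python
def fuse_joint_2d(observations):
    """ML fusion of noisy 2D joint observations.

    observations -- list of ((x, y), variance) pairs, one per camera view.
    Assuming independent isotropic Gaussian noise, the ML estimate is
    the inverse-variance weighted mean of the observed positions.
    """
    wsum = sum(1.0 / var for _, var in observations)
    x = sum(p[0] / var for p, var in observations) / wsum
    y = sum(p[1] / var for p, var in observations) / wsum
    return x, y
```

Low-variance (confident) views dominate the fused estimate, which is how a multi-camera setup degrades gracefully when one view is occluded.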