• Title/Summary/Keyword: Multiple camera


Multiple Human Tracking using Mean Shift and Depth Map with a Moving Stereo Camera (카메라 이동환경에서 mean shift와 깊이 지도를 결합한 다수 인체 추적)

  • Kim, Kwang-Soo;Hong, Soo-Youn;Kwak, Soo-Yeong;Ahn, Jung-Ho;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.34 no.10 / pp.937-944 / 2007
  • In this paper, we propose multiple human tracking with a moving stereo camera. The tracking process is based on the mean shift algorithm, which uses the color information of the target. A color-based tracking approach is invariant to translation and rotation of the target, but it has several problems: because mean shift relies on the color distribution, it is sensitive to the color distributions of the background and the targets. To solve this problem, we combine the color and depth information of the target. We also build a human body-part model to handle occlusions and adapt the tracking-box scale. As a result, the proposed method is simple and efficient for tracking multiple humans in real time.
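The color-depth combination can be illustrated with a small sketch. The following Python snippet is an interpretation of the abstract, not the authors' code: it weights a hue back-projection by a depth-similarity term before running OpenCV's mean shift. The histogram size, depth sigma, and depth-update rule are illustrative assumptions.

```python
# Minimal sketch: mean shift tracking whose weight map combines a color
# back-projection with a depth-similarity term, assuming an aligned RGB frame
# and depth map are available at every time step.
import cv2
import numpy as np

def make_color_model(frame_bgr, box):
    x, y, w, h = box
    roi = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [32], [0, 180])  # hue histogram
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame_bgr, depth, box, hue_hist, target_depth, depth_sigma=300.0):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color_prob = cv2.calcBackProject([hsv], [0], hue_hist, [0, 180], 1)
    # Depth term: pixels near the target's previous depth get higher weight.
    depth_prob = np.exp(-((depth.astype(np.float32) - target_depth) ** 2)
                        / (2.0 * depth_sigma ** 2))
    prob = (color_prob.astype(np.float32) * depth_prob).astype(np.uint8)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_box = cv2.meanShift(prob, box, criteria)
    x, y, w, h = new_box
    new_depth = float(np.median(depth[y:y+h, x:x+w]))  # update the depth estimate
    return new_box, new_depth
```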

Full-field Distortion Measurement of Virtual-reality Devices Using Camera Calibration and Probe Rotation (카메라 교정 및 측정부 회전을 이용한 가상현실 기기의 전역 왜곡 측정법)

  • Yang, Dong-Geun;Kang, Pilseong;Ghim, Young-Sik
    • Korean Journal of Optics and Photonics / v.30 no.6 / pp.237-242 / 2019
  • A compact virtual-reality (VR) device with a wider field of view provides users with a more realistic experience and a more comfortable fit, but VR lens distortion is inevitable, and the amount of distortion must be measured for correction. In this paper, we propose two full-field distortion-measurement methods that take the characteristics of the VR device into account. The first measures distortion from multiple images using camera calibration, a well-known technique for correcting camera-lens distortion. The second measures lens distortion at multiple measurement points by rotating the camera probe. Our proposed methods are verified by measuring the lens distortion of Google Cardboard, as a representative commercial VR device, and comparing our measurement results to a simulation using the nominal values.
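As a rough illustration of the first, calibration-based method, the sketch below estimates distortion coefficients from several checkerboard images with OpenCV; the pattern size and the folder of captured images are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumed workflow, not the authors' code): estimate lens
# distortion coefficients from several checkerboard images via camera calibration.
import glob
import cv2
import numpy as np

pattern = (9, 6)                      # inner-corner grid of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("vr_lens_views/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Distortion coefficients (k1, k2, p1, p2, k3) describe the full-field distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms, "distortion:", dist.ravel())
```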

Optimal Camera Placement Learning of Multiple Cameras for 3D Environment Reconstruction (3차원 환경 복원을 위한 다수 카메라 최적 배치 학습 기법)

  • Kim, Ju-hwan;Jo, Dongsik
    • Smart Media Journal / v.11 no.9 / pp.75-80 / 2022
  • Recently, research and development on immersive virtual-reality (VR) technology that provides a realistic experience has been widely conducted. To give VR participants a realistic experience, virtual environments should be composed of highly realistic scenes created through 3D reconstruction. In this paper, to acquire 3D information of a real space using multiple cameras during the reconstruction process, we propose a novel optimal camera-placement method that minimizes the distortion of the 3D information for accurate reconstruction. With our approach, real 3D information can be obtained with minimized error during environment reconstruction, and the created virtual environment can provide a more immersive experience.
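The abstract does not give the optimization details, but a placement search of this kind can be sketched as follows: candidate camera sets are scored by how much of the target volume they see from at least two viewpoints, and the best-scoring set is kept. The field of view, range, camera count, and random-search strategy are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch: score candidate multi-camera placements by how many sample
# points of a target volume fall inside at least two cameras' fields of view.
import numpy as np

rng = np.random.default_rng(0)
scene_pts = rng.uniform(-2.0, 2.0, size=(500, 3))     # hypothetical room volume

def visible(cam_pos, cam_dir, pts, fov_deg=60.0, max_range=5.0):
    v = pts - cam_pos
    d = np.linalg.norm(v, axis=1)
    cosang = (v @ cam_dir) / np.maximum(d, 1e-9)
    return (d < max_range) & (cosang > np.cos(np.radians(fov_deg / 2)))

def score(cameras):
    counts = np.zeros(len(scene_pts), dtype=int)
    for pos, direction in cameras:
        counts += visible(pos, direction, scene_pts)
    return np.mean(counts >= 2)        # fraction seen by at least 2 cameras

best, best_score = None, -1.0
for _ in range(2000):                  # simple random search over placements
    cams = []
    for _ in range(4):                 # four cameras, as an example
        pos = rng.uniform(-3.0, 3.0, size=3)
        direction = -pos / np.linalg.norm(pos)   # aim roughly at the origin
        cams.append((pos, direction))
    s = score(cams)
    if s > best_score:
        best, best_score = cams, s
print("best coverage fraction:", best_score)
```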

A Technique for Measuring Vibration Displacement Using Camera Image (카메라 영상을 이용한 진동변위 측정)

  • Son, Ki-Sung;Jeon, Hyeong-Seop;Park, Jin-Ho;Park, Jong Won
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.23 no.9 / pp.789-796 / 2013
  • Vibration measurement using image processing has been studied by many researchers because it can remotely measure vibration displacements at multiple points simultaneously. It is difficult, however, to obtain accurate displacement from the measured image signals, because the resolution of the image data depends on camera performance and is normally lower than that of a directly attached vibration transducer. This paper suggests an enhanced technique for vibration-displacement measurement that applies the expected value of the edge probability distribution to the varying pixel points in the image. The method both raises the effective resolution limit of the camera image and reduces measurement error. The performance of the proposed technique is verified by applying it to the vibration measurement of a rotating machine.
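One way to read the edge-probability idea is sketched below: the normalized gradient magnitude along a scan line is treated as a probability distribution, and its expected value gives a sub-pixel edge position whose frame-to-frame change is the displacement. This is an interpretation of the abstract, not the authors' implementation.

```python
# Minimal sketch: sub-pixel edge position as the expected value of the
# normalized gradient magnitude along a scan line, tracked across frames.
import numpy as np

def subpixel_edge(scanline):
    grad = np.abs(np.diff(scanline.astype(np.float64)))
    p = grad / grad.sum()                     # treat the gradient as a probability
    positions = np.arange(len(grad)) + 0.5    # gradient sample positions
    return float(np.sum(positions * p))       # expected edge location (pixels)

def displacement_series(frames, row):
    # frames: iterable of 2-D grayscale arrays; row: scan line crossing the edge
    edges = np.array([subpixel_edge(f[row, :]) for f in frames])
    return edges - edges[0]                   # displacement relative to frame 0
```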

Optical Camera Communication Based Lateral Vehicle Position Estimation Scheme Using Angle of LED Street Lights (LED 가로등의 각도를 이용한 광카메라통신기반 횡방향 차량 위치추정 기법)

  • Jeon, Hui-Jin;Yun, Soo-Keun;Kim, Byung Wook;Jung, Sung-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.9 / pp.1416-1423 / 2017
  • Lane detection technology is one of the most important issues for car safety and the self-driving capability of autonomous vehicles. This paper introduces an accurate lane detection scheme based on OCC (Optical Camera Communication) for moving vehicles. For lane detection, streetlights and the vehicle's front camera are used as the transmitter and the receiver, respectively. Based on the angle information of multiple streetlights in a captured image, the distance from the sidewalk can be calculated using nonlinear regression analysis. Simulation results show that the proposed scheme provides robust and accurate lane detection.
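A hedged sketch of the regression step might look like the following: a nonlinear model maps the observed streetlight angle to lateral distance and is fitted with SciPy's curve_fit. The model form and the calibration data are assumptions, not taken from the paper.

```python
# Minimal sketch: fit a nonlinear mapping from the image angle of a streetlight
# to the lateral distance from the sidewalk, using synthetic calibration data.
import numpy as np
from scipy.optimize import curve_fit

def lateral_model(angle_rad, a, b, c):
    # Hypothetical regression form: distance grows with the tangent of the angle.
    return a * np.tan(angle_rad) + b * angle_rad + c

# Calibration pairs (streetlight angle in the image, known lateral distance).
angles = np.radians(np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0]))
distances = np.array([0.8, 1.6, 2.5, 3.5, 4.7, 6.1])   # meters, synthetic

params, _ = curve_fit(lateral_model, angles, distances)
print("estimated lateral distance at 18 deg:",
      lateral_model(np.radians(18.0), *params))
```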

Development of an Algorithm to Measure the Road Traffic Data Using Video Camera

  • Kim, Hie-Sik;Kim, Jin-Man
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.95.2-95 / 2002
  • 1. Introduction of the camera detection system. The camera detection system is equipment that extracts real-time traffic information using image-processing techniques. This information can be used to analyze and control road traffic flow; it is also used to detect and control traffic flow for ITS (Intelligent Transportation Systems). Traffic information includes speed, headway, traffic flow, occupation time, and queue length. There are many detection systems for traffic data, but a video detection system can cover multiple lanes with only one camera and collect various kinds of traffic information, so it is considered the most efficient of all detection systems. Though the...
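For illustration only (not the paper's algorithm), a single-camera video detector can be sketched as background subtraction over per-lane "virtual loop" regions; the loop coordinates, video path, and occupancy threshold below are hypothetical.

```python
# Minimal sketch: per-lane occupancy from one camera via background subtraction
# over fixed "virtual loop" regions.
import cv2
import numpy as np

LANE_LOOPS = [(100, 400, 80, 40), (200, 400, 80, 40)]  # hypothetical (x, y, w, h)

cap = cv2.VideoCapture("road.avi")        # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
occupied_frames = [0] * len(LANE_LOOPS)
total_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    total_frames += 1
    for i, (x, y, w, h) in enumerate(LANE_LOOPS):
        loop = mask[y:y+h, x:x+w]
        if np.count_nonzero(loop) > 0.3 * loop.size:  # loop mostly foreground
            occupied_frames[i] += 1

cap.release()
for i, occ in enumerate(occupied_frames):
    print(f"lane {i}: occupancy {occ / max(total_frames, 1):.1%}")
```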


Implementation on Surveillance Camera Optimum Angle Extraction using Polarizing Filter

  • Kim, Jaeseung;Park, Seungseo;Kwon, Soonchul
    • International Journal of Advanced Smart Convergence / v.10 no.2 / pp.45-52 / 2021
  • The surveillance camera market has developed and plays an important role in the field of video surveillance. In recent years, however, the identification of areas requiring surveillance has been hampered by reflected light. Cameras using polarizing filters are being developed to reduce reflected light and facilitate identification, and software is needed to adjust the polarizing filter automatically. In this paper, we propose a method for extracting the optimal polarizing-filter angle for surveillance cameras through histogram analysis. First, the frames captured at multiple polarization angles are converted to grayscale to reduce the computational load. Then a histogram of each frame is generated and analyzed to extract the angle at which highlights are fewest. Experiments at 0˚ and 90˚ showed high performance in extracting the optimal angle. We hope this technology will be used for surveillance cameras in places with a lot of reflected light, such as beaches.
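The histogram step can be sketched as follows, under the assumption that "highlights" means near-saturated grayscale pixels; the threshold and the per-angle frame dictionary are illustrative, not taken from the paper.

```python
# Minimal sketch: for frames captured at several polarizing-filter angles,
# count near-saturated grayscale pixels via a histogram and pick the angle
# with the fewest highlights.
import cv2
import numpy as np

def highlight_count(frame_bgr, threshold=240):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    return int(hist[threshold:].sum())         # pixels at or above the threshold

def best_polarizer_angle(frames_by_angle):
    # frames_by_angle: dict mapping filter angle (degrees) -> BGR frame
    counts = {angle: highlight_count(f) for angle, f in frames_by_angle.items()}
    return min(counts, key=counts.get)          # angle with fewest highlights
```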

Robust background acquisition and moving object detection from dynamic scene caused by a moving camera (움직이는 카메라에 의한 변화하는 환경하의 강인한 배경 획득 및 유동체 검출)

  • Kim, Tae-Ho;Jo, Kang-Hyun
    • Proceedings of the Korean Information Science Society Conference / 2007.06c / pp.477-481 / 2007
  • A background is the part of an image sequence that does not vary much or change frequently. Using this assumption, this paper presents a background-acquisition algorithm for dynamic as well as static views. To generate the background, we find, within the search region of the current image, the region with the highest correlation to a region selected from the prior pyramid image. For each pair of a detected region in the current image and a selected region in the prior image, we calculate a movement vector in the time sequence. After computing all movement vectors between two successive images, a vector histogram is used to determine the camera movement: the vector with the highest density in the histogram is taken as the camera movement. Using the determined camera movement, we classify clusters based on the intensities of pixels matched to their prior pixels along the camera movement. Finally, we eliminate clusters whose weight falls below a threshold and combine the remaining clusters at each pixel to generate multiple background clusters. Experimental results show that the background can be detected automatically whether or not the camera moves.
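A minimal sketch of the vector-histogram idea follows, assuming block matching as the correlation step (the paper's pyramid search is not reproduced): each block votes with its motion vector, and the mode of the votes is taken as the camera movement.

```python
# Minimal sketch: dominant camera motion between two frames via block matching
# on a coarse grid and the mode of the motion-vector histogram.
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray, block=32, search=8):
    h, w = prev_gray.shape
    votes = {}
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            tmpl = prev_gray[y:y+block, x:x+block]
            y0, y1 = max(0, y - search), min(h, y + block + search)
            x0, x1 = max(0, x - search), min(w, x + block + search)
            res = cv2.matchTemplate(curr_gray[y0:y1, x0:x1], tmpl,
                                    cv2.TM_CCOEFF_NORMED)
            _, _, _, loc = cv2.minMaxLoc(res)          # best-match location
            vec = (loc[0] + x0 - x, loc[1] + y0 - y)   # block motion vector
            votes[vec] = votes.get(vec, 0) + 1
    return max(votes, key=votes.get)   # histogram mode = camera movement
```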


Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.24 no.5 / pp.765-774 / 2019
  • In this paper, we propose a point-cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with precisely estimating camera position. Existing 3D model-generation methods require a large number of cameras or expensive 3D cameras, and the conventional method of obtaining camera extrinsic parameters from two-dimensional images has a large estimation error. We propose a method that uses depth images and function optimization to obtain coordinate-transformation parameters with error within a valid range, in order to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
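A sketch of the function-optimization step is given below, under the assumption that rough point correspondences between two clouds are already available; it refines a 6-DoF rigid transform by least squares and is not the paper's implementation.

```python
# Minimal sketch: refine the rigid transform between two RGB-D point clouds by
# least-squares optimization of a 6-DoF parameter vector.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, src, dst):
    # params: 3 rotation-vector components + 3 translation components
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return ((src @ R.T + t) - dst).ravel()

def register(src_points, dst_points):
    # src_points, dst_points: (N, 3) arrays of corresponding 3-D points
    x0 = np.zeros(6)
    result = least_squares(residuals, x0, args=(src_points, dst_points))
    R = Rotation.from_rotvec(result.x[:3]).as_matrix()
    return R, result.x[3:]              # rotation matrix and translation vector
```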

Virtual portraits from rotating selfies

  • Yongsik Lee;Jinhyuk Jang;Seungjoon Yang
    • ETRI Journal / v.45 no.2 / pp.291-303 / 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies. The selfies are taken at the same location while the user spins around. The scene is analyzed using multiple selfies to determine the locations of the camera, subject, and background. Then, a view from a virtual camera is synthesized. We present two use cases. After rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length. Following that, changes in perspective and lens characteristics caused by new compositions and focal lengths are simulated. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device equipped with a lens that has only one focal length. The rendered portraits included compositions that would be photographed with actual lenses. Our proposed algorithms can provide new use cases in which selfie compositions are not limited by a camera's focal length or distance from the camera.
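A small worked example of the perspective effect these virtual portraits simulate, assuming a simple pinhole model (not the paper's rendering pipeline): doubling the focal length and pulling the camera back so the subject's image size stays constant makes the background appear larger.

```python
# Minimal worked example under a pinhole-camera assumption: image size ~ f / d.
f1, f2 = 26.0, 52.0                  # original and virtual focal lengths (mm)
d_subject, d_background = 1.0, 5.0   # distances from the camera (m)

# Move the camera back so the subject's image size is unchanged at f2.
d_subject_2 = d_subject * f2 / f1
d_background_2 = d_background + (d_subject_2 - d_subject)

# Relative background magnification between the two compositions.
background_ratio = (f2 / d_background_2) / (f1 / d_background)
print(f"background appears {background_ratio:.2f}x larger")   # about 1.67x
```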