• Title/Summary/Keyword: Multiple camera


The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera (줌 카메라를 통해 획득된 거리별 얼굴 영상을 이용한 원거리 얼굴 인식 기술)

  • Moon, Hae-Min;Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.6 / pp.1139-1145 / 2014
  • User recognition technology, which identifies or verifies a certain individual, is essential for intelligent services in robotic environments. The conventional face recognition algorithm, which uses single-distance face images for training, suffers a drop in recognition rate as distance increases. An algorithm trained on face images captured at each actual distance performs well, but requires user cooperation. This paper proposes an LDA-based long-distance face recognition method that uses multiple-distance face images acquired from a zoom camera as training images. The proposed technique improved recognition performance by an average of 7.8% over training on single-distance images, while falling an average of 8.0% short of training on images captured at each actual distance. However, the proposed method takes less time and demands less cooperation from users when capturing face images.
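The pooling idea behind the proposed training can be illustrated with a minimal two-class Fisher LDA sketch. The synthetic feature vectors stand in for face images; the distances, dimensions, and noise levels are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = 20
m0, m1 = rng.normal(size=dims), rng.normal(size=dims) + 1.0  # two "subjects"

def sample(mean, noise, n=20):
    return mean + rng.normal(scale=noise, size=(n, dims))

# Pool training features captured at three distances (noise grows with distance).
X0 = np.vstack([sample(m0, s) for s in (0.3, 0.5, 0.8)])
X1 = np.vstack([sample(m1, s) for s in (0.3, 0.5, 0.8)])

# Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0).
mu0, mu1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw + 1e-6 * np.eye(dims), mu1 - mu0)

def classify(x):
    # Assign to the nearest projected class mean along the Fisher direction.
    p = x @ w
    return int(abs(p - mu1 @ w) < abs(p - mu0 @ w))

# Far-distance probes: the jointly trained model still separates them.
acc = (sum(classify(x) == 0 for x in sample(m0, 0.8, 50)) +
       sum(classify(x) == 1 for x in sample(m1, 0.8, 50))) / 100
```

Training on all distances jointly is what lets a single model cover the zoom camera's whole range, at the modest accuracy cost the abstract reports versus per-distance models.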

Self-calibration of a Multi-camera System using Factorization Techniques for Realistic Contents Generation (실감 콘텐츠 생성을 위한 분해법 기반 다수 카메라 시스템 자동 보정 알고리즘)

  • Kim, Ki-Young;Woo, Woon-Tack
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.495-506 / 2006
  • In this paper, we propose a self-calibration method for multi-camera systems using factorization techniques for realistic contents generation. Traditional self-calibration algorithms for multi-camera systems have focused on stereo(-rig) systems or multiple cameras in a fixed configuration, which makes them difficult to apply to 3D reconstruction with a mobile multi-camera system and other general applications. We therefore propose a robust algorithm for generally structured multi-camera systems, including an algorithm for a plane-structured multi-camera system. We explain the theoretical background and practical usage based on projective factorization and the proposed affine factorization, and show experimental results with both simulated data and real images. The proposed algorithm can be used for 3D reconstruction and mobile Augmented Reality.
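The affine-factorization step the abstract mentions can be sketched in the classic Tomasi-Kanade style on synthetic affine cameras and points; the metric upgrade (the self-calibration step that resolves the remaining affine ambiguity) is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
F, P = 6, 30                        # camera views and 3D points

# Synthetic scene: random 3D points seen by random affine cameras.
S_true = rng.normal(size=(3, P))
W = np.zeros((2 * F, P))            # measurement matrix of image coordinates
for f in range(F):
    A = rng.normal(size=(2, 3))     # affine projection of camera f
    t = rng.normal(size=(2, 1))     # image-plane translation
    W[2 * f:2 * f + 2] = A @ S_true + t

# 1) Register: subtracting per-row centroids removes the translations.
W_reg = W - W.mean(axis=1, keepdims=True)
# 2) Rank-3 factorization via SVD: W_reg ≈ M @ S.
U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
M = U[:, :3] * s[:3]                # stacked camera matrices (2F x 3)
S = Vt[:3]                          # structure, up to an affine ambiguity

err = np.linalg.norm(W_reg - M @ S)
```

With noise-free measurements the registered matrix is exactly rank 3, so the factorization residual is numerically zero; with real images the SVD gives the best rank-3 approximation.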

Multiple-Section Using 3D Spline based Cut-Scene Effect (3차 곡선을 이용한 다 구간 경로 기반의 컷씬 효과)

  • Sun, Bok-Gun;Shin, Young-Seo;Park, Sung-Jun
    • Journal of Korea Game Society / v.11 no.1 / pp.93-100 / 2011
  • Cinematic camera techniques are increasingly applied to game development. This study discusses object movement and camera effects for games using curves in 3D space. The Catmull-Rom spline passes through its control points, following the intended path more closely than other curve algorithms. With the proposed algorithm, a Catmull-Rom spline is created dynamically from the user's input across multiple sections of 3D space, and objects pass smoothly along the route. In addition, a cut-scene section can be specified along the Catmull-Rom spline so that the object's movement can be observed. The results on the accuracy and efficiency of the curve show that the Catmull-Rom spline is highly efficient not only for object movement but also for cinematic camera techniques.
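A Catmull-Rom segment between waypoints p1 and p2 (with neighbours p0 and p3 shaping the tangents) has a standard closed form. A minimal sketch of a multi-section path, with illustrative waypoints:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * (2 * p1 +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Multi-section path: every consecutive window of four waypoints is one
# segment, so the object (or camera) passes through each interior waypoint.
waypoints = [np.array(p, float) for p in
             [(0, 0, 0), (1, 2, 0), (3, 3, 1), (5, 1, 2), (6, 0, 2)]]
path = [catmull_rom(*waypoints[i:i + 4], t)
        for i in range(len(waypoints) - 3)
        for t in np.linspace(0.0, 1.0, 20)]
```

The interpolation property (the segment starts exactly at p1 and ends exactly at p2) is what makes this spline convenient for cut-scene paths: waypoints placed by the designer are hit exactly.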

Real-Time Interested Pedestrian Detection and Tracking in Controllable Camera Environment (제어 가능한 카메라 환경에서 실시간 관심 보행자 검출 및 추적)

  • Lee, Byung-Sun;Rhee, Eun-Joo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.293-297 / 2007
  • This thesis proposes a new algorithm that detects multiple moving objects in color images acquired in real time using a CMODE (Correct Multiple Object DEtection) method and tracks an interested pedestrian using motion and hue information. After the multiple objects are detected, non-pedestrian motion such as shaking trees or moving cars is removed using structural characteristics and human shape information, so that the interested pedestrian can be detected. The first similarity test for tracking the interested pedestrian uses the distance between the previous and current centroids of the pedestrian. For the region that passes the first test, three feature points are computed with the k-means algorithm, and a second similarity test compares the average hue over the 3×3 neighborhood of each feature point. The camera zoom is adjusted so that an interested pedestrian at long distance can be tracked easily, and the camera's FOV (Field of View) is adjusted when the pedestrian leaves a fixed region of the screen. Experimental results comparing the proposed CMODE method with the labeling method show an average access rate one fourth that of the labeling method and an average detection time three times faster. Even against complex backgrounds, such as areas with shaking trees, moving cars, or shadows, the interested pedestrian was detected at a high average rate of 96.5%. Tracking using motion and hue information achieved a high average rate of 95%, and the interested pedestrian could be tracked continuously through camera FOV and zoom adjustment.

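The two-stage similarity test described above can be sketched in NumPy. The CMODE detector itself is not reproduced; the region dictionaries, thresholds, and flat hue handling are illustrative assumptions:

```python
import numpy as np

def kmeans(points, k=3, iters=10, seed=0):
    # Minimal k-means on 2D pixel coordinates to pick k feature points.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers

def mean_hue_3x3(hue_img, cx, cy):
    # Average hue of the 3x3 window around a feature point (wraparound of
    # the circular hue axis is ignored for brevity).
    y, x = int(round(cy)), int(round(cx))
    return hue_img[y - 1:y + 2, x - 1:x + 2].mean()

def is_same_pedestrian(prev, cur, hue_img, dist_thresh=30.0, hue_thresh=10.0):
    # Stage 1: centroid distance between the previous and current regions.
    if np.linalg.norm(prev["centroid"] - cur["centroid"]) > dist_thresh:
        return False
    # Stage 2: mean hue around three k-means feature points of the region.
    feats = kmeans(cur["pixels"], k=3)
    hues = [mean_hue_3x3(hue_img, cx, cy) for cx, cy in feats]
    return abs(np.mean(hues) - prev["mean_hue"]) < hue_thresh
```

The cheap centroid test gates the more expensive hue comparison, which matches the paper's two-step design.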

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure scene depth in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF sensors, we apply post-processing to address these problems. The depth information from the sensors is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
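The role of the ToF data as a stereo prior can be sketched with the standard depth-to-disparity relation d = f·B/Z plus a simple gradient-based depth-discontinuity map. The focal length, baseline, and threshold values are illustrative; the paper's warping step and belief-propagation matcher are not shown:

```python
import numpy as np

def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Seed stereo disparity from ToF depth via d = f * B / Z.

    Zero depth marks pixels with no ToF return; they stay at disparity 0."""
    depth = np.asarray(depth_mm, dtype=float)
    disparity = np.zeros_like(depth)
    valid = depth > 0
    disparity[valid] = focal_px * baseline_mm / depth[valid]
    return disparity

def discontinuity_map(depth, thresh=50.0):
    # Mark pixels where depth jumps sharply; the stereo matcher can relax
    # its smoothness term across these edges.
    gy, gx = np.gradient(np.asarray(depth, dtype=float))
    return np.hypot(gx, gy) > thresh
```

Seeding the matcher with these disparities shrinks its search range, which is where the reported runtime reduction comes from.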

Position Estimation of Welding Panels for Sub-Assembly Welding Line in Shipbuilding using Camera Vision System (조선 소조립 용접자동화의 부재위치 인식을 위한 카메라 시각 시스템)

  • 전바롬;윤재웅;김재훈
    • Journal of Institute of Control, Robotics and Systems / v.5 no.3 / pp.344-352 / 1999
  • Automation of the welding process in shipyards has been demanded because of its dependence on skilled operators and the inferior working environment. In response, a multiple-robot welding system for the sub-assembly welding line has been developed, realized, and installed at the Keoje shipyard. To realize an automatic welding system, the robots must be equipped with a sensing system that recognizes the positions of the welding panels. In this research, a camera vision system (CVS) is developed to detect the position of base panels for the sub-assembly line in shipbuilding. Two camera vision systems are used at two different stages (fitting and welding) to automate the recognition and positioning of welding lines. Various image processing algorithms for automatic recognition of panel position are proposed in this paper.


A Study on the Production Efficiency of Movie Filming Environment Using 360° VR (360VR을 활용한 영화촬영 환경을 위한 제작 효율성 연구)

  • Lee, Young-suk;Kim, Jungwhan
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.2036-2043 / 2016
  • 360° Virtual Reality (VR) live-action movies are filmed by attaching multiple cameras to a rig to capture images omnidirectionally. In particular, for a live-action film that requires a variety of scenes, the director of photography and staff usually have to operate the rigged cameras directly around the scene and edit the footage during the post-production stage, so the entire process incurs considerable time and cost. However, high-quality omnidirectional images could be acquired with fewer staff if the camera rig could be controlled remotely, allowing more flexible camera work. Thus, a 360° VR filming system with a remotely controlled camera rig is proposed in this study. With this system, movie producers will be able to create films that provide greater immersion.

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal / v.31 no.4 / pp.429-437 / 2009
  • This paper proposes a novel technique for relighting 3D scenes with interactive viewpoint changes. The proposed technique is based on a deep-framebuffer framework for fast relighting computation, adopting image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep-framebuffer pixel, then computes illumination at each pixel from the cached values. All relighting computations except the deep-framebuffer pre-computation are carried out at interactive rates on the GPU.
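The deep-framebuffer idea, recomputing shading from cached per-pixel parameters instead of re-rendering the scene, can be sketched with a CPU-side Lambertian pass. The buffer names and the simple ambient + diffuse model are illustrative assumptions; the paper caches the full shader parameter set and evaluates it on the GPU:

```python
import numpy as np

def relight(gbuffer, light_pos, light_color):
    """Recompute ambient + Lambertian shading per pixel from cached parameters."""
    pos = gbuffer["position"]   # (H, W, 3) world-space surface points
    nrm = gbuffer["normal"]     # (H, W, 3) unit surface normals
    alb = gbuffer["albedo"]     # (H, W, 3) diffuse surface color
    amb = gbuffer["ambient"]    # (H, W, 3) ambient term

    to_light = light_pos - pos
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
    ndotl = np.clip((nrm * to_light).sum(-1, keepdims=True), 0.0, None)
    return amb + alb * light_color * ndotl
```

Moving the light only re-runs this cheap per-pixel pass; geometry, visibility, and the cached buffers are untouched, which is what makes interactive relighting possible.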

Implementation of Adaptive Shading Correction System Supporting Multi-Resolution for Camera

  • Ha, Joo-Young;Song, Jin-Geun;Im, Jeong-Uk;Min, Kyoung-Joong;Kang, Bong-Soon
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2006.06a / pp.25-28 / 2006
  • In this paper, we present a shading correction system supporting multiple resolutions for cameras. The shading effect is caused by non-uniform illumination, non-uniform camera sensitivity, or even dirt and dust on glass (lens) surfaces. This shading effect is generally undesirable [1], and eliminating it is frequently necessary for subsequent processing, especially when quantitative microscopy is the final goal. The proposed system supports thirty-nine image resolutions scanned in interlaced and progressive modes. Moreover, the system uses continuous quadratic equations instead of a piece-wise linear curve composed of multiple line segments, so it can correct the shading effect without discontinuity at any supported resolution. The proposed system is also experimentally demonstrated with a Xilinx Virtex FPGA XCV2000E-6BG5560 and a TV set.

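The continuous-quadratic idea can be sketched in software as fitting a quadratic gain curve in image radius from a flat-field capture and dividing it out. This is only a sketch of the principle; the paper's FPGA pipeline and multi-resolution handling are not modeled:

```python
import numpy as np

def radius_grid(h, w):
    # Distance of every pixel from the image centre.
    yy, xx = np.mgrid[0:h, 0:w]
    return np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)

def fit_shading_gain(flat_field):
    """Fit one continuous quadratic gain curve g(r) = a*r^2 + b*r + c
    to a flat-field capture, with r the distance from the image centre."""
    r = radius_grid(*flat_field.shape)
    return np.polyfit(r.ravel(), flat_field.ravel(), 2)

def correct_shading(img, coeffs):
    gain = np.polyval(coeffs, radius_grid(*img.shape))
    gain /= gain.max()   # normalise so the brightest point keeps its level
    return img / gain
```

Because the gain model is a single smooth quadratic rather than joined line segments, the corrected image has no seams at segment boundaries, which is the discontinuity problem the abstract mentions.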

Advanced surface spectral-reflectance estimation using a population with similar colors (유사색 모집단을 이용한 개선된 분광 반사율 추정)

  • 이철희;김태호;류명춘;오주환
    • Proceedings of the Korea Society for Industrial Systems Conference / 2001.05a / pp.280-287 / 2001
  • Studies estimating the surface spectral reflectance of objects with multi-spectral camera systems have received widespread attention. However, a multi-spectral camera system requires an additional color filter for each added channel, and multiple captures increase system complexity. This paper therefore proposes an algorithm that reduces the estimation error of surface spectral reflectance with a conventional 3-band RGB camera. In the proposed method, adaptive principal components for each pixel are calculated by renewing the population of surface reflectances, and these adaptive principal components reduce the estimation error for the current pixel. To evaluate the proposed method, 3-band principal component analysis, 5-band Wiener estimation, and the proposed method are compared in an estimation experiment on the Macbeth ColorChecker. As a result, the proposed method showed a lower mean square error between the estimated and measured spectra than the conventional 3-band principal component analysis method, and similar or better estimation performance than the 5-band Wiener method.

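The baseline the proposed method improves on, estimating a full-spectrum reflectance from a 3-band response via a principal-component basis, can be sketched as follows. The synthetic smooth reflectances and Gaussian camera sensitivities are illustrative assumptions, and the paper's per-pixel population renewal is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_samples = 31, 200    # e.g. 400-700 nm sampled at 10 nm steps

# Synthetic population of smooth reflectances built from three low-frequency
# modes (a stand-in for a measured population such as color-chip spectra).
wl = np.linspace(0.0, 1.0, n_bands)
modes = np.stack([wl, np.sin(np.pi * wl), np.exp(-3.0 * wl)], axis=1)
pop = 0.5 + modes @ (0.1 * rng.normal(size=(3, n_samples)))   # (31, N)

# Illustrative Gaussian RGB camera sensitivities (3 x 31).
S = np.stack([np.exp(-((wl - c) / 0.12) ** 2) for c in (0.8, 0.5, 0.2)])

# Principal-component basis of the population.
mu = pop.mean(axis=1, keepdims=True)
B = np.linalg.svd(pop - mu, full_matrices=False)[0][:, :3]    # top 3 PCs

def estimate_reflectance(rgb):
    # Solve for the three PCA weights that reproduce the camera response,
    # then reconstruct the full spectrum: r ≈ mu + B @ w.
    w = np.linalg.solve(S @ B, rgb - (S @ mu).ravel())
    return mu.ravel() + B @ w
```

The estimate is only as good as the population the basis comes from, which is exactly why the paper adapts the population per pixel toward spectra of similar color.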