• Title/Summary/Keyword: Multiple camera

Search results: 531

Camera Imaging Lens Fabrication using Wafer-Scale UV Embossing Process

  • Jeong, Ho-Seop;Kim, Sung-Hwa;Shin, Dong-Ik;Lee, Seok-Cheon;Jin, Young-Su;Noh, Jung-Eun;Oh, Hye-Ran;Lee, Ki-Un;Song, Seok-Ho;Park, Woo-Je
    • Journal of the Optical Society of Korea
    • /
    • v.10 no.3
    • /
    • pp.124-129
    • /
    • 2006
  • We have developed a compact and cost-effective camera module based on wafer-scale-replica processing. A multiple-layered structure of several aspheric lenses in a mobile-phone camera module is first assembled by bonding multiple glass wafers on which 2-dimensional replica arrays of identical aspheric lenses are UV-embossed, followed by dicing the stacked wafers and packaging them with image sensor chips. This wafer-scale processing leads to at least 95% yield in mass production, and potentially to a very slim phone with a camera module less than 2 mm thick. We have demonstrated a VGA camera module fabricated by the wafer-scale-replica processing with various UV-curable polymers having refractive indices between 1.4 and 1.6, and with three different glass wafers of which both surfaces are embossed as aspheric lenses having 230 μm sag height and aspheric coefficients of the lens polynomials up to tenth order. We have found that precise compensation of the shrinkage of the polymer materials is one of the key technical challenges in achieving a higher resolution in wafer-scale lenses for mobile-phone camera modules.
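The abstract mentions aspheric surfaces described by polynomials up to tenth order and the need to pre-compensate polymer shrinkage. A minimal sketch of both ideas (the even-aspheric sag formula is standard optics; the first-order shrinkage pre-compensation and all names here are illustrative assumptions, not the authors' model):

```python
import math

def aspheric_sag(r, R, k, coeffs):
    """Even-aspheric sag z(r): conic term plus even polynomial terms r^4..r^10.
    R is the vertex radius of curvature, k the conic constant."""
    conic = r**2 / (R * (1 + math.sqrt(1 - (1 + k) * r**2 / R**2)))
    poly = sum(a * r**(2 * (i + 2)) for i, a in enumerate(coeffs))
    return conic + poly

def precompensate(target_sag, shrinkage):
    """Scale the mold sag so that, after cure shrinkage of the UV polymer,
    the replicated lens reaches the target sag (first-order approximation)."""
    return target_sag / (1.0 - shrinkage)
```

For example, reaching a 230 μm sag with 5% shrinkage would require a mold sag of roughly 242 μm under this first-order model.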

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.1-7
    • /
    • 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data is then regularized and provided as a depth image. This depth image is combined with the stereo or multi-view image to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected due to the technical limitations of the TOF depth camera. The corrected depth image is combined with the color image in various ways, and we then obtain the high-resolution depth of the scene. In this paper, we introduce the principle of and various techniques for sensor fusion for high-quality depth generation using multiple cameras together with depth cameras.
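One common way to combine a low-resolution TOF depth image with a high-resolution color image is joint bilateral upsampling; the abstract surveys several fusion methods without committing to one, so the following is only an illustrative sketch (function name and parameters are hypothetical):

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Upsample a low-res TOF depth map to the color-camera resolution,
    using the high-res color image to preserve depth edges."""
    H, W = color_hi.shape[:2]
    h, w = depth_lo.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = y * h / H, x * w / W      # fractional low-res coordinate
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(ly) + dy, int(lx) + dx
                    if 0 <= qy < h and 0 <= qx < w:
                        # spatial weight in the low-res grid
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s**2))
                        # range weight from the high-res color difference
                        cy = min(H - 1, int(qy * H / h))
                        cx = min(W - 1, int(qx * W / w))
                        diff = color_hi[y, x] - color_hi[cy, cx]
                        wr = np.exp(-float(np.dot(diff, diff)) / (2 * sigma_r**2))
                        acc += ws * wr * depth_lo[qy, qx]
                        wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else depth_lo[int(ly), int(lx)]
    return out
```

The double loop is written for clarity, not speed; a practical implementation would vectorize or run on the GPU.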

Camera Tracking Method based on Model with Multiple Planes (다수의 평면을 가지는 모델기반 카메라 추적방법)

  • Lee, In-Pyo;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.11 no.4
    • /
    • pp.143-149
    • /
    • 2011
  • This paper presents a novel camera tracking method based on a model with multiple planes. The proposed algorithm detects a QR code, one of the most popular types of two-dimensional barcodes. A 3D model is imported from the detected QR code for the augmented reality application. Based on the geometric properties of the model, the vertices are detected and tracked using optical flow. A clipping algorithm is applied to identify each plane from the model surfaces. The proposed method estimates the homography from coplanar feature correspondences, which is used to obtain the initial camera motion parameters. After deriving a linear equation from many feature points on the model and their 3D information, we employ the DLT (Direct Linear Transform) method to compute the camera information. In the final step, the errors of the camera poses in every frame are minimized with a local bundle adjustment algorithm in real time.
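The homography-from-coplanar-correspondences step mentioned above can be sketched with the standard DLT formulation (this is the textbook method, not necessarily the authors' exact implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H (dst ~ H @ src) from >= 4 coplanar
    point pairs: stack two linear rows per correspondence and take the
    null vector of A via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # right singular vector of smallest singular value
    return H / H[2, 2]
```

In practice the input points would be normalized first (Hartley normalization) for numerical stability.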

Development of an intelligent camera for multiple body temperature detection (다중 체온 감지용 지능형 카메라 개발)

  • Lee, Su-In;Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.430-436
    • /
    • 2022
  • In this paper, we propose an intelligent camera for multiple body temperature detection. The proposed camera is composed of an optical camera (4056×3040) and a thermal camera (640×480), and it detects abnormal symptoms by analyzing a person's facial expression and body temperature from the acquired images. The optical and thermal cameras operate simultaneously; an object is detected in the optical image, from which the facial region and expression analysis are computed. Additionally, the coordinate values of the facial region computed from the optical image are applied to the thermal image, and the maximum temperature is measured from that region and displayed on the screen. Abnormal symptom detection is determined using the three analyzed facial expressions (neutral, happy, sad) and the body temperature values. To evaluate the performance of the proposed camera, the optical image processing part is tested on the Caltech, WIDER FACE, and CK+ datasets for three algorithms (object detection, facial region detection, and expression analysis). Experimental results show accuracy scores of 91%, 91%, and 84%, respectively.
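The step of transferring the optical-image facial region to the thermal image and reading the maximum temperature can be sketched as follows. The proportional scaling below assumes aligned fields of view, which is a simplification of a real optical-thermal calibration, and all names are hypothetical:

```python
import numpy as np

def face_max_temp(bbox_opt, opt_size, thermal):
    """Map a face bounding box (x, y, w, h) from optical-image coordinates
    to the thermal image by proportional scaling, then return the maximum
    temperature inside the mapped region."""
    x, y, w, h = bbox_opt
    ow, oh = opt_size                  # optical resolution, e.g. (4056, 3040)
    th, tw = thermal.shape             # thermal resolution, e.g. (480, 640)
    sx, sy = tw / ow, th / oh
    x0, y0 = int(x * sx), int(y * sy)
    x1, y1 = int((x + w) * sx), int((y + h) * sy)
    region = thermal[y0:max(y1, y0 + 1), x0:max(x1, x0 + 1)]
    return float(region.max())
```

A deployed system would instead use a calibrated homography between the two sensors, since their optical centers and fields of view differ.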

Position estimation of welding panels for sub-assembly welding line in shipbuilding using camera vision system (조선 소조립 용접자동화의 부재위치 인식을 위한 camera vision system)

  • 전바롬;윤재웅;고국원;조형석
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.361-364
    • /
    • 1997
  • There has been a demand to automate the welding process in shipyards due to its dependence on skilled operators and the harsh working environment. In response, a multiple-robot welding system for the sub-assembly welding line has been developed, realized, and installed at Keoje Shipyard. To realize an automatic welding system, the robots have to be equipped with a sensing system that recognizes the position of the welding panels. In this research, a camera vision system is developed to detect the position of base panels for the sub-assembly line in shipbuilding. Two camera vision systems are used in two different stages (mounting and welding) to automate the recognition and positioning of welding lines. For automatic recognition of the panel position, various image processing algorithms are proposed in this paper.


A 3D Foot Scanner Using Mirrors and Single Camera (거울 및 단일 카메라를 이용한 3차원 발 스캐너)

  • Chung, Seong-Youb;Park, Sang-Kun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.16 no.1
    • /
    • pp.11-20
    • /
    • 2011
  • A structured beam laser is often used to scan an object and build a 3D model. Multiple cameras are normally required to see occluded areas, which is the main reason for the high price of such scanners. In this paper, a low-cost 3D foot scanner is developed using one camera and two mirrors. The camera and the two mirrors are located below and above the foot, respectively. The occluded area, the top of the foot, is reflected by the mirrors, so the camera measures 3D point data of the bottom and top of the foot at the same time. The whole foot model is then reconstructed after a symmetrical transformation of the data reflected by the mirrors. The reliability of the scan data depends on the accuracy of the parameters between the camera and the laser, so a calibration method is also proposed and verified by experiments. The experiments show that the worst-case errors of the system are 2 mm along the x, y, and z directions.
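The symmetrical transformation that maps mirror-reflected point data back to the true foot surface is a reflection across the mirror plane; a minimal sketch (plane parameters and names are illustrative assumptions):

```python
import numpy as np

def reflect_points(points, plane_n, plane_d):
    """Reflect 3D points measured via a mirror back to their true positions.
    The mirror plane is n . p + d = 0 with normal n; each virtual point is
    mirrored across the plane."""
    n = np.asarray(plane_n, float)
    n = n / np.linalg.norm(n)
    pts = np.asarray(points, float)
    dist = pts @ n + plane_d           # signed distance to the mirror plane
    return pts - 2.0 * dist[:, None] * n
```

For instance, with a horizontal mirror at z = 1, a virtual point at (0, 0, 3) maps back to (0, 0, −1), while a point on the plane is unchanged.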

Intelligent Composition of CG and Dynamic Scene (CG와 동영상의 지적합성)

  • 박종일;정경훈;박경세;송재극
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1995.06a
    • /
    • pp.77-81
    • /
    • 1995
  • Video composition integrates multiple image materials into one scene. It considerably enhances the degree of freedom in producing various scenes. However, we need to adjust the viewing points and image planes of the image materials for high-quality video composition. In this paper, we propose an intelligent video composition technique concentrating on the composition of CG and real scenes. We first model the camera system: the projection is assumed to be perspective, and the camera motion is assumed to be 3D rotational and 3D translational. Then, we automatically extract the camera parameters of this model from the real scene using a dedicated algorithm. After that, the CG scene is generated according to the camera parameters of the real scene, and finally the two are composed into one scene. Experimental results justify the validity of the proposed method.
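Once the camera parameters are extracted from the real scene, the CG scene is rendered under the same perspective model; the projection step can be sketched as follows (a generic pinhole model, not the paper's exact parameterization):

```python
import numpy as np

def project(points3d, K, R, t):
    """Perspective projection of 3D CG points with camera parameters
    (intrinsics K, rotation R, translation t) estimated from the real
    scene, so the rendered CG aligns with the footage."""
    P = np.asarray(points3d, float) @ R.T + t   # world -> camera coordinates
    uvw = P @ K.T                               # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]             # perspective divide
```

Rendering the CG with the recovered K, R, t per frame is what keeps the composite consistent as the real camera moves.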

Development of Multi-Camera based Mobile Mapping System for HD Map Production (정밀지도 구축을 위한 다중카메라기반 모바일매핑시스템 개발)

  • Hong, Ju Seok;Shin, Jin Soo;Shin, Dae Man
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.587-598
    • /
    • 2021
  • This study aims to develop a multi-camera based MMS (Mobile Mapping System) technology for building and quickly updating an HD (High Definition) map for autonomous driving. To replace expensive lidar sensors and reduce long processing times, we develop a low-cost and efficient MMS by applying multiple cameras and real-time data pre-processing. To this end, we developed multi-camera storage technology, multi-camera time synchronization technology, and an MMS prototype. We developed a storage module for real-time JPG compression of the high-speed images acquired from multiple cameras, and developed an event-signal and GNSS (Global Navigation Satellite System) time-server based synchronization method to record the exposure times of the multiple images taken in real time. Based on the requirements of each sector, the MMS was designed and prototypes were produced. Finally, to verify the performance of the manufactured multi-camera based MMS, data were acquired on an actual 1,000 km road and a quantitative evaluation was performed. As a result, the time synchronization error was less than 1/1000 second, and the position accuracy of the point cloud obtained through SFM (Structure from Motion) image processing was around 5 cm. These results show that the multi-camera based MMS technology developed in this study satisfies the criteria for building an HD map.
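The reported sub-millisecond synchronization can be checked per trigger event by comparing the exposure timestamps recorded for all cameras; a trivial sketch (names and tolerance handling are hypothetical):

```python
def sync_check(exposure_times_s, tolerance_s=0.001):
    """Worst-case synchronization error among cameras for one trigger
    event: the spread between the earliest and latest recorded exposure
    times, compared against the 1/1000 s requirement."""
    spread = max(exposure_times_s) - min(exposure_times_s)
    return spread <= tolerance_s, spread
```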

Real-Time Augmented Reality on 3-D Mobile Display using Stereo Camera Tracking (스테레오 카메라 추적을 이용한 모바일 3차원 디스플레이 상의 실시간 증강현실)

  • Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.362-371
    • /
    • 2013
  • This paper presents a framework for real-time augmented reality on a 3-D mobile display using stereo camera tracking. In the framework, camera poses are jointly estimated with the geometric relationship between the stereoscopic images, based on model-based tracking. With the estimated camera poses, virtual contents are correctly augmented on the stereoscopic images through image rectification. For real-time performance, stereo camera tracking and image rectification are performed efficiently using multiple threads, and image rectification and color conversion are accelerated with GPU processing. The proposed framework is tested and demonstrated on a commercial smartphone equipped with a stereoscopic camera and a parallax-barrier 3-D display.

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.1202-1205
    • /
    • 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and dressing colors. The method does not require a constrained target pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to verify that it is a human and to identify who it is among the registered persons in the database. To segment the moving target from the background scene, we employ a version of the background subtraction technique and some spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify whether the detected target is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as the head, torso, and legs. The whole-body silhouette is also analyzed to obtain the target's shape information, such as height and slimness. We then use these multiple cues (at present, shirt color, trousers color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned before, but the system fails when a person wears new clothes, which means height alone is not enough to classify persons. We plan to extend the work by adding more cues, such as skin color and face recognition, by utilizing the zoom capability of the camera to obtain a high-resolution view of the face, and then to evaluate the system with more subjects.
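The background subtraction step (the abstract only says "a version of" the technique) could be as simple as a running-average background model with thresholding; an illustrative sketch, not the authors' exact variant:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: blend the current frame into the
    background estimate with learning rate alpha."""
    return (1 - alpha) * bg + alpha * frame.astype(float)

def foreground_mask(bg, frame, thresh=25):
    """Mark as foreground every pixel whose absolute difference from the
    background estimate exceeds the threshold."""
    return np.abs(frame.astype(float) - bg) > thresh
```

The resulting mask would then be cleaned by the spatial filtering mentioned in the abstract before alignment with the cardboard model.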
