• Title/Summary/Keyword: one camera

Search results: 1,583

Effect of Die Bonding Epoxy on the Warpage and Optical Performance of Mobile Phone Camera Packages (모바일 폰 카메라 패키지의 다이 본딩 에폭시가 Warpage와 광학성능에 미치는 영향 분석)

  • Son, Sukwoo;Kihm, Hagyong;Yang, Ho Soon
    • Journal of the Semiconductor & Display Technology / v.15 no.4 / pp.1-9 / 2016
  • Warpage in mobile phone camera packages occurs due to the CTE (Coefficient of Thermal Expansion) mismatch between a thin silicon die and a substrate. In optical instruments such as camera modules, warpage affects the field curvature, which is one of the factors degrading optical performance and product yield. In this paper, we studied the effect of die bonding epoxy on the warpage and optical performance of mobile phone camera packages. We calculated the warpage of the camera module packages using finite element analysis, and the resulting shapes agreed well, showing parabolic curvature. We also measured the warpage and through-focus MTF of camera module specimens experimentally. In both the finite element analysis and the experiments, the warpage was reduced when an epoxy with low elastic modulus was used, and the MTF performance increased accordingly. The results show that the die bonding epoxy affects the warpage generated on the image sensor during the packaging process, and this warpage in turn affects the optical performance through the field curvature.
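As a rough illustration of the warpage-to-field-curvature link (an assumed parameterization, not taken from the paper), a parabolic warpage profile acts as a field-dependent defocus of the image sensor:

```latex
% Hedged sketch: assumed parabolic warpage with peak sag w_0 over a sensor of
% half-diagonal r_max; the local sag acts as a field-dependent defocus, which
% is what shifts the off-axis through-focus MTF.
w(r) = w_0 \left( \frac{r}{r_{\max}} \right)^{2},
\qquad
\Delta z_{\mathrm{defocus}}(r) \approx w(r)
```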

Camera Identification of DIBR-based Stereoscopic Image using Sensor Pattern Noise (센서패턴잡음을 이용한 DIBR 기반 입체영상의 카메라 판별)

  • Lee, Jun-Hee
    • Journal of the Korea Institute of Military Science and Technology / v.19 no.1 / pp.66-75 / 2016
  • A stereoscopic image generated by depth image-based rendering (DIBR) for surveillance robots and cameras is well suited to low-bandwidth networks. The image is critical data for a commander's decision-making, so its integrity has to be guaranteed. One way to detect manipulation is to check whether the stereoscopic image was taken by the original camera. Sensor pattern noise (SPN), widely used for camera identification, cannot be applied directly to a stereoscopic image because of the stereo warping in DIBR. To solve this problem, we find the shifted object in the stereoscopic image and relocate it to its original location in the center image. Then the similarity between the SPNs extracted from the stereoscopic image and from the original camera is measured over the object area only. In this way, the source camera can be determined.
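A minimal sketch of the SPN comparison step, assuming a Gaussian filter as the denoiser, normalized cross-correlation as the similarity measure, and an arbitrary decision threshold (the paper's exact denoising filter and object-relocation step are not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Approximate the sensor pattern noise as the image minus a denoised version."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def ncc(a, b):
    """Normalized cross-correlation between two residual patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_camera(reference_spn, rendered_view, object_mask, threshold=0.01):
    """Compare the camera's reference SPN fingerprint with the residual of the
    DIBR-rendered view, restricted to the (already relocated) object area."""
    residual = noise_residual(rendered_view)
    score = ncc(reference_spn[object_mask], residual[object_mask])
    return score > threshold, score
```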

A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek;Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.143-148 / 2022
  • Estimating the camera's location by analyzing images taken continuously with a mobile smartphone or a robot's monocular camera is very important for metaverse, mobile robot, and user location services. So far, PnP-related techniques have been applied to calculate the position. In this paper, the camera's direction of motion is obtained using the essential matrix of the epipolar geometry between successive images, and the camera's successive positions are calculated through geometric equations. A new estimation method is proposed, and its accuracy is verified through simulation. This method is completely different from existing methods and can be applied even when only one or more matching feature points exist across two or more images.
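A minimal sketch of the motion-direction step using OpenCV's essential-matrix routines (a hypothetical helper, not the paper's implementation; the translation is recovered only up to scale, which is why the paper adds its own geometric equations to accumulate the position):

```python
import cv2
import numpy as np

def motion_direction(pts_prev, pts_curr, K):
    """Estimate rotation and unit translation direction between two frames.

    pts_prev, pts_curr: Nx2 float arrays of matched feature points.
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    # t is defined only up to scale; return it as a unit direction vector.
    return R, t / (np.linalg.norm(t) + 1e-12)
```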

Study on Effective Visual Surveillance System using Dual-mode(Fixed+Pan/Tilt/Zoom) Camera (듀얼 모드(고정형+PTZ 카메라) 감시 카메라를 이용한 효과적인 화상 감시 시스템에 관한 연구)

  • Kim, Gi-Seok;Lee, Saac;Park, Jong-Seop;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.18 no.7 / pp.650-657 / 2012
  • An effective dual-mode camera system (a passive wide-angle camera and a pan-tilt-zoom camera) is proposed in order to improve the performance of visual surveillance. The fixed wide-angle camera is used to monitor large open areas, but moving objects in its images are too small to view in detail. The PTZ camera can extend the monitored area and enhance image quality by tracking and zooming in on a specific moving target, but its FOV (Field of View) is limited while zoomed in. Therefore, the wide-angle and PTZ cameras are complementary and should cooperate. In this paper, we propose an automatic initial set-up algorithm and a coordinate transform method from the wide-angle camera coordinates to the PTZ coordinates, both of which are necessary to achieve this cooperation. The automatic initial set-up algorithm synchronizes the views of the two cameras. When a moving object appears in the image plane of the wide-angle camera after the initial set-up, the coordinates obtained from the wide-angle camera are transformed into PTZ values using the coordinate transform method. We also develop the PTZ control method. Various indoor and outdoor experiments show that the proposed dual-camera system is feasible for effective visual surveillance.
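The wide-angle-to-PTZ hand-off can be sketched as below, under simplifying assumptions not taken from the paper: the wide-angle camera is modeled as an ideal pinhole, both cameras share roughly the same optical center, and pan0/tilt0 are the offsets found during the initial set-up.

```python
import numpy as np

def pixel_to_pan_tilt(u, v, K, pan0=0.0, tilt0=0.0):
    """Map a pixel (u, v) in the wide-angle image to pan/tilt angles in degrees.

    K is the wide-angle camera's 3x3 intrinsic matrix; pan0/tilt0 are the PTZ
    offsets obtained when the two views are synchronized at set-up time.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pan = np.degrees(np.arctan2(u - cx, fx)) + pan0
    tilt = -np.degrees(np.arctan2(v - cy, fy)) + tilt0
    return pan, tilt
```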

The estimation of camera calibration parameters using the properties of vanishing point at the paved and unpaved road (무한원점의 성질을 이용한 포장 및 비포장 도로에서의 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Jeong, Myeong-Hee;Rho, Do-Whan
    • Proceedings of the KIEE Conference / 2006.10c / pp.178-180 / 2006
  • In general, camera calibration must be performed before the position and orientation of an object can be estimated accurately with a camera. An autonomous land system that drives a vehicle autonomously needs a camera calibration method that works with the vehicle's camera in various road environments. Camera calibration establishes the correspondence between three-dimensional space and the image plane, that is, it finds the camera calibration parameters. In this work, the camera calibration parameters are estimated on both a paved road and an unpaved road. The proposed algorithm detects the road through image processing of images of the paved and unpaved roads. On the paved road, edges can be detected easily because lane markings are present. Two image processing methods are used: on the paved road, the image is segmented using opening, dilation, and erosion; on the unpaved road, edges are detected using blurring and sharpening. The Hough transform is then used to detect the correct straight lines, because it produces less error than the least-squares method. In addition, the principle of the vanishing point (point at infinity) is used; the proposed algorithm calibrates the camera using the Hough transform and the vanishing point. When the algorithm was applied, the estimated focal length was about 10.7 mm and the RMS errors of rotation were 0.10913 and 0.11476, which are comparatively stable ranges. This shows that the algorithm can be applied to camera calibration on paved and unpaved roads.
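A minimal sketch of the lane-detection and vanishing-point step, assuming Canny edges as input, OpenCV's probabilistic Hough transform, and a least-squares intersection of the detected lines (the paper's focal-length and rotation estimation equations are not reproduced):

```python
import cv2
import numpy as np

def vanishing_point(edge_img):
    """Estimate the vanishing point as the least-squares intersection of Hough lines."""
    lines = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    A, rhs = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        # Line through (x1, y1)-(x2, y2) in implicit form a*x + b*y = c.
        a_i, b_i = y2 - y1, x1 - x2
        A.append([a_i, b_i])
        rhs.append(a_i * x1 + b_i * y1)
    vp, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float), rcond=None)
    return vp  # (x, y) of the estimated vanishing point in the image
```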

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.3 / pp.191-204 / 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be used for mapping applications (e.g., mobile mapping and mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. Its major advantage, compared to available multi-camera system calibration approaches, is the flexibility to be applied to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly developed indirectly geo-referenced multi-camera system.
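The idea of defining the mounting parameters relative to a reference camera can be sketched as follows, under one common pose convention (an illustrative parameterization, not necessarily the paper's):

```python
import numpy as np

def nonreference_pose(R_ref, C_ref, dR, lever_arm):
    """Compose the pose of a non-reference camera from the reference camera pose
    and the system mounting parameters (boresight rotation + lever arm).

    R_ref : 3x3 rotation mapping world -> reference-camera frame.
    C_ref : reference camera position in world coordinates.
    dR    : 3x3 boresight rotation, reference-camera frame -> other-camera frame.
    lever_arm : offset of the other camera, expressed in the reference-camera frame.
    """
    R_cam = dR @ R_ref                    # world -> other-camera rotation
    C_cam = C_ref + R_ref.T @ lever_arm   # other-camera position in world frame
    return R_cam, C_cam
```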

A Calibration Algorithm Using Known Angle (각도 정보를 이용한 카메라 보정 알고리듬)

  • 권인소;하종은
    • Journal of Institute of Control, Robotics and Systems / v.10 no.5 / pp.415-420 / 2004
  • We present a new algorithm for calibrating a camera and recovering 3D scene structure, up to a scale factor, from image sequences using known angles between lines in the scene. Traditional calibration methods based on scene constraints require several different types of constraints because of their stratified approach. The proposed method requires only one type of scene constraint, a known angle, and it directly recovers metric structure, up to an unknown scale, from the projective structure. Specifically, we recover the homography between the projective structure and the Euclidean structure using the angles. Since this matrix is unique for the given set of image sequences, we can easily handle varying intrinsic camera parameters. Experimental results on synthetic and real images demonstrate the feasibility of the proposed algorithm.
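One standard way a known angle enters calibration (textbook projective geometry, not quoted from the paper) is through the image of the absolute conic:

```latex
% Standard relation between a known angle \theta between two scene lines and their
% vanishing points v_1, v_2 (homogeneous image points); \omega = (KK^\top)^{-1} is
% the image of the absolute conic, which encodes the intrinsic parameters K.
\cos\theta =
\frac{v_1^\top \omega \, v_2}
     {\sqrt{v_1^\top \omega \, v_1}\,\sqrt{v_2^\top \omega \, v_2}}
```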

Depth error correction for maladjusted stereo cameras with the calibrated pixel distance parameter (화소간격 파라미터 교정에 의한 비정렬 스테레오 카메라의 거리오차 보정)

  • 김종만;손홍락;김성중
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1996.10b / pp.268-272 / 1996
  • The error correction effect for maladjusted stereo cameras with a calibrated pixel distance parameter is presented. Camera calibration is a necessary procedure for stereo-vision-based depth computation: intrinsic and extrinsic parameters must be obtained experimentally to determine the relation between the image and world coordinate systems. One difficulty is aligning the cameras for parallel installation, i.e., placing the two CCD arrays in a single plane; no effective method for such alignment has been presented before, so some amount of depth error caused by the non-parallel installation of the cameras is inevitable. If the pixel distance parameter, which is one of the intrinsic parameters, is calibrated using known points, this error can be compensated to some extent. The error compensation effect obtained with the calibrated pixel distance parameter is demonstrated with experimental results.
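The role of the pixel distance (pixel pitch) parameter can be illustrated with the standard parallel-stereo depth relation (illustrative, not the paper's derivation):

```latex
% Parallel-stereo depth with focal length f, baseline B, disparity d in pixels,
% and pixel distance (pitch) p. Since Z is inversely proportional to p, a relative
% error in the pixel distance parameter maps one-to-one into a relative depth error,
% which is why calibrating p against known points compensates the depth error.
Z = \frac{fB}{p\,d},
\qquad
\frac{\partial Z}{Z} = -\frac{\partial p}{p}
```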

A Wafer Pre-Alignment System Using One Image of a Whole Wafer (하나의 웨이퍼 전체 영상을 이용한 웨이퍼 Pre-Alignment 시스템)

  • Koo, Ja-Myoung;Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.9 no.3 / pp.47-51 / 2010
  • This paper presents a wafer pre-alignment system that is improved by using a single image of the entire wafer area. In the previous method, image acquisition takes about 80% of the total pre-alignment time. The proposed system uses only one image of the entire wafer area captured by a high-resolution CMOS camera, so image acquisition accounts for only about 1% of the total process time. However, the larger FOV (field of view) needed to image the entire wafer area worsens camera lens distortion. Camera calibration using high-order polynomials is therefore used for accurate lens distortion correction, and template matching is used to find the correct notch position. The performance of the proposed system was demonstrated by experiments on wafer center alignment and notch alignment.
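A minimal sketch of the two steps named in the abstract, assuming a purely radial high-order polynomial distortion model and OpenCV template matching (hypothetical names and coefficients, not the paper's implementation):

```python
import cv2
import numpy as np

def correct_radial_distortion(points, center, coeffs):
    """Undistort 2D points with a high-order radial polynomial model.

    points : Nx2 array of distorted pixel coordinates.
    center : (cx, cy) distortion center.
    coeffs : (k1, k2, ...) polynomial coefficients fitted from calibration points.
    """
    d = points - np.asarray(center, float)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    scale = 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(coeffs))
    return np.asarray(center, float) + d * scale

def find_notch(wafer_img, notch_template):
    """Locate the wafer notch with normalized cross-correlation template matching."""
    result = cv2.matchTemplate(wafer_img, notch_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # top-left corner of the best match
```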

Analysis of Determinants Influencing User Satisfaction for Augmented Reality(AR) Camera Application: Focusing on Naver's Service (증강현실 기반 카메라 애플리케이션 서비스 만족도 영향 요인들에 대한 고찰: 네이버 <스노우> 서비스를 중심으로)

  • Lee, Sungjoon
    • The Journal of the Korea Contents Association / v.20 no.7 / pp.417-428 / 2020
  • Augmented reality (AR) services, one of the representative technologies of the fourth industrial revolution era, have attracted increasing attention. This study examined the determinants affecting user satisfaction with an AR-based camera application service, focusing on Naver's <SNOW> service. Several factors that might influence satisfaction with the AR-based camera application service were derived from the previous literature, and their influence was tested empirically. Responses from a sample of 312 people who had experience using Naver's service were collected through an online survey and analyzed with a hierarchical regression analysis. The results indicate that gender (a demographic variable), perceived interactivity, aesthetic value, trend-following motivation (an extrinsic motivation), and playfulness (an intrinsic motivation) have significant influences on satisfaction with the AR-based camera application service. These results are meaningful because they can serve as references for AR service providers developing more satisfactory AR services.
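The hierarchical (block-wise) regression described above can be sketched with statsmodels, using hypothetical column names standing in for the survey measures (the file name and variables are assumptions, not the paper's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names for the survey measures; the actual items and scales
# come from the paper's questionnaire of 312 respondents.
df = pd.read_csv("snow_survey.csv")  # assumed data file

block1 = smf.ols("satisfaction ~ gender + age", data=df).fit()
block2 = smf.ols("satisfaction ~ gender + age + interactivity + aesthetic_value",
                 data=df).fit()
block3 = smf.ols("satisfaction ~ gender + age + interactivity + aesthetic_value"
                 " + trend_following + playfulness", data=df).fit()

# Incremental explained variance contributed by each block.
print(block1.rsquared,
      block2.rsquared - block1.rsquared,
      block3.rsquared - block2.rsquared)
```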