• Title/Abstract/Keyword: camera image

Search Results: 4,917, Processing Time: 0.03 seconds

A Wafer Pre-Alignment System Using One Image of a Whole Wafer (하나의 웨이퍼 전체 영상을 이용한 웨이퍼 Pre-Alignment 시스템)

  • Koo, Ja-Myoung; Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.9 no.3 / pp.47-51 / 2010
  • This paper presents a wafer pre-alignment system that is improved by using a single image of the entire wafer area. In the previous method, image acquisition takes about 80% of the total pre-alignment time. The proposed system acquires only one image of the entire wafer area through a high-resolution CMOS camera, so image acquisition accounts for only about 1% of the total process time. The larger field of view (FOV) required to image the entire wafer aggravates camera lens distortion, so camera calibration with high-order polynomials is used for accurate lens distortion correction, and template matching is used to locate the notch position. The performance of the proposed system was demonstrated by experiments on wafer center alignment and notch alignment.
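
As an illustration of the correction-plus-matching idea (not the authors' code), the sketch below applies a radial polynomial remap and then locates the notch by normalized cross-correlation; the polynomial coefficients are placeholder values that would come from an offline calibration.

```python
import cv2
import numpy as np

# Placeholder high-order polynomial (low-to-high order): identity plus a small
# high-order radial correction; real coefficients come from calibration.
poly_coeffs = np.array([0.0, 1.0, 0.0, -2.4e-9, 1.1e-13])

def undistort_radial(img, cx, cy, coeffs):
    """Remap an image using a radial polynomial correction about (cx, cy)."""
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    scale = np.polyval(coeffs[::-1], r) / np.maximum(r, 1e-6)  # r_corrected / r
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

def find_notch(wafer_img, notch_template):
    """Locate the notch with normalized cross-correlation template matching."""
    result = cv2.matchTemplate(wafer_img, notch_template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)  # best match for TM_CCOEFF_NORMED
    return top_left, score
```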

Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation (열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템)

  • Seunghyeon Lee; Taejoo Kim; Yukyung Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.48-52 / 2023
  • Many studies have been conducted to make Visual Place Recognition reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are strongly affected by illumination changes. In this paper, we therefore use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, instead of an RGB camera, since it is more reliable in low-light and highly noisy conditions. Although the sensor is an LWIR camera, the thermal image is translated into a pseudo-RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline by about 0.19 in recall.
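
The retrieval step can be pictured with a minimal sketch, assuming a thermal-to-RGB translation model and a global image descriptor network are available; both `translate_fn` and `describe_fn` below are placeholders, not the authors' models.

```python
import numpy as np

def recognize_place(thermal_img, db_descriptors, db_labels,
                    translate_fn, describe_fn):
    """Translate an LWIR frame to pseudo-RGB, then retrieve the nearest place."""
    pseudo_rgb = translate_fn(thermal_img)   # e.g. a learned image-translation model
    query = describe_fn(pseudo_rgb)          # global descriptor, shape (D,)
    query = query / np.linalg.norm(query)
    sims = db_descriptors @ query            # cosine similarity; DB rows pre-normalized
    best = int(np.argmax(sims))
    return db_labels[best], float(sims[best])
```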

Verification of Camera-Image-Based Target-Tracking Algorithm for Mobile Surveillance Robot Using Virtual Simulation (가상 시뮬레이션을 이용한 기동형 경계 로봇의 영상 기반 목표추적 알고리즘 검증)

  • Lee, Dong-Youm; Seo, Bong-Cheol; Kim, Sung-Soo; Park, Sung-Ho
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.11 / pp.1463-1471 / 2012
  • In this study, a 3-axis camera system design is proposed for application to an existing 2-axis surveillance robot, together with a camera-image-based target-tracking algorithm for this robot. In the algorithm, the heading direction vector of the camera system is obtained from the position error between the center of the viewfinder and the center of the object in the camera image. From this heading direction vector, the desired pan and tilt angles for target tracking and the desired roll angle for stabilizing the camera image are obtained through inverse kinematics. The algorithm was validated using a virtual simulation model based on MATLAB and ADAMS by checking that the robot follows the target motion and by examining the virtual image error of the viewfinder.
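
The core geometric step can be illustrated with a short sketch that converts the pixel error between the viewfinder center and the target into desired pan/tilt angles under a pinhole model; the focal length and sign conventions are assumptions, not taken from the paper.

```python
import math

def desired_pan_tilt(target_px, image_size, focal_length_px):
    """Return (pan, tilt) in radians that re-center the target in the image."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex = target_px[0] - cx            # horizontal pixel error
    ey = target_px[1] - cy            # vertical pixel error
    pan = math.atan2(ex, focal_length_px)    # positive error -> pan toward the target
    tilt = -math.atan2(ey, focal_length_px)  # image y grows downward
    return pan, tilt

# Example: a 1280x720 image, 1000 px focal length, target detected at (800, 300)
print(desired_pan_tilt((800, 300), (1280, 720), 1000.0))
```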

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2013.10a / pp.763-764 / 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It adds the depth information of a scene to the conventional color image, so that the full information of real life that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system uses the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The proposed optical resonator enables capturing of a full-HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture that captures 14 Mp color and full-HD depth images simultaneously (Figures 2 and 3). The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, the 3D camera system prototype, and the signal processing algorithms.
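
The TOF depth computation behind such a sensor can be sketched with the common four-phase demodulation formula; the 20 MHz modulation frequency matches the abstract, while the four-bucket scheme itself is a generic illustration rather than the authors' signal processing.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency from the abstract, Hz

def tof_depth(q0, q90, q180, q270):
    """Per-pixel depth (m) from four phase-shifted correlation samples."""
    phase = np.arctan2(q90 - q270, q0 - q180)   # in (-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)          # wrap to [0, 2*pi)
    # Unambiguous up to c / (2 * F_MOD) ~= 7.5 m at 20 MHz.
    return (C * phase) / (4.0 * np.pi * F_MOD)
```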

Vision-based Camera Localization using DEM and Mountain Image (DEM과 산영상을 이용한 비전기반 카메라 위치인식)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.177-186 / 2005
  • In this paper, we propose a vision-based camera localization technique that uses 3D information created by mapping a DEM onto a mountain image. Typical image features used for localization have drawbacks: they vary with the camera viewpoint, and the amount of information grows over time. We instead extract geometric invariant features that are independent of the camera viewpoint and estimate the camera extrinsic parameters through accurate corresponding-point matching based on a proposed similarity evaluation function and a Graham search method. We also propose a 3D information creation method that uses graph theory and visual clues. The proposed method has three stages: invariant point-feature vector extraction, 3D information creation, and camera extrinsic parameter estimation. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate its superiority.
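
The final pose step can be illustrated with a standard PnP solve from 2D-3D correspondences; this is a stand-in sketch and does not reproduce the paper's similarity evaluation function or Graham search matching, and the intrinsic matrix K is assumed to be known.

```python
import cv2
import numpy as np

def estimate_extrinsics(points_3d, points_2d, K):
    """points_3d: (N,3) DEM-derived points; points_2d: (N,2) matched pixels."""
    dist = np.zeros(5)  # assume lens distortion is already corrected
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from rotation vector
    return R, tvec               # camera extrinsic parameters
```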

Real-Time Augmented Reality on 3-D Mobile Display using Stereo Camera Tracking (스테레오 카메라 추적을 이용한 모바일 3차원 디스플레이 상의 실시간 증강현실)

  • Park, Jungsik; Seo, Byung-Kuk; Park, Jong-Il
    • Journal of Broadcast Engineering / v.18 no.3 / pp.362-371 / 2013
  • This paper presents a framework for real-time augmented reality on a 3-D mobile display using stereo camera tracking. In the framework, camera poses are jointly estimated with the geometric relationship between the stereoscopic images, based on model-based tracking. With the estimated camera poses, virtual contents are correctly augmented on the stereoscopic images through image rectification. For real-time performance, stereo camera tracking and image rectification are efficiently performed using multiple threads, and image rectification and color conversion are accelerated with GPU processing. The proposed framework is tested and demonstrated on a commercial smartphone equipped with a stereoscopic camera and a parallax-barrier 3-D display.
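
The rectification step can be sketched with OpenCV's stereo rectification, assuming the stereo calibration (intrinsics K1/K2, distortion D1/D2, relative pose R/T) is already known; this is an illustration, not the paper's GPU-accelerated pipeline.

```python
import cv2

def build_rectification_maps(K1, D1, K2, D2, R, T, image_size):
    """Precompute remap tables that row-align the left/right images."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
    return map_l, map_r

def rectify_pair(left, right, map_l, map_r):
    """Apply the precomputed maps to a stereoscopic frame pair."""
    left_r = cv2.remap(left, map_l[0], map_l[1], cv2.INTER_LINEAR)
    right_r = cv2.remap(right, map_r[0], map_r[1], cv2.INTER_LINEAR)
    return left_r, right_r
```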

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme in active vision systems is gaze fixing of a camera, that is, steering the camera's orientation so that a given point on the object always stays at the center of the image. This requires combining a function that analyzes the image data with a function that controls the camera's orientation. This paper presents a gaze-fixing algorithm in which image analysis and orientation control are designed within a single framework. To avoid implementation difficulties and to target real-time applications, the algorithm is designed as a simple closed form that uses no information related to camera calibration or structure estimation.
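
A rough sketch of the gaze-fixing idea, in the spirit of image-based visual servoing: drive the pan/tilt rates proportionally to the pixel error so the tracked point is pushed toward the image center. The gain and sign conventions are assumptions, not the paper's closed-form law.

```python
def gaze_fix_step(target_px, image_size, gain=0.002):
    """Return (pan_rate, tilt_rate) commands in rad/s from the pixel error."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex = target_px[0] - cx
    ey = target_px[1] - cy
    pan_rate = gain * ex     # pan toward the target (sign depends on mount convention)
    tilt_rate = -gain * ey   # image y grows downward
    return pan_rate, tilt_rate
```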

Distortion Corrected Black and White Document Image Generation Based on Camera (카메라기반의 왜곡이 보정된 흑백 문서 영상 생성)

  • Kim, Jin-Ho
    • The Journal of the Korea Contents Association / v.15 no.11 / pp.18-26 / 2015
  • Document images captured by a camera instead of a scanner can contain geometric distortion and shadow effects caused by the capturing angle. In this paper, an algorithm for generating a clean black-and-white document image through camera-based distortion correction and shadow elimination is proposed. To correct geometric distortion, that is, to straighten boundary lines bent by camera lens radial distortion and to remove the outlying area included by the camera direction, a document boundary detection method based on a second-derivative filter is developed. Black-and-white images are then generated by an adaptive binarization method that removes the shadow effect. Experiments on document images captured with a smartphone camera show that the algorithm recovers the geometric distortion, eliminates shadows, and produces very good results.
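
Two of the output-side steps can be sketched as follows: rectify the detected document quadrilateral with a perspective warp, then binarize with a locally adaptive threshold that tolerates soft shadows. Corner detection itself (the paper's second-derivative boundary method) is assumed to have been done already, and the output size is an arbitrary A4-like choice.

```python
import cv2
import numpy as np

def flatten_and_binarize(image, corners, out_w=1240, out_h=1754):
    """corners: (4,2) float32 array ordered TL, TR, BR, BL."""
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(image, H, (out_w, out_h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    # Local mean thresholding tolerates slow illumination/shadow gradients.
    bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 15)
    return bw
```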

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data is then regularized and provided as a depth image, which is used with the stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is combined with the color images in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of and various techniques for sensor fusion for high-quality depth generation using multi-view cameras together with depth cameras.
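
One ingredient of such fusion can be sketched by re-projecting each TOF depth pixel into the high-resolution color camera using calibrated intrinsics and extrinsics, which yields a sparse depth map for later refinement; the matrix names are generic assumptions, not the paper's notation.

```python
import numpy as np

def project_depth_to_color(depth, K_tof, K_color, R, t, color_size):
    """depth: (H,W) metric TOF depth; returns a sparse depth map in the color view."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Back-project TOF pixels to 3D rays, scale by depth, move to the color frame.
    rays = np.linalg.inv(K_tof) @ np.vstack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts = (R @ (rays * z) + t.reshape(3, 1))[:, valid]
    # Project into the color image and keep points that land inside it.
    proj = K_color @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    sparse = np.zeros(color_size[::-1])   # (H_color, W_color), zeros = no sample
    inside = (u >= 0) & (u < color_size[0]) & (v >= 0) & (v < color_size[1])
    sparse[v[inside], u[inside]] = pts[2, inside]
    return sparse
```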

Moving Object Detection with Rotating Camera Based on Edge Segment Matching (이동카메라 환경에서의 에지 세그먼트 정합을 통한 이동물체 검출)

  • Lee, June-Hyung; Chae, Ok-Sam
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.1-12 / 2008
  • This paper presents an automatic moving-object detection method for a rotating camera, which covers a larger area with a single camera. The proposed method is based on edge segment matching, which is robust to dynamic environments with illumination changes and background movement. The algorithm consists of an edge-segment-based background panorama generation method that minimizes distortion due to image stitching, a registration method using the Generalized Hough Transform that reliably aligns the current image to the panorama despite the stitching distortions, and a moving edge segment extraction method that overcomes viewpoint differences and distortion. Experimental results show that the proposed method correctly detects moving objects under illumination changes and camera vibration.
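
The register-then-compare idea can be sketched as follows, with ORB feature matching and a RANSAC homography standing in for the paper's Generalized Hough Transform registration; it is an illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def detect_moving_edges(frame_gray, panorama_gray, panorama_edges):
    """Align the frame to the background panorama, then keep unmatched edges."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(frame_gray, None)
    k2, d2 = orb.detectAndCompute(panorama_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Warp the frame's edge map into panorama coordinates and suppress
    # anything that coincides with (slightly dilated) background edges.
    frame_edges = cv2.Canny(frame_gray, 50, 150)
    warped = cv2.warpPerspective(frame_edges, H,
                                 (panorama_gray.shape[1], panorama_gray.shape[0]))
    dilated_bg = cv2.dilate(panorama_edges, np.ones((5, 5), np.uint8))
    moving = cv2.bitwise_and(warped, cv2.bitwise_not(dilated_bg))
    return moving
```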
