• Title/Summary/Keyword: one camera

Search Results: 1,583

Uncooled Microbolometer FPA Sensor with Wafer-Level Vacuum Packaging (웨이퍼 레벨 진공 패키징 비냉각형 마이크로볼로미터 열화상 센서 개발)

  • Ahn, Misook;Han, Yong-Hee
    • Journal of Sensor Science and Technology / v.27 no.5 / pp.300-305 / 2018
  • The uncooled microbolometer thermal sensor was designed for low-cost, high-volume production, targeting new infrared markets such as smart devices, automotive, and energy management. The microbolometer sensor features a low-resolution 80x60 pixel format and enables the use of wafer-level vacuum packaging (WLVP) technology. The read-out IC (ROIC) implements infrared signal detection and offset correction for fixed pattern noise (FPN) using an internal digital-to-analog converter (DAC) value control function. A reliable WLVP thermal sensor was obtained through the design of the lid wafer, the formation of Au(80 wt%)Sn(20 wt%) eutectic solder, outgassing control, and the wafer-to-wafer bonding conditions. Measuring the thermal conductance allows the internal atmosphere of the WLVP microbolometer sensor to be inspected. The difference between the measured value and the designed one is $3.6 \times 10^{-9}$ W/K, which indicates that the thermal loss is mainly due to the floating legs. The mean time to failure (MTTF) of the WLVP thermal sensor is estimated to be about 10.2 years at a 95% confidence level. Reliability tests such as high/low temperature, bump, and vibration tests were also conducted, and the devices were found to work properly after the accelerated stress tests. A thermal camera combined with a visible camera was developed; it supports non-contact temperature measurement and provides an image that merges the thermal image and the visible image.
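
The abstract's closing point, a thermal camera that merges the thermal and visible images for non-contact temperature measurement, can be illustrated with a minimal overlay sketch. The sketch below assumes the two frames are already spatially registered, uses OpenCV, and is not the authors' actual pipeline; the file names and blend weight are hypothetical.

```python
# Minimal sketch of thermal/visible image fusion (illustrative, not the
# paper's implementation). Assumes the two frames are already registered.
import cv2
import numpy as np

def fuse_thermal_visible(visible_bgr, thermal_gray, alpha=0.5):
    """Blend a false-color thermal frame onto the visible frame."""
    h, w = visible_bgr.shape[:2]
    # Upscale the low-resolution thermal frame (e.g. 80x60) to the visible size.
    thermal_resized = cv2.resize(thermal_gray, (w, h), interpolation=cv2.INTER_CUBIC)
    # Normalize to 8 bits and apply a false-color map.
    thermal_8u = cv2.normalize(thermal_resized, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    thermal_color = cv2.applyColorMap(thermal_8u, cv2.COLORMAP_JET)
    # Weighted overlay of the thermal image on the visible image.
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, thermal_color, alpha, 0.0)

# Hypothetical usage with placeholder file names:
# visible = cv2.imread("visible.png")
# thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
# fused = fuse_thermal_visible(visible, thermal)
```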

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.716-725 / 2007
  • In this paper, we describe a motion tracking algorithm for 3-D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at some position and then separate again, an adaptive algorithm is presented to recognize which is the left one and which is the right one. Real motion takes place in 3-D coordinates, whereas a mono image provides only 2-D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3-D motion data such as left-right motion and the distance of objects from the camera. Transforming to 3-D coordinates requires a depth value in addition to the x- and y-axis coordinates of the mono image; this depth value (z-axis) is calculated from the stereo disparity using only the end effectors in the images. The positions of the inner joints are then calculated, and a 3-D character can be visualized using inverse kinematics.
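
The depth step described above (z from stereo disparity, computed only for the end effectors) is standard triangulation for a rectified stereo pair. A minimal sketch follows; the focal length, baseline, and principal point values are illustrative assumptions, not figures from the paper.

```python
# Depth-from-disparity sketch for a rectified stereo pair.
# Assumes pinhole cameras with focal length f (pixels), baseline B (meters),
# and principal point (cx, cy); these values are illustrative.

def reconstruct_3d(x_left, y_left, x_right, f=800.0, B=0.12, cx=320.0, cy=240.0):
    """Return the (X, Y, Z) position of a matched end-effector blob."""
    disparity = x_left - x_right          # pixels; must be > 0 for a valid match
    if disparity <= 0:
        raise ValueError("non-positive disparity: point is not triangulable")
    Z = f * B / disparity                 # depth along the optical axis
    X = (x_left - cx) * Z / f             # back-project the image coordinates
    Y = (y_left - cy) * Z / f
    return X, Y, Z

# Example: a hand blob seen at x=400 in the left image and x=380 in the right.
print(reconstruct_3d(400.0, 250.0, 380.0))   # -> (X, Y, Z) in meters
```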


Real-time Zoom Tracking for DM36x-based IP Network Camera

  • Cong, Bui Duy;Seol, Tae In;Chung, Sun-Tae;Kang, HoSeok;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.16 no.11 / pp.1261-1271 / 2013
  • Zoom tracking is the automatic adjustment of the focus motor in response to zoom motor movements so that an object of interest stays in focus. It is typically achieved by moving the zoom and focus motors in a zoom lens module so as to follow the so-called "trace curve", which gives the in-focus focus motor positions versus the zoom motor positions for a specific object distance. One can therefore implement zoom tracking simply by following the closest trace curve, once all the trace curve data are stored in memory. However, this approach is often prohibitive in practice because of its large memory requirement. Many other zoom tracking methods, such as GZT and AZT, have been proposed to avoid the large memory requirement, but at the cost of degraded performance. In this paper, we propose a new zoom tracking method for a DM36x-based IP network camera, called the Approximate Feedback Zoom Tracking (AFZT) method, which does not need large memory because it approximates nearby trace curves, yet achieves better zoom tracking accuracy than GZT or AZT by utilizing the focus value as feedback information. Experiments on a real implementation show that the proposed method improves the tracking performance and works in real time.
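
To make the trace-curve idea concrete, the sketch below stores a few trace curves and approximates the in-focus position for an intermediate object distance by linear interpolation between the two nearest curves. It only illustrates the curve-following/approximation step; the focus-value feedback that distinguishes AFZT is omitted, and all numbers are made up.

```python
# Sketch of zoom tracking by interpolating between stored trace curves.
# trace_curves maps an object distance (m) to the in-focus motor positions
# indexed by zoom motor step; the numbers below are made up.
import bisect

trace_curves = {
    1.0:  [100, 140, 200, 290, 410],   # focus positions for zoom steps 0..4
    3.0:  [ 90, 120, 170, 240, 330],
    10.0: [ 85, 110, 150, 205, 270],
}

def zoom_track(zoom_step, object_distance):
    """Approximate the in-focus position for a zoom step and object distance."""
    distances = sorted(trace_curves)
    # Clamp outside the stored range, otherwise interpolate linearly
    # between the two nearest trace curves.
    if object_distance <= distances[0]:
        return trace_curves[distances[0]][zoom_step]
    if object_distance >= distances[-1]:
        return trace_curves[distances[-1]][zoom_step]
    hi = bisect.bisect_left(distances, object_distance)
    d0, d1 = distances[hi - 1], distances[hi]
    f0, f1 = trace_curves[d0][zoom_step], trace_curves[d1][zoom_step]
    t = (object_distance - d0) / (d1 - d0)
    return round(f0 + t * (f1 - f0))

print(zoom_track(zoom_step=2, object_distance=5.0))  # between the 3 m and 10 m curves
```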

Development of Road-Following Controller for Autonomous Vehicle using Relative Similarity Modular Network (상대분할 신경회로망에 의한 자율주행차량 도로추적 제어기의 개발)

  • Ryoo, Young-Jae;Lim, Young-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.5 no.5 / pp.550-557 / 1999
  • This paper describes a road-following controller for an autonomous vehicle using the proposed neural network. Road-following with a visual sensor such as a camera requires an intelligent control algorithm because the relation between the road image and the steering control is complex to analyze. The proposed neural network, the relative similarity modular network (RSMN), is composed of several learning networks and a partitioning network. The partitioning network divides the input space into multiple sections according to the similarity of the input data. Because each section contains similar input patterns, the RSMN can easily learn a nonlinear relation such as visual road-following. Visual control uses two criteria taken from the camera road image: the position of the vanishing point of the road and the slope of the vanishing line of the road. The neural network controller takes these two criteria as inputs and outputs a steering angle. To confirm the performance of the proposed controller, software was developed to simulate vehicle dynamics, camera image generation, visual control, and road-following. A prototype autonomous electric vehicle was also built, and the usefulness of the controller was verified by physical driving tests.
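
The modular structure described above, a partitioning step that routes each input to a section of similar patterns and a per-section learner that maps the two visual criteria to a steering angle, can be sketched as follows. The nearest-centroid routing and linear per-section models here are simple stand-ins, not the paper's RSMN; all numeric values are illustrative.

```python
# Sketch of a relative-similarity modular controller: route each input to the
# section whose centroid it is most similar to, then apply that section's model.
# Centroids and per-section linear models are illustrative stand-ins.
import numpy as np

section_centroids = np.array([   # (vanishing-point x offset, vanishing-line slope)
    [-0.5, -0.3],                # road curving left
    [ 0.0,  0.0],                # straight road
    [ 0.5,  0.3],                # road curving right
])

# Per-section linear maps: steering = w @ [vp_offset, slope] + b  (degrees)
section_weights = np.array([[20.0, 10.0], [25.0, 12.0], [20.0, 10.0]])
section_bias = np.array([-2.0, 0.0, 2.0])

def steering_angle(vp_offset, slope):
    """Pick the most similar section, then apply its local model."""
    x = np.array([vp_offset, slope])
    section = int(np.argmin(np.linalg.norm(section_centroids - x, axis=1)))
    return float(section_weights[section] @ x + section_bias[section])

print(steering_angle(vp_offset=0.4, slope=0.25))   # steer right for a right curve
```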


A Design of Real-time Automatic Focusing System for Digital Still Camera Using the Passive Sensor Error Minimization (수동 센서의 오차 최소화를 이용한 실시간 DSC 자동초점 시스템 설계)

  • Kim, Geun-Seop;Kim, Deok-Yeong;Kim, Seong-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.5 / pp.203-211 / 2002
  • In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time, adjusting the focus after measuring the distance to the object with a passive sensor, which differs from the typical method. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using the EV (Exposure Value) calculated from the CCD luminance signal. Moreover, the system adopts an auxiliary light source for focusing in completely dark conditions, which are very difficult for CCD image processing. Since this is an open-loop system that adjusts the focus immediately after the distance measurement, real-time operation is guaranteed. The performance of the new AF system was verified by comparing the focus value curve obtained from the AF experiments with the one measured by MF (Manual Focusing); in both cases, an edge detector was applied to various objects and backgrounds.
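
The focus value curves used for verification are typically an edge-energy measure computed over the image, consistent with the edge detector mentioned above. A minimal sketch of such a metric, assuming OpenCV and a grayscale frame, is given below; it illustrates only the evaluation metric, not the paper's open-loop distance-to-focus mapping.

```python
# Sketch of a focus value metric based on edge energy (Sobel gradients).
# A sharper image yields a larger value; useful for comparing AF vs MF curves.
import cv2
import numpy as np

def focus_value(gray):
    """Sum of squared gradient magnitudes over the frame."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.sum(gx * gx + gy * gy))

# Hypothetical usage: evaluate the focus value at each focus motor position.
# curve = [focus_value(capture_frame(pos)) for pos in focus_positions]
```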

A Real-time Detection Method for the Driving Direction Points of a Low Speed Processor (저 사양 프로세서를 위한 실시간 주행 방향점 검출 기법)

  • Hong, Yeonggi;Park, Jungkil;Lee, Sungmin;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems / v.20 no.9 / pp.950-956 / 2014
  • In this paper, a real-time method for detecting the DDP (Driving Direction Point) is proposed so that an unmanned vehicle can safely follow the center of the road. Since the DDP is defined as the center point between the two lanes, the lanes are first detected using a web camera. For robust lane detection, binary thresholding and labeling are applied to the color camera image as preprocessing. From the preprocessed image, the lanes are detected, taking intrinsic characteristics of the lane, such as its width, into consideration. If both lanes are detected, the DDP can be obtained directly from the preprocessed image; if only one lane is detected, the DDP is obtained from the inverse perspective image to guarantee reliability. To verify the proposed method, several DDP detection experiments were carried out using a four-wheeled vehicle (ERP-42) equipped with a web camera.
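
A minimal sketch of the two-lane case described above: threshold the frame, label the blobs, keep the two largest lane candidates, and take the midpoint of their centroids as the DDP. The threshold and minimum-area values are illustrative, and the single-lane inverse-perspective fallback is only indicated by a comment.

```python
# Sketch of DDP extraction: threshold the frame, label lane-like blobs, and
# take the midpoint between the two lane centroids. Parameter values are
# illustrative; the paper's inverse-perspective fallback is omitted here.
import cv2
import numpy as np

def detect_ddp(bgr_frame, min_area=500):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)   # bright lane marks
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Keep the two largest blobs (candidate left/right lanes), skipping background.
    blobs = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA], reverse=True)
    lanes = [i for i in blobs if stats[i, cv2.CC_STAT_AREA] >= min_area][:2]
    if len(lanes) < 2:
        return None                       # fall back (e.g. inverse perspective) here
    (x1, y1), (x2, y2) = centroids[lanes[0]], centroids[lanes[1]]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)   # driving direction point
```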

A Model Plane Photographing System and Information Collection for Facilities (모형비행기를 이용한 항공사진촬영과 시설물 정보의 수집)

  • 김병국;유동훈
    • Spatial Information Research / v.6 no.1 / pp.1-10 / 1998
  • The need for aerial photographs is increasing for small-area development tasks such as facility management, site planning, and residential planning. It is not easy, however, to take aerial photographs from an aircraft for metric photogrammetry because of the strict flight and photography regulations in Korea, as well as the cost. As an efficient way to obtain large-scale aerial photographs, we investigated photographing with a remote-controlled model plane (RC plane) carrying a lightweight non-metric camera. We examined the principles of RC planes, assembled an RC plane, and performed test photography. Although we obtained reasonably good stereo pairs of the ground and facilities using the RC plane, many problems remain to be solved, such as the difficulty of controlling the RC plane, camera focusing, and the accumulation of dust on the camera lens.


Focal Reducer for CQUEAN

  • Lim, Ju-Hee;Chang, Seung-Hyuk;Kim, Young-Ju;Kim, Jin-Young;Park, Won-Kee;Im, Myung-Shin;Pak, Soo-Jong
    • The Bulletin of The Korean Astronomical Society / v.35 no.2 / pp.62.2-62.2 / 2010
  • CQUEAN (Camera for QUasars in EArly uNiverse) is an optical CCD camera optimized for the observation of high-redshift QSOs in order to understand the nature of the early universe. The focal reducer, composed of four spherical lenses, secures a wider field of view for CQUEAN by reducing the focal length of the system by one third. We designed the lens configuration, the lens barrel, and the adapters needed to attach the focal reducer to the CCD camera system, and performed a tolerance analysis using ZEMAX. The manufacturing of the focal reducer system and the laboratory tests of its optical performance have been completed; the performance meets the original requirements when aberration and alignment errors are taken into account. We successfully attached the focal reducer and CQUEAN to the Cassegrain focus of the 2.1 m telescope at McDonald Observatory, USA, and carried out several tests of the CQUEAN system. In this presentation, I will show the process of focal reducer fabrication and the results of the performance tests.
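
As a quick check of why a shorter focal length widens the field of view: for a detector of linear size d at focal length f, the field of view is roughly 2·arctan(d/2f), so it scales almost inversely with f for small angles. The sketch below compares the field of view before and after a focal reduction, taking "reduced by one third" at face value; the detector size and focal length are hypothetical, not CQUEAN's specifications.

```python
# Back-of-the-envelope field-of-view comparison before/after a focal reducer.
# Detector size and focal lengths are illustrative, not CQUEAN's actual values.
import math

def fov_deg(detector_mm, focal_mm):
    return math.degrees(2.0 * math.atan(detector_mm / (2.0 * focal_mm)))

detector = 13.3                       # mm, hypothetical CCD side length
f_original = 28000.0                  # mm, hypothetical telescope focal length
f_reduced = f_original * 2.0 / 3.0    # "reduced by one third", per the abstract

print(f"FOV without reducer: {fov_deg(detector, f_original) * 60:.2f} arcmin")
print(f"FOV with reducer:    {fov_deg(detector, f_reduced) * 60:.2f} arcmin")
```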


The contour extraction algorithm of a moving object using a CCD camera (CCD Camera를 이용한 이동체의 궤적 추출 알고리즘)

  • Lim Cheong;Kim Yong-Deak
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.2 s.302 / pp.81-86 / 2005
  • It is not easy to find and extract a moving object from its background, and extraction methods tend to be specific to the object and its environment. A more general method that is less affected by environmental conditions is therefore required. In this paper, we report on a moving-object extraction algorithm that uses the features of the interlaced image-capturing method adopted in CCD cameras, the afterimage produced over the exposure time, and the fact that the afterimage has the same color level. Unlike many existing algorithms, this algorithm requires only one still picture.
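
The interlacing feature exploited above can be sketched simply: the odd and even fields of a single interlaced frame are exposed about half a frame period apart, so differencing the two fields flags pixels that moved within that interval. The fixed threshold and field alignment below are simplified stand-ins, not the paper's afterimage-based algorithm.

```python
# Sketch: split one interlaced frame into its odd/even fields and difference
# them to flag motion. Threshold and field alignment are simplified stand-ins.
import numpy as np

def motion_mask_from_interlaced(frame_gray, threshold=20):
    """frame_gray: 2-D uint8 array holding a single interlaced frame."""
    even_field = frame_gray[0::2, :].astype(np.int16)   # captured at time t
    odd_field = frame_gray[1::2, :].astype(np.int16)    # captured ~1/60 s later
    rows = min(even_field.shape[0], odd_field.shape[0])
    diff = np.abs(even_field[:rows] - odd_field[:rows])
    return diff > threshold      # True where the object moved between fields

# Hypothetical usage with a placeholder file name:
# mask = motion_mask_from_interlaced(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
```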

Gamma Camera Based FDG PET in Oncology

  • Park, Chan-Hui
    • Proceedings of the Korean Society of Nuclear Medicine (대한핵의학회 학술대회논문집) / 2002.05a / pp.45-53 / 2002
  • Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures have been oncological applications. I firmly believe that, in the 21st century, one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into two categories: conventional (c) PET and gamma camera based (CB) PET. CBPET is more readily available to many medical centers, since it utilizes dual-head gamma cameras and commercially available FDG at low cost to patients; in fact, there are more CBPET systems in operation than cPET systems in the USA. CBPET is inferior to cPET in performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms, and equipment. At Ajou University Hospital, CBPET was installed in late 1997, for the first time in Korea and in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience with FDG CBPET in oncology.
