• Title/Summary/Keyword: Frame Camera


Measurement of RHEED Intensity Oscillation Using a CCD Camera (C.C.D Camera를 이용한 RHEED Intensity Oscillation 측정)

  • 김재훈;민항기;김재성
    • Proceedings of the Korean Vacuum Society Conference / 1994.02a / pp.122-123 / 1994
  • RHEED patterns were observed with a CCD camera, and the CCD output signal was digitized with a frame grabber. Image-processing software was developed to measure the intensity of a desired spot from the digitized RHEED pattern [Fig. 1]. In particular, when the intensity oscillation of a RHEED diffraction spot is measured to monitor thin-film growth, real-time measurement is required, which demands very fast data acquisition and display. Fig. 2 shows AlGaAs/GaAs multilayer RHEED oscillations measured in real time with software developed to meet these requirements. The experiment is very simple, requires no special care, and lets the measured RHEED spot be watched by eye at the same time, so the state of the experiment is easier to monitor. Moreover, because data acquisition, data analysis, and data display can all be done easily and cheaply on a single computer, this setup can replace the conventional, cumbersome arrangement of optical fibers, photodiodes, and an X-Y recorder.
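The spot-integration step the abstract describes (summing grey levels in a window around the diffraction spot, once per digitized frame) can be sketched as follows; the frame format and window size here are illustrative assumptions, not the paper's actual software:

```python
def spot_intensity(frame, cx, cy, half):
    """Sum pixel grey levels in a square window centred on a diffraction spot.

    frame: 2-D list of grey levels; (cx, cy): spot centre; half: half-width.
    """
    total = 0
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            total += frame[y][x]
    return total

# One intensity sample per digitized frame yields the oscillation trace.
frames = [[[10, 10], [10, 10]],
          [[30, 30], [30, 30]]]          # two tiny 2x2 "frames" for illustration
trace = [spot_intensity(f, 0, 0, 0) for f in frames]
```

Plotting `trace` against frame number gives the intensity-oscillation curve used to count deposited monolayers.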


Simultaneous Measurement of Velocity and Concentration Field in a Stirred Mixer Using PIV/LIF Technique (PIV/LIF기법에 의한 교반혼합기 내의 속도장과 농도장 동시 측정)

  • Jeong, Eun-Ho;Yoon, Sang-Youl;Kim, Kyung-Chun
    • Transactions of the Korean Society of Mechanical Engineers B / v.27 no.4 / pp.504-510 / 2003
  • Simultaneous measurements of the turbulent velocity and concentration fields in a stirred mixer tank are carried out using the PIV/LIF technique. Instantaneous velocity fields are measured with a 1K$\times$1K CCD camera adopting the frame-straddle method, while the concentration fields are obtained by measuring the fluorescence intensity of a Rhodamine B tracer excited by the second pulse of an Nd:YAG laser. Image distortion due to the camera view angle is compensated by a mapping function. It is found that the general features of the mixing pattern depend strongly on the local flow characteristics during the rapid decay of mean concentration. However, small-scale mixing appears to be independent of the local turbulent velocity fluctuation.
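The PIV step cross-correlates interrogation windows between the two straddled frames to find the particle displacement. A minimal 1-D sketch of that correlation search (the window contents are made up for illustration; real PIV works on 2-D windows with sub-pixel peak fitting):

```python
def displacement_1d(a, b, max_shift):
    """Estimate the shift of signal b relative to a by direct cross-correlation,
    the core idea behind a PIV interrogation window (1-D here for brevity)."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(a[i] * b[i + s]
                    for i in range(len(a))
                    if 0 <= i + s < len(b))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

a = [0, 0, 5, 9, 5, 0, 0, 0]          # particle image in frame 1
b = [0, 0, 0, 0, 5, 9, 5, 0]          # same pattern moved 2 samples right
shift = displacement_1d(a, b, 3)      # -> 2
```

Dividing the shift by the frame-straddle time gives the local velocity.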

Design of a High-Speed Camera Using FPGA (FPGA를 이용한 고속카메라 시스템 구현)

  • Park, Sei-Hun;Shin, Yun-Soo;Oh, Tea-Seok;Kim, Il-Hwan
    • Proceedings of the KIEE Conference / 2008.07a / pp.1935-1936 / 2008
  • This paper describes the implementation of a high-speed camera that uses a CMOS image sensor to acquire high-speed images. Compared with CCD image sensors, CMOS image sensors consume less power and, because peripheral circuitry can be integrated on-chip, allow a more compact design. High-speed cameras are widely used in the automotive field (crash tests, airbag control), in sports (golf-swing correction), and in defense (ballistic trajectory measurement). The system implemented here acquires about 500 frames per second at a resolution of 1280 × 1024 using a CMOS image sensor. It consists of an FPGA and DDR2 memory to control the sensor and store the acquired images, a Camera Link module to transfer the stored data to a PC, and an RS-422 communication interface so that the camera can be controlled from the PC.
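The quoted figures imply a substantial sustained data rate, which is why the design buffers frames in DDR2 behind the FPGA before sending them over Camera Link. A back-of-envelope check (the abstract does not state the pixel depth; 8-bit monochrome is assumed here):

```python
# Sustained data rate for 1280 x 1024 pixels at ~500 frames/s.
width, height, fps = 1280, 1024, 500
bytes_per_pixel = 1                      # assumption: 8-bit monochrome
rate = width * height * fps * bytes_per_pixel
print(f"{rate / 1e6:.1f} MB/s")          # ~655.4 MB/s to be buffered
```

At roughly 655 MB/s, the stream exceeds what a 2008-era PC interface could ingest directly, motivating the on-board DDR2 store-and-forward architecture.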


Realtime Implementation Method for Perspective Distortion Correction (원근 왜곡 보정의 실시간 구현 방법)

  • Lee, Dong-Seok;Kim, Nam-Gyu;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.20 no.4 / pp.606-613 / 2017
  • When a planar area is captured by a depth camera, the shape of the plane in the captured image suffers perspective projection distortion that depends on the camera position. The distorted image can be corrected using the depth information of the plane in the captured area. Previous depth-based perspective distortion correction methods fail to meet real-time requirements because of their large amount of computation. In this paper, we propose a method that applies a conversion table selectively, based on the measured motion of the plane, and performs the correction by parallel processing. With the proposed method, the correction system processes a distorted 640x480 image in 22.52 ms per frame, satisfying the real-time requirement.
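The conversion-table idea is that the expensive inverse transform is computed once per plane pose, after which every frame is corrected by pure lookups (which also parallelize trivially). A minimal sketch, assuming a hypothetical `mapping` function standing in for the paper's depth-based inverse perspective transform:

```python
def build_table(mapping, w, h):
    """Precompute, once per plane pose, the source pixel for every output pixel.
    `mapping` stands in for the depth-based inverse transform (hypothetical)."""
    return [[mapping(x, y) for x in range(w)] for y in range(h)]

def correct(img, table):
    """Per-frame correction is then a pure table lookup (trivially parallel)."""
    return [[img[sy][sx] for (sx, sy) in row] for row in table]

# Toy example: a 2x2 image and a "transform" that mirrors it horizontally.
img = [[1, 2],
       [3, 4]]
table = build_table(lambda x, y: (1 - x, y), 2, 2)
out = correct(img, table)   # [[2, 1], [4, 3]]
```

Rebuilding the table only when the plane's measured motion exceeds a threshold is what lets the per-frame cost stay at lookup speed.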

Automation for Oyster Hinge Breaking System

  • So, J.D.;Wheaton, F.W.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1996.06c / pp.658-667 / 1996
  • A computer vision system was developed to automatically detect and locate the oyster hinge line, one step in shucking an oyster. The system consisted of a personal computer, a color frame grabber, a color CCD video camera with a zoom lens, two video monitors, a specially designed fixture to hold the oyster, a lighting system to illuminate the oyster, and the system software. The software combined commercially available programs with custom programs developed in Microsoft C. Test results showed that image resolution was the most important variable influencing hinge detection efficiency. Whether the trimmed-off flat white surface area was dry or wet, the oyster size relative to the selected image size, and the image-processing methods used all influenced the hinge-locating efficiency. The best combination of computer software and hardware successfully located 97% of the oyster hinge lines tested. This efficiency was achieved using a camera field of view of 1.9 by 1.5 cm, a 180 by 170 pixel image window, and a dry trimmed-off oyster hinge end surface.


A Study for AGV Steering Control using Evolution Strategy (진화전략 알고리즘을 이용한 AGV 조향제어에 관한 연구)

  • 이진우;손주한;최성욱;이영진;이권순
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2000.10a / pp.149-149 / 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image-processing part, which measures the state of the guideline and the AGV; the other obtains the reference steering angle from the image-processing results. First, the two-dimensional image information from the vision sensor is converted into three-dimensional information using the angle and position of the CCD camera. Through this process, the AGV knows its driving conditions. The AGV then calculates a reference steering angle that changes with its speed: at low speed it focuses on the left/right error of the guideline, and as the speed increases it focuses on the slope of the guideline. Finally, we model the above as a PID controller and adjust its coefficients according to the speed of the AGV.
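The speed-dependent weighting of the two guideline measurements, feeding a PID loop, can be sketched as follows; the linear blend and the specific gains are illustrative assumptions, not the paper's tuned controller:

```python
def steering_error(lateral_error, slope_error, speed, v_max):
    """Blend the two guideline measurements by speed, as the abstract describes:
    weight the lateral offset at low speed, the guideline slope at high speed."""
    w = min(max(speed / v_max, 0.0), 1.0)   # 0 at standstill, 1 at top speed
    return (1.0 - w) * lateral_error + w * slope_error

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error, dt):
        """One control update: proportional + integral + derivative terms."""
        self.integral += error * dt
        deriv = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# At low speed the blended error follows the lateral offset almost entirely.
e = steering_error(lateral_error=2.0, slope_error=0.5, speed=0.1, v_max=1.0)
angle = PID(kp=1.0, ki=0.0, kd=0.0).step(e, dt=0.1)
```

Scheduling the PID gains on speed (as the paper does with an evolution strategy) would replace the fixed `kp`, `ki`, `kd` above with speed-indexed values.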


The Position Tracking Algorithm of Moving Viewer's Two-Eyes (움직이는 관찰자의 두 눈 위치 검출 알고리즘)

  • Huh, Kyung-Moo;Park, Young-Bin
    • Journal of Institute of Control, Robotics and Systems / v.6 no.7 / pp.544-550 / 2000
  • Among the several types of 3D display methods, the autostereoscopic method has the advantage that a 3D image can be enjoyed without any additional device, but it has the disadvantage of a narrow viewing zone, so a moving viewer cannot see the 3D image continuously. This disadvantage can be overcome by detecting the viewer's positional movement through head tracking. In this paper, we suggest a method of detecting the positions of a moving viewer's two eyes using images obtained through a color CCD camera. The suggested method consists of a preprocessing process and an eye-detection process. In experiments with the suggested method, we found the accurate two-eye positions in 78 of 80 sample input images of 8 different men, with a processing speed of 0.39 seconds per frame on a personal computer.


Simultaneous Measurement of Velocity and Concentration Field in a Stirred Mixer Using PIV/LIF Technique and POD Analysis (PIV/LIF에 의한 교반혼합기 유동의 난류 속도/농도장 측정 및 POD해석)

  • Jeong Eun-Ho;Yoon Sang-Youl;Kim Kyung-Chun
    • Proceedings of the Korean Society of Visualization Conference / 2002.11a / pp.101-104 / 2002
  • Simultaneous measurement of the turbulent velocity and concentration fields in a stirred mixer tank is carried out using the PIV/LIF technique. Instantaneous velocity fields are measured by a $1K\times1K$ CCD camera adopting the frame-straddle method, while the concentration fields are obtained by measuring the fluorescence intensity of a Rhodamine B tracer excited by the second pulse of an Nd:YAG laser. Image distortion due to the camera view angle is compensated by a mapping function. It is found that the general features of the mixing pattern depend strongly on the local flow characteristics during the rapid decay of mean concentration. However, small-scale mixing appears to be independent of the local turbulent velocity fluctuation.


A STUDY ON PUPIL DETECTION AND TRACKING METHODS BASED ON IMAGE DATA ANALYSIS

  • CHOI, HANA;GIM, MINJUNG;YOON, SANGWON
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.4 / pp.327-336 / 2021
  • In this paper, we introduce image-processing methods for remote pupillary light reflex measurement using video taken by an ordinary smartphone camera, without a special device such as an infrared camera. We propose an algorithm that estimates the size of the pupil as it changes with light, using image data analysis without a learning process. In addition, we present the results of visualizing the change in pupil size after removing noise from the pupil-size measurements recorded for each frame of the video. We expect this study to contribute to building an objective indicator for remote pupillary light reflex measurement, now that non-face-to-face communication has become common due to COVID-19 and demand for remote diagnosis is increasing.
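The per-frame noise removal on the recorded pupil-size trace can be sketched with a centered moving average; the window size and the radii values are illustrative assumptions (the paper's actual filter is not specified in the abstract):

```python
def smooth(series, k):
    """Centered moving average to suppress per-frame measurement noise
    in a pupil-size trace (window size k is an illustrative choice)."""
    half = k // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

radii = [4.0, 4.2, 9.0, 4.1, 4.0]     # one spiky outlier frame (hypothetical)
smoothed = smooth(radii, 3)           # outlier pulled toward its neighbours
```

Plotting `smoothed` against frame index visualizes the light-reflex contraction without the frame-to-frame jitter.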

Combining an Edge-Based Method and a Direct Method for Robust 3D Object Tracking

  • Lomaliza, Jean-Pierre;Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.2 / pp.167-177 / 2021
  • In the field of augmented reality, edge-based methods have been popular for tracking textureless 3D objects. However, edge-based methods are inherently vulnerable to cluttered backgrounds. Another way to track textureless or poorly textured 3D objects is to directly align the image intensity of the 3D object between consecutive frames. Although such direct methods enable more reliable and stable tracking than local features such as edges, they are more sensitive to occlusion and less accurate than edge-based methods. We therefore propose a method that combines an edge-based method and a direct method to leverage the advantages of each approach. Experimental results show that the proposed method is robust to both fast camera (or object) movement and occlusion while still working in real time at a frame rate of 18 Hz. The tracking success rate and tracking accuracy were improved by up to 84% and 1.4 pixels, respectively, compared with using the edge-based method or the direct method alone.
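One simple way to leverage two complementary trackers is a confidence-weighted blend of their estimates. The sketch below is only an illustration of that idea on plain 2-D translations; the paper's actual fusion rule and pose parameterization are not given in the abstract:

```python
def fuse(edge_pose, direct_pose, edge_conf, direct_conf):
    """Confidence-weighted blend of two pose estimates (here, 2-D translations).
    Hypothetical fusion rule; confidences might come from edge-fit residuals
    and photometric alignment error respectively."""
    total = edge_conf + direct_conf
    return tuple((edge_conf * e + direct_conf * d) / total
                 for e, d in zip(edge_pose, direct_pose))

# Edge tracker trusted 3x more than the direct tracker in this frame.
pose = fuse((10.0, 4.0), (12.0, 6.0), edge_conf=3.0, direct_conf=1.0)
```

Down-weighting the direct estimate under occlusion, and the edge estimate on cluttered backgrounds, is how such a blend would exploit each method's strength.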