• Title/Abstract/Keyword: High Definition Image Sensor

Search results: 17

MEASUREMENT OF NUCLEAR FUEL ROD DEFORMATION USING AN IMAGE PROCESSING TECHNIQUE

  • Cho, Jai-Wan;Choi, Young-Soo;Jeong, Kyung-Min;Shin, Jung-Cheol
    • Nuclear Engineering and Technology / Vol. 43, No. 2 / pp. 133-140 / 2011
  • In this paper, a deformation measurement technology for nuclear fuel rods is proposed. The measurement system includes a high-definition CMOS image sensor, a lens, a semiconductor laser line-beam marker, and optical and mechanical accessories. The basic idea is to illuminate the outer surface of a fuel rod with a collimated laser line beam at an angle of 45 degrees or higher. The method assumes that the fuel rod and the optical axis of the image sensor observing it are arranged perpendicular to each other. Relative motion of the fuel rod in the horizontal direction then causes the illuminated laser line to move vertically along the rod surface. The resulting change of the laser line position on the rod surface is imaged as a parabolic beam pattern by the high-definition CMOS image sensor. An ellipse model is extracted from the parabolic beam pattern, and the center coordinates of the ellipse are taken as the feature point of the deformed fuel rod; the vertical offset of this feature point is computed as a function of the rod's horizontal displacement. Experiments on a fuel rod sample with surface crud show that the proposed method achieves an inspection resolution of 50 μm, a more than three-fold improvement over the 150 μm resolution conventionally required for measuring the deformation of neutron-irradiated fuel rods.
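
A minimal illustration of the feature-extraction step described in this abstract — isolate the bright laser line, fit an ellipse to the parabolic pattern, and track the vertical drift of its center — is sketched below with OpenCV. The threshold value and image file names are assumptions for illustration, not details taken from the paper.

```python
import cv2

def extract_feature_point(gray_frame, threshold=200):
    """Fit an ellipse to the bright laser-line pattern and return its center (pixels)."""
    # Isolate the laser line by intensity thresholding (threshold value is assumed).
    _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    points = cv2.findNonZero(mask)              # pixels belonging to the parabolic pattern
    if points is None or len(points) < 5:       # cv2.fitEllipse needs at least 5 points
        raise ValueError("laser line not found")
    (cx, cy), _axes, _angle = cv2.fitEllipse(points)
    return cx, cy

# Track the vertical drift of the feature point between a reference and a current frame.
ref = cv2.imread("rod_reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
cur = cv2.imread("rod_current.png", cv2.IMREAD_GRAYSCALE)
_, y_ref = extract_feature_point(ref)
_, y_cur = extract_feature_point(cur)
print(f"vertical feature-point shift: {y_cur - y_ref:.2f} px")
```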

Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 4 / pp. 216-223 / 2015
  • Autonomous underwater vehicles (AUVs) are being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's egomotion and changes in orientation, based on image frames captured at different times by a single high-definition web camera attached to the bottom of the AUV. The visual odometry application is integrated with other sensors: an inertial measurement unit (IMU) is used to select the correct solution of the homography motion equation, and a pressure sensor is used to resolve the image scale ambiguity. Uncertainty is estimated to correct drift in the system, using a Jacobian method, singular value decomposition, and backward and forward error propagation.
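
A rough sketch of the kind of pipeline this abstract describes (downward-looking camera, homography-based egomotion, IMU to disambiguate the decomposition, pressure-derived depth to fix scale) is given below using OpenCV. The matched keypoints, intrinsics, and function name are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_motion(kp_prev, kp_curr, K, R_imu, depth_m):
    """Recover camera rotation/translation between two frames of a planar seabed.

    kp_prev, kp_curr : Nx2 arrays of matched pixel coordinates (assumed given).
    K                : 3x3 camera intrinsic matrix.
    R_imu            : rotation reported by the IMU, used only to pick the right solution.
    depth_m          : distance to the seabed from the pressure sensor, fixes the scale.
    """
    H, _ = cv2.findHomography(kp_prev, kp_curr, cv2.RANSAC, 3.0)
    # Homography decomposition yields up to four (R, t, n) candidates.
    _, Rs, ts, _normals = cv2.decomposeHomographyMat(H, K)
    # Keep the candidate whose rotation agrees best with the IMU reading.
    best = min(range(len(Rs)),
               key=lambda i: np.linalg.norm(cv2.Rodrigues(Rs[i].T @ R_imu)[0]))
    R = Rs[best]
    t = ts[best].ravel() * depth_m   # decomposition returns t/d; the pressure depth resolves it
    return R, t
```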

High Frame Rate CMOS Image Sensor with Column-wise Cyclic ADC

  • 임승현;천지민;이동명;채영철;장은수;한건희
    • Journal of the Institute of Electronics Engineers of Korea SD / Vol. 47, No. 1 / pp. 52-59 / 2010
  • This paper proposes an image sensor based on a column-wise cyclic ADC for high-resolution, high-speed cameras. To minimize area and power consumption, the proposed sensor shares the operational transconductance amplifier (OTA) and the capacitors used in its internal blocks. The proposed ADC was verified with a prototype chip implemented as a QVGA-resolution image sensor. Measurement results show a maximum frame rate of 120 fps and a power consumption of 130 mW. The supply voltage is 3.3 V, and the prototype occupies a silicon area of 4.8 mm × 3.5 mm.
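
As background for the column-wise cyclic ADC mentioned in this abstract, the sketch below is a behavioral model of a basic one-bit-per-cycle cyclic (algorithmic) conversion, in which a single residue stage is reused every cycle — the property that makes sharing one OTA and capacitor set per column attractive. It is an illustrative model only, not the circuit reported in the paper.

```python
def cyclic_adc(vin, vref=3.3, n_bits=10):
    """Behavioral model of a unipolar cyclic (algorithmic) ADC.

    Each cycle compares the residue with vref/2, outputs one bit, and feeds
    the doubled residue back into the same stage.
    """
    code = 0
    residue = vin
    for _ in range(n_bits):
        bit = 1 if residue >= vref / 2 else 0
        code = (code << 1) | bit
        residue = 2 * (residue - bit * vref / 2)   # multiply-by-2 residue amplifier
    return code

# Example: a 1.65 V input on a 3.3 V reference maps to mid-scale of a 10-bit code.
print(cyclic_adc(1.65))   # -> 512
```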

Inspection of the Nuclear Fuel Rod Deformation using an Image Processing

  • Cho, Jai-Wan;Choi, Young-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 47, No. 1 / pp. 91-96 / 2010
  • This paper proposes a high-precision inspection method for nuclear fuel rod deformation. The fuel rod and the optical axis of the image sensor observing it are arranged perpendicular to each other. When a laser line beam illuminates the rod surface at an angle of 45 degrees or more with respect to the sensor's optical axis, a horizontal displacement of the rod is observed as a vertical displacement on the image sensor. The laser line beam incident on the rod surface at a fixed angle appears on the sensor plane as a parabolic curve of finite thickness. This parabolic curve is processed and modeled as an ellipse, and the slopes of the ellipse's major and minor axes are computed. The inflection point of the parabola and the intersection of the modeled ellipse's major and minor axes are extracted as the feature point. Using this image processing algorithm, the vertical deviation of the feature point coordinates resulting from the horizontal displacement of the fuel rod is computed. Experiments on a fuel rod sample with surface crud, using a high-resolution image sensor, achieved an inspection precision of 50 μm or better, more than three times better than the 150 μm criterion for inspecting the deformation of neutron-irradiated fuel rods.
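
The sensitivity of this laser line-beam arrangement follows from standard triangulation geometry. As a hedged illustration (the paper does not state the formula explicitly), if the line beam is incident at an angle $\theta \ge 45^{\circ}$ to the sensor's optical axis and the rod surface moves by ${\Delta}d$ along that axis, the line shifts on the image plane by approximately $${\Delta}y_{image} \approx m\,{\Delta}d\,\tan\theta,$$ where $m$ is the optical magnification. At $\theta = 45^{\circ}$ the shift equals the displacement (up to magnification), and steeper incidence angles amplify it, which is consistent with the "45 degrees or more" condition above.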

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing

  • 박용화;유장우;박창영;윤희선
    • Korean Society for Noise and Vibration Engineering: Conference Proceedings / Proceedings of the KSNVE 2013 Fall Conference / pp. 763-764 / 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It adds the depth information of a scene to the conventional color image, so that the full information of real life that human eyes experience can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system uses the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested optical resonator enables capture of a full HD depth image with millimeter-scale depth accuracy, the highest depth-image resolution among state-of-the-art devices, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously (Figures 2, 3). The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, its fabrication, the 3D camera system prototype, and the signal processing algorithms.

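For context on the 20 MHz modulation frequency quoted above: in continuous-wave Time-of-Flight imaging, depth is recovered from the phase delay $\varphi$ of the modulated light as $d = \frac{c}{2}\cdot\frac{\varphi}{2\pi f_{mod}}$, giving an unambiguous range of $d_{max} = \frac{c}{2 f_{mod}} \approx 7.5\;m$ at $f_{mod} = 20\;MHz$. This is the generic TOF relation, not a formula taken from the paper.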

Requirements of processing parameters for Multi-Satellites SAR Data Focusing Software

  • Kwak Sunghee;Kim Kwang Yong;Lee Young-Ran;Shin Dongseok;Jeong Soo;Kim Kyung-Ok
    • Korean Society of Remote Sensing: Conference Proceedings / Proceedings of ISRS 2004 / pp. 401-404 / 2004
  • SAR (Synthetic Aperture Radar) signal data need a focusing procedure to make the information available to the user. In recent SAR systems, various sensing modes and mission operations are applied to acquire high-resolution SAR images. Therefore, in order to develop generalized focusing software for multiple satellites, a regularized parameter configuration that sufficiently represents the sensor and platform characteristics of each SAR system is required. The objective of this paper is to introduce the considerations behind the parameter definition for developing a generalized SAR processor and to discuss the flexibility and extensibility of the defined parameters. The proposed parameter configuration can be applied to a SAR processor, and experiments based on real data will show the suitability of the suggested processing parameters.

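To make the idea of a regularized parameter configuration concrete, the sketch below shows one possible way to hold sensor- and platform-dependent focusing parameters in a single structure. The field names and example values are generic SAR quantities chosen for illustration; they are not the parameter set defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class SarFocusingParams:
    """Generic per-scene parameters a multi-sensor SAR focusing chain needs.

    Each mission supplies these values in its own annotation format, which is
    why a regularized, sensor-independent configuration is useful.
    """
    wavelength_m: float            # radar carrier wavelength
    prf_hz: float                  # pulse repetition frequency (azimuth sampling)
    range_sampling_rate_hz: float  # range-line sampling rate
    chirp_bandwidth_hz: float      # transmitted chirp bandwidth
    chirp_duration_s: float        # transmitted pulse length
    platform_velocity_mps: float   # along-track platform velocity
    slant_range_near_m: float      # slant range to the first sample
    antenna_length_m: float        # azimuth antenna length

# Example values in the rough range of a C-band stripmap sensor (illustrative only).
params = SarFocusingParams(
    wavelength_m=0.056, prf_hz=1700.0, range_sampling_rate_hz=19.2e6,
    chirp_bandwidth_hz=16.0e6, chirp_duration_s=37e-6,
    platform_velocity_mps=7100.0, slant_range_near_m=830e3, antenna_length_m=10.0,
)
```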

Implementation of Sharpness-Enhancement Algorithm based on Adaptive-Filter for Mobile-Display Apparatuses

  • 임정욱;송진근;이성진;민경중;강봉순
    • Korea Institute of Information and Communication Engineering: Conference Proceedings / Korea Institute of Maritime Information and Communication Sciences 2007 Fall Conference / pp. 109-112 / 2007
  • With the advent of digital cameras and the adoption of cameras in mobile devices, improving the quality of digitized images has been studied continuously. In particular, the image captured by the sensor passes through an ISP (Image Signal Processing) pipeline before being output, and at this stage the LPF (Low Pass Filter) used to remove high-frequency noise also attenuates the image's high-frequency content. This paper therefore proposes an algorithm that produces a sharper output from the LPF-blurred image without using an edge-detection algorithm: an adaptive HPF (High Pass Filter) whose coefficients are chosen according to the various states an image edge can take is applied instead. A hardware implementation of the proposed algorithm requires a total gate count of about 8,700, confirming that it can be applied to mobile devices.

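A rough software analogue of the adaptive sharpening described in this abstract is sketched below: a high-pass detail signal is added back with a gain that follows local edge activity, so no explicit edge-detection step is needed. The Gaussian low-pass, the 5×5 activity window, and the gain rule are assumptions for illustration, not the paper's hardware design.

```python
import cv2
import numpy as np

def adaptive_sharpen(gray, max_gain=1.5):
    """Sharpen an LPF-blurred image with a high-pass boost whose gain adapts
    to local edge strength, instead of running an explicit edge detector."""
    gray = gray.astype(np.float32)
    # High-frequency component: original minus its low-pass (Gaussian) version.
    highpass = gray - cv2.GaussianBlur(gray, (5, 5), 0)
    # Local activity: standard deviation in a 5x5 window, scaled to [0, max_gain].
    mean = cv2.blur(gray, (5, 5))
    activity = np.sqrt(np.clip(cv2.blur(gray * gray, (5, 5)) - mean * mean, 0, None))
    gain = max_gain * activity / (activity.max() + 1e-6)
    # Boost detail more where the local structure is stronger.
    return np.clip(gray + gain * highpass, 0, 255).astype(np.uint8)
```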

Manhole Cover Detection from Natural Scene Based on Imaging Environment Perception

  • Liu, Haoting;Yan, Beibei;Wang, Wei;Li, Xin;Guo, Zhenhui
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 10 / pp. 5095-5111 / 2019
  • A multi-rotor Unmanned Aerial Vehicle (UAV) system is developed to solve the manhole cover detection problem for infrastructure maintenance in the suburbs of a big city. A visible light sensor is employed to collect ground image data, and a series of image processing and machine learning methods are used to detect the manhole cover. First, an image enhancement technique is employed to improve the imaging effect of the visible light camera. An imaging environment perception method is used to increase the computational robustness: blind Image Quality Evaluation Metrics (IQEMs) are used to perceive the imaging environment and select the images with high imaging definition for the subsequent computation. Because of its excellent processing effect, the adaptive Multiple Scale Retinex (MSR) is used to enhance the imaging quality. Second, the Single Shot multi-box Detector (SSD) method is utilized to identify the manhole cover for its stable processing effect. Third, the spatial coordinates of the manhole cover are also estimated from the ground image. Practical applications have verified the outdoor environment adaptability of the proposed algorithm and the target detection correctness of the proposed system. The detection accuracy can reach 99%, and the positioning accuracy is about 0.7 meters.
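
Of the processing chain above, the Multiple Scale Retinex enhancement is the most self-contained step; a minimal sketch is given below, with scale values that are common defaults rather than the ones used in the paper.

```python
import cv2
import numpy as np

def multi_scale_retinex(gray, sigmas=(15, 80, 250)):
    """Multi-Scale Retinex: average of log(image) - log(Gaussian-smoothed image)
    over several scales, stretched back to the 8-bit range."""
    img = gray.astype(np.float32) + 1.0           # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:                          # small/medium/large surround scales
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-6)
    return (msr * 255).astype(np.uint8)
```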

등온압축성형공법을 이용한 폴리머 렌즈 성형 (Isothermal Compression Molding for a Polymer Optical Lens)

  • 오병도;권현성;김순옥
    • Korean Society of Mechanical Engineers: Conference Proceedings / Proceedings of the KSME 2008 Fall Conference A / pp. 996-999 / 2008
  • Aspheric polymer lens fabrication using isothermal compression molding is presented in this paper. Due to the increasing definition of image sensors, higher precision is required of a lens used as part of an image-forming optical module. Injection molding is the factory-standard method for polymer optical lenses, but its achievable precision is severely limited by the machining of a complex mold structure and by the melting and cooling of the polymer under high pressure during the forming process. To overcome this limitation and meet the precision requirement, isothermal compression molding is applied to the fabrication of a polymer optical lens. The fabrication conditions are determined by numerical simulations of the temperature distribution and the given material properties. Under the conditions found, a high-precision lens can be reproduced successfully and does not show the birefringence that often results in optical degradation.


An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems / Vol. 27, No. 1 / pp. 38-53 / 2017
  • As sensor technologies and image processing technologies make collecting information on users' behavior easy, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In the multimodal case of facial and body expressions, many studies have used normal cameras and therefore relied on a limited amount of information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
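
The recognition model itself is a feed-forward neural network over features extracted from the webcam (face) and the Kinect (body). The sketch below shows the general shape of such a model with made-up feature dimensions, label set, and training data; it is not the authors' architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature layout: 68 facial-landmark coordinates (x, y) from the
# webcam and 25 Kinect skeleton joints (x, y, z), concatenated per sample.
N_FACE, N_BODY = 68 * 2, 25 * 3
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]   # assumed label set

rng = np.random.default_rng(0)
X = rng.normal(size=(200, N_FACE + N_BODY))     # placeholder features
y = rng.integers(0, len(EMOTIONS), size=200)    # placeholder labels

# Small fully connected network as a stand-in for the paper's ANN.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)
print(EMOTIONS[model.predict(X[:1])[0]])
```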