• Title/Summary/Keyword: High Definition Image Sensor


MEASUREMENT OF NUCLEAR FUEL ROD DEFORMATION USING AN IMAGE PROCESSING TECHNIQUE

  • Cho, Jai-Wan;Choi, Young-Soo;Jeong, Kyung-Min;Shin, Jung-Cheol
    • Nuclear Engineering and Technology, v.43 no.2, pp.133-140, 2011
  • In this paper, a deformation measurement technology for nuclear fuel rods is proposed. The deformation measurement system includes a high-definition CMOS image sensor, a lens, a semiconductor laser line beam marker, and optical and mechanical accessories. The basic idea of the proposed system is to illuminate the outer surface of a fuel rod with a collimated laser line beam at an angle of 45 degrees or higher. The method assumes that the nuclear fuel rod and the optical axis of the image sensor observing it are arranged perpendicular to each other. Relative motion of the fuel rod in the horizontal direction causes the illuminated laser line beam to move vertically along the surface of the fuel rod. The resulting change of the laser line beam position on the surface of the fuel rod is imaged as a parabolic beam by the high-definition CMOS image sensor. An ellipse model is then extracted from the parabolic beam pattern, and the center coordinates of the ellipse model are taken as the feature of the deformed fuel rod. The horizontal deformation of the fuel rod is then derived from the vertical offset of this feature point. Based on the experimental results for a nuclear fuel rod sample with surface crud formation, an inspection resolution of 50 μm is achieved using the proposed method. In terms of precision, this is an improvement of more than 300% over the 150 μm resolution that is the conventional measurement criterion required for the deformation of neutron-irradiated fuel rods.
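
The following sketch illustrates the feature-extraction idea described above, assuming OpenCV is used; the function names, threshold, and calibration factor are hypothetical, and the triangulation factor is only a schematic stand-in for the paper's actual geometry.

```python
# Hypothetical sketch of the feature-extraction step: fit an ellipse to the
# imaged laser stripe and use its center as the feature point whose vertical
# shift encodes the rod deformation. Names are illustrative, not the paper's.
import cv2
import numpy as np

def laser_line_feature(gray_frame, threshold=200):
    """Return the (x, y) center of an ellipse fitted to the bright laser stripe."""
    _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(mask)
    if pts is None or len(pts) < 5:      # fitEllipse needs at least 5 points
        raise ValueError("laser line not found")
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
    return cx, cy

def deformation_um(cy_before, cy_after, um_per_pixel, beam_angle_deg=45.0):
    """Convert the vertical shift of the feature point into horizontal deformation."""
    shift_px = cy_after - cy_before
    # Schematic triangulation assumption: the vertical image shift relates to
    # the horizontal rod displacement through tan(beam_angle); the exact
    # factor depends on the optical setup.
    return shift_px * um_per_pixel / np.tan(np.radians(beam_angle_deg))
```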

Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing, v.4 no.4, pp.216-223, 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's egomotion and changes in orientation from consecutive image frames captured by a single high-definition web camera attached to the bottom of the AUV. The visual odometry application is integrated with other sensors. An inertial measurement unit (IMU) is used to select the correct solution of the homography motion equation, and a pressure sensor is used to resolve the image scale ambiguity. Uncertainty is estimated to correct drift that occurs in the system, using a Jacobian method, singular value decomposition, and backward and forward error propagation.
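
A minimal sketch of the visual-odometry core outlined above, assuming OpenCV: the homography between consecutive bottom-camera frames is decomposed into candidate motions, the IMU attitude selects among them, and the pressure-derived range fixes the translation scale. All names and parameters are illustrative, not the authors' implementation.

```python
# Hedged sketch: homography-based egomotion with IMU disambiguation and
# pressure-based scale resolution. Not the authors' code.
import cv2
import numpy as np

def egomotion(prev_gray, curr_gray, K, imu_R, altitude_m):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t, n) candidates.
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Choose the rotation closest to the IMU-reported rotation.
    best = min(range(len(Rs)), key=lambda i: np.linalg.norm(Rs[i] - imu_R))
    R, t = Rs[best], ts[best]
    # t is only known up to scale; the range to the seabed (derived with the
    # pressure sensor in this sketch's assumption) fixes the metric scale.
    return R, t * altitude_m
```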

High Frame Rate CMOS Image Sensor with Column-wise Cyclic ADC (컬럼 레벨 싸이클릭 아날로그-디지털 변환기를 사용한 고속 프레임 레이트 씨모스 이미지 센서)

  • Lim, Seung-Hyun;Cheon, Ji-Min;Lee, Dong-Myung;Chae, Young-Cheol;Chang, Eun-Soo;Han, Gun-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD, v.47 no.1, pp.52-59, 2010
  • This paper proposes a high-resolution, high-frame-rate CMOS image sensor with a column-wise cyclic ADC. The proposed ADC shares OTAs and capacitors to achieve low power consumption and a small silicon area. It was verified by implementing a prototype QVGA image sensor chip. The measured maximum frame rate is 120 fps, and the power consumption is 130 mW. The power supply is 3.3 V, and the die size is 4.8 mm × 3.5 mm. The prototype chip was fabricated in a 2-poly 3-metal 0.35-μm CMOS process.
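
As a back-of-envelope illustration (not from the paper), the snippet below works out the conversion rate each column-wise cyclic ADC must sustain for QVGA readout at 120 fps; the 10-bit resolution is an assumption.

```python
# Rough throughput check for column-parallel cyclic ADC readout at QVGA/120 fps.
COLS, ROWS = 320, 240          # QVGA resolution
FPS = 120                      # measured maximum frame rate
BITS = 10                      # assumed ADC resolution (illustrative)

pixel_rate = COLS * ROWS * FPS             # total pixel conversions per second
per_column_rate = ROWS * FPS               # conversions each column ADC performs
row_time_us = 1e6 / per_column_rate        # time budget per row conversion

print(f"total pixel rate      : {pixel_rate/1e6:.2f} Mpixel/s")
print(f"per-column ADC rate   : {per_column_rate/1e3:.1f} kconv/s")
print(f"row conversion budget : {row_time_us:.1f} us for {BITS} cyclic steps")
```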

Inspection of the Nuclear Fuel Rod Deformation using an Image Processing (영상처리를 이용한 핵연료봉의 변형 검사)

  • Cho, Jai-Wan;Choi, Young-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.1, pp.91-96, 2010
  • In this paper, a deformation measurement technology for nuclear fuel rods is proposed. The deformation measurement system includes a high-definition CCD or CMOS image sensor, a lens, a semiconductor laser line beam marker, and optical and mechanical accessories. The basic idea of the deformation measurement is to illuminate the outer surface of the fuel rod with a collimated laser line beam at an angle of 45 degrees or higher. Relative motion of the fuel rod in the horizontal direction causes the illuminated laser line beam to move vertically along the surface of the fuel rod. The resulting change of the laser line beam position on the surface of the fuel rod is imaged as a parabolic beam by the high-definition CCD or CMOS image sensor. An ellipse model is extracted from the parabolic beam pattern, and the slopes of its major and minor axes are found. The crossing point of the major and minor axes at the saddle point of the parabolic beam is taken as the feature of the deformed fuel rod, and the vertical offset between the feature points before and after deformation is calculated. From the experimental results, an inspection resolution of 50 μm is achieved using the proposed method, which is three times better than the conventional criterion (150 μm) of the guide for the inspection of nuclear fuel rods.
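
Complementing the sketch given for the related journal paper above, the snippet below illustrates one hypothetical way to locate the saddle point of the imaged parabolic stripe: take the brightest pixel in each column as the stripe centerline, fit a quadratic, and track the vertex across frames. Thresholds and names are assumptions, not the authors' code.

```python
# Illustrative saddle-point extraction for the parabolic laser stripe.
import numpy as np

def stripe_vertex(gray_frame, min_intensity=128):
    h, w = gray_frame.shape
    cols, rows = [], []
    for x in range(w):
        column = gray_frame[:, x]
        y = int(np.argmax(column))
        if column[y] >= min_intensity:   # keep only columns crossed by the stripe
            cols.append(x)
            rows.append(y)
    # Fit y = a*x^2 + b*x + c to the stripe centerline.
    a, b, c = np.polyfit(cols, rows, 2)
    x_vertex = -b / (2.0 * a)
    y_vertex = np.polyval([a, b, c], x_vertex)
    return x_vertex, y_vertex            # compare y_vertex before/after deformation
```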

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2013.10a, pp.763-764, 2013
  • A three-dimensional image capturing device, its signal processing algorithm, and the corresponding apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It adds the depth information of a scene to the conventional color image, so that the full information that human eyes experience in real life can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested optical resonator enables capturing of a full HD depth image with depth accuracy on the millimeter scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously (Figures 2 and 3). The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, 3D camera system prototype, and signal processing algorithms.
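
For context, the snippet below states the standard indirect Time-of-Flight relation such a 20 MHz-modulated shutter relies on, assuming conventional four-phase demodulation (the paper performs the demodulation optically); it is a schematic aid, not the authors' signal-processing algorithm.

```python
# Indirect TOF depth relation for continuous-wave 20 MHz modulation,
# assuming the usual four correlation samples at 0/90/180/270 degrees.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # modulation frequency, Hz

def depth_from_samples(a0, a1, a2, a3):
    """Depth from four phase-shifted correlation samples."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

unambiguous_range = C / (2 * F_MOD)   # about 7.5 m at 20 MHz modulation
```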


Requirements of processing parameters for Multi-Satellites SAR Data Focusing Software

  • Kwak Sunghee;Kim Kwang Yong;Lee Young-Ran;Shin Dongseok;Jeong Soo;Kim Kyung-Ok
    • Proceedings of the KSRS Conference, 2004.10a, pp.401-404, 2004
  • SAR (Synthetic Aperture Radar) signal data require a focusing procedure to make the information available to the user. Recent SAR systems apply various sensing modes and mission operations to acquire high-resolution SAR images. Therefore, in order to develop generalized focusing software for multiple satellites, a regularized parameter configuration that sufficiently represents the sensor and platform characteristics of each SAR system is required. The objective of this paper is to introduce the parameter definitions considered in developing a generalized SAR processor and to discuss the flexibility and extensibility of the defined parameters. The proposed parameter configuration can be applied to a SAR processor, and experiments based on real data show the suitability of the suggested processing parameters.
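
A hedged sketch of the kind of sensor-independent parameter configuration the paper argues for is shown below; every field name and value is an illustrative assumption, not the paper's actual parameter list.

```python
# Illustrative parameter set a generalized (multi-satellite) SAR focusing
# processor might require; names and example values are assumptions.
from dataclasses import dataclass

@dataclass
class SarFocusingParams:
    # Sensor / waveform
    carrier_frequency_hz: float      # e.g. 5.3e9 for C-band
    chirp_bandwidth_hz: float
    chirp_duration_s: float
    range_sampling_rate_hz: float
    prf_hz: float                    # pulse repetition frequency
    # Platform / geometry
    platform_velocity_mps: float
    slant_range_near_m: float
    look_side: str                   # "left" or "right"
    # Mode
    acquisition_mode: str            # "stripmap", "scansar", "spotlight", ...

params = SarFocusingParams(
    carrier_frequency_hz=5.3e9, chirp_bandwidth_hz=16e6, chirp_duration_s=34e-6,
    range_sampling_rate_hz=18e6, prf_hz=1700.0, platform_velocity_mps=7550.0,
    slant_range_near_m=850e3, look_side="right", acquisition_mode="stripmap",
)
```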


Implementation of Sharpness-Enhancement Algorithm based on Adaptive-Filter for Mobile-Display Apparatuses (Mobile Display 장치를 위한 Adaptive-Filter 기반형 선명도 향상 알고리즘의 하드웨어 구현)

  • Im, Jeong-Uk;Song, Jin-Gun;Lee, Sung-Jin;Min, Kyoung-Joong;Kang, Bong-Soon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2007.10a, pp.109-112, 2007
  • Sharpness enhancement of digitized images has been researched continuously as cameras have been adopted in mobile devices and digital cameras have become widespread. In particular, the image input from a sensor goes through an ISP (Image Signal Processing) pipeline before being output as a visible image. In this process, high-frequency components are attenuated by the LPF (Low Pass Filter) that removes high-spatial-frequency noise. In this paper, we propose an algorithm that produces a sharper image by using an adaptive HPF (High Pass Filter) whose coefficients are suited to diverse edge conditions, without employing a separate edge-detection algorithm to enhance the blurred image.
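
The sketch below shows one plausible form of edge-adaptive high-pass sharpening in the spirit of the abstract: a fixed high-pass response scaled by a local-variance gain so flat regions are boosted less than edges, with no explicit edge detector. Kernel sizes and the gain rule are assumptions, not the paper's coefficients.

```python
# Hedged sketch of adaptive high-pass sharpening with a local-variance gain.
import cv2
import numpy as np

def adaptive_sharpen(gray, base_gain=1.5, var_floor=25.0):
    gray = gray.astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)
    highpass = gray - blurred                      # high-frequency detail
    # Local variance as a cheap edge-activity measure (no explicit edge detector).
    local_mean = cv2.boxFilter(gray, -1, (5, 5))
    local_var = cv2.boxFilter(gray * gray, -1, (5, 5)) - local_mean ** 2
    local_var = np.maximum(local_var, 0.0)
    gain = base_gain * local_var / (local_var + var_floor)
    out = gray + gain * highpass
    return np.clip(out, 0, 255).astype(np.uint8)
```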


Manhole Cover Detection from Natural Scene Based on Imaging Environment Perception

  • Liu, Haoting;Yan, Beibei;Wang, Wei;Li, Xin;Guo, Zhenhui
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.10, pp.5095-5111, 2019
  • A multi-rotor Unmanned Aerial Vehicle (UAV) system is developed to solve the manhole cover detection problem for infrastructure maintenance in the suburbs of big cities. A visible-light sensor is employed to collect ground image data, and a series of image processing and machine learning methods are used to detect the manhole cover. First, an image enhancement technique is employed to improve the imaging quality of the visible-light camera. An imaging-environment perception method is used to increase computational robustness: blind Image Quality Evaluation Metrics (IQEMs) are used to perceive the imaging environment and select images with high definition for the following computation. Because of its excellent processing effect, adaptive Multi-Scale Retinex (MSR) is used to enhance the imaging quality. Second, the Single Shot multi-box Detector (SSD) method is utilized to identify the manhole cover because of its stable performance. Third, the spatial coordinates of the manhole cover are estimated from the ground image. Practical applications have verified the outdoor-environment adaptability of the proposed algorithm and the target detection correctness of the proposed system. The detection accuracy reaches 99%, and the positioning accuracy is about 0.7 meters.
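
For reference, a minimal Multi-Scale Retinex (MSR) enhancement step of the kind mentioned above is sketched below; the scales and normalization are common defaults, not the adaptive values used in the paper.

```python
# Minimal Multi-Scale Retinex sketch with common default scales.
import cv2
import numpy as np

def multi_scale_retinex(gray, sigmas=(15, 80, 250)):
    img = gray.astype(np.float32) + 1.0            # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += (np.log(img) - np.log(surround)) / len(sigmas)
    # Stretch the retinex output back to a displayable 8-bit range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-6)
    return (255 * msr).astype(np.uint8)
```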

Isothermal Compression Molding for a Polymer Optical Lens (등온압축성형공법을 이용한 폴리머 렌즈 성형)

  • Oh, Byung-Do;Kwon, Hyun-Sung;Kim, Sun-Ok
    • Proceedings of the KSME Conference, 2008.11a, pp.996-999, 2008
  • Aspheric polymer lens fabrication using isothermal compression molding is presented in this paper. As the definition of image sensors increases, higher precision is required of the lens used as part of an image-forming optical module. Injection molding is the factory-standard method for polymer optical lenses, but its achievable precision is severely limited by the machining of the complex mold structure and by melting and cooling the polymer under high pressure during the forming process. To overcome this limitation and meet the precision requirement, isothermal compression molding is applied to the fabrication of a polymer optical lens. The fabrication conditions are determined by numerical simulations of the temperature distribution using the given material properties. Under the determined conditions, a high-precision lens can be reproduced successfully and does not show the birefringence that often results in optical degradation.
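
As a rough illustration of the temperature-distribution simulation mentioned above, the sketch below runs a one-dimensional explicit finite-difference heat-conduction estimate for a polymer blank between heated platens; the geometry and material values are assumptions, not the paper's simulation.

```python
# Illustrative 1-D transient heat-conduction estimate (assumed values).
import numpy as np

alpha = 1.0e-7        # thermal diffusivity of the polymer, m^2/s (assumed)
L = 3e-3              # lens blank thickness, m (assumed)
N = 61                # grid points across the thickness
dx = L / (N - 1)
dt = 0.4 * dx * dx / alpha          # stable explicit time step

T = np.full(N, 25.0)                # initial blank temperature, degC
T_mold = 160.0                      # mold platen temperature, degC (assumed)
T[0] = T[-1] = T_mold

t = 0.0
while T.min() < T_mold - 1.0:       # heat until the core is within 1 degC
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[-1] = T_mold
    t += dt
print(f"approx. soak time to reach an isothermal state: {t:.0f} s")
```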


An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems, v.27 no.1, pp.38-53, 2017
  • As sensor technologies and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many multimodal studies using facial and body expressions have relied on normal cameras. Thus, previous studies used limited information because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
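
A minimal sketch of the kind of artificial-neural-network classifier described above is given below, assuming concatenated facial and bodily feature vectors and scikit-learn's MLPClassifier; feature dimensions, labels, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: facial features (HD webcam) and bodily features (Kinect
# joints) concatenated and fed to a small MLP. Dummy data stands in for
# features extracted while watching a trailer.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_FACE, N_BODY = 68 * 2, 25 * 3    # e.g. 2-D facial landmarks + 3-D Kinect joints
EMOTIONS = ["neutral", "happy", "sad", "surprised", "angry"]

def build_dataset(face_feats, body_feats, labels):
    """Concatenate per-sample facial and bodily feature vectors."""
    return np.hstack([face_feats, body_feats]), np.asarray(labels)

rng = np.random.default_rng(0)
X, y = build_dataset(rng.normal(size=(200, N_FACE)),
                     rng.normal(size=(200, N_BODY)),
                     rng.integers(0, len(EMOTIONS), size=200))

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```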