• Title/Summary/Keyword: image Vision

Search results: 2,594

An Optimal Combination of Illumination Intensity and Lens Aperture for Color Image Analysis

  • Chang, Y. C.
    • Agricultural and Biosystems Engineering
    • /
    • v.3 no.1
    • /
    • pp.35-43
    • /
    • 2002
  • The spectral color resolution of an image is very important in color image analysis. For a selected vision system, two factors that influence the spectral color resolution of an image are the illumination intensity and the lens aperture. In this study, an optimal combination of illumination intensity and lens aperture for color image analysis was determined. The method was based on a model of dynamic range, defined as the absolute difference between the digital values of selected foreground and background colors in the image. The role of illumination intensity in machine vision was also described, and a computer program for simulating the optimal combination of the two factors was implemented to verify the related algorithm. Using the dynamic-range model, it was possible to estimate the non-saturating range of the illumination intensity (input voltage in this study) and the lens aperture. For a given vision system configuration, the method provided an optimal combination of illumination intensity and lens aperture that maximizes the color resolution between the colors of interest, together with the estimated color resolution at that combination (a minimal sketch of this selection step follows the entry).

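The selection step described above can be illustrated with a minimal Python sketch, assuming 8-bit pixel values and a hypothetical table of foreground/background digital values measured at candidate illumination voltages and apertures; the paper's actual model and simulation program are not reproduced here.

```python
# Sketch: pick the illumination/aperture combination that maximizes the
# "dynamic range" |foreground - background| without saturating the sensor.
# The measurement table below is hypothetical illustration data, not from the paper.

candidates = [
    # (illumination voltage [V], f-number, mean foreground value, mean background value)
    (6.0, 4.0,  92,  41),
    (8.0, 4.0, 171,  63),
    (8.0, 2.8, 253, 118),   # near saturation at 8 bits
    (10.0, 5.6, 148,  37),
]

SATURATION = 250  # treat 8-bit values above this as saturated

def dynamic_range(fg, bg):
    """Absolute difference between foreground and background digital values."""
    return abs(fg - bg)

valid = [(v, f, fg, bg) for (v, f, fg, bg) in candidates
         if fg < SATURATION and bg < SATURATION]

best = max(valid, key=lambda c: dynamic_range(c[2], c[3]))
print(f"optimal: {best[0]} V at f/{best[1]}, "
      f"dynamic range = {dynamic_range(best[2], best[3])}")
```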

Design of a Vision Box Based on an Embedded Platform (Embedded Platform을 기반으로 하는 Vision Box 설계)

  • Kim, Pan-Kyu;Hoang, Tae-Moon;Park, Sang-Su;Lee, Jong-Hyeok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.1103-1106
    • /
    • 2005
  • The purpose of this research is the design of a Vision Box that can capture an image input through a camera and understand the movement of objects in the captured image. The design aims to satisfy the user's requirements: the Vision Box can analyze the movement of objects in the camera image without additional instruments, and it can communicate with a PLC and be operated by remote control. The capability of the Vision Box was verified by applying it to automobile engine pattern analysis (a frame-differencing sketch of the movement-detection idea follows the entry). We expect the Vision Box to be used in various industrial fields.

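Movement detection of the kind described above is commonly done by frame differencing; the following is a minimal, hedged sketch of that idea using OpenCV, not the Vision Box implementation itself.

```python
# Minimal frame-differencing sketch for detecting object movement between
# two camera frames; illustrative only, not the Vision Box implementation.
import cv2
import numpy as np

def moving_region(prev_frame, curr_frame, thresh=25):
    """Return a binary mask of pixels that changed between two frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

# Example with synthetic frames (a bright square shifted by a few pixels).
prev = np.zeros((120, 160, 3), np.uint8)
curr = prev.copy()
cv2.rectangle(prev, (40, 40), (70, 70), (255, 255, 255), -1)
cv2.rectangle(curr, (48, 40), (78, 70), (255, 255, 255), -1)
print("changed pixels:", cv2.countNonZero(moving_region(prev, curr)))
```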

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.41.5-41
    • /
    • 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens carry a vast amount of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image processing algorithms to them. Grasping point extraction is a very important task in micromanipulation, because inaccurate grasping points can cause breakdown of the micro gripper or loss of micro objects (a generic illustration follows the entry). To solve those problems and extract grasping points for micromanipulation...

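Since the abstract is truncated before the method itself, the following is only a generic, hypothetical illustration of choosing two opposing contour points as grasp candidates; it is not the paper's algorithm.

```python
# Generic illustration of picking two opposing grasp points on an object
# contour; NOT the algorithm of the paper, whose abstract is truncated.
import cv2
import numpy as np

def grasp_candidates(binary_image):
    """Return a pair of opposing contour points as naive grasp candidates."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    # Choose the leftmost and rightmost contour points as a crude
    # stand-in for gripper-jaw contact points.
    left = tuple(contour[contour[:, 0].argmin()])
    right = tuple(contour[contour[:, 0].argmax()])
    return left, right

# Synthetic micro-object: a filled ellipse on a dark background.
img = np.zeros((100, 100), np.uint8)
cv2.ellipse(img, (50, 50), (30, 15), 0, 0, 360, 255, -1)
print("grasp candidates:", grasp_candidates(img))
```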

Extraction of depth information on moving objects using a C40 DSP board (C40 DSP 보드를 이용한 이동 물체의 깊이 정보 추출)

  • 박태수;모준혁;최익수;박종안
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1996.10b
    • /
    • pp.5-7
    • /
    • 1996
  • We propose a triangulation method based on stereo vision angles. We set up a stereo vision system that extracts the depth information of a moving object: the moving object is detected using a difference-image method, and its depth is obtained by triangulation based on the stereo vision angles (see the sketch after this entry). The geometrical center of the moving object is used as its feature point, and the proposed vision system achieves an accuracy of 0.2 mm over a range of 400 mm.

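A minimal sketch of triangulation from stereo vision angles, assuming two cameras on a common baseline and angles measured from that baseline; the baseline and angles below are hypothetical, while the 0.2 mm / 400 mm figures above are the paper's reported performance.

```python
# Sketch of depth-from-angles triangulation on a stereo baseline:
# Z = B / (cot(theta_L) + cot(theta_R)).
import math

def depth_from_angles(baseline_mm, theta_left_deg, theta_right_deg):
    """Depth of a point seen at angles theta_L and theta_R (measured from
    the baseline) by two cameras separated by baseline_mm."""
    tl = math.radians(theta_left_deg)
    tr = math.radians(theta_right_deg)
    return baseline_mm / (1.0 / math.tan(tl) + 1.0 / math.tan(tr))

# Example: 100 mm baseline, object seen at 80 degrees from each camera.
print(f"depth = {depth_from_angles(100.0, 80.0, 80.0):.1f} mm")
```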

A Vision System for the Inspection of Shaft Worm (비전 시스템을 이용한 샤프트 웜 외관검사기 개발)

  • Bark, Jun-Sung;Kim, Tae-Ken;Kim, Han-Su;Yang, Woo-Suck
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.184-186
    • /
    • 2004
  • This paper describes a vision system for the automatic examination of the condition of a shaft's worm. The system is composed of three parts: image acquisition, the vision algorithm, and the user interface. The image acquisition part consists of motor control, illumination, and optics. The vision algorithm examines the parts by applying a labeling algorithm to the shaft image (a labeling-based check is sketched after this entry). The user interface is divided into two parts: one for feature registration with control value settings and one for examination operation. The automatic inspection system of this research is a tool for the final examination of shaft worms and can be practically used in production lines with simple adjustments.

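A hedged sketch of a labeling-based check of the kind mentioned above, using OpenCV connected-component labeling; the binarization threshold and blob-area limit are illustrative values, not the paper's registered settings.

```python
# Label connected components in a binarized shaft image and flag the part
# if any blob exceeds a registered area limit; values are illustrative.
import cv2
import numpy as np

def inspect(gray_image, bin_thresh=128, max_blob_area=50):
    """Label connected components and pass the part only if no blob is too large."""
    _, binary = cv2.threshold(gray_image, bin_thresh, 255, cv2.THRESH_BINARY)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # stats[0] is the background component; check the remaining blobs.
    blob_areas = stats[1:, cv2.CC_STAT_AREA]
    return bool((blob_areas <= max_blob_area).all()), len(blob_areas)

# Synthetic image with one small bright blob.
img = np.zeros((80, 80), np.uint8)
cv2.circle(img, (40, 40), 3, 255, -1)
ok, n = inspect(img)
print(f"pass={ok}, blobs={n}")
```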

A Vision System for the Inspection of Shaft Worm (비전 시스템을 이용한 샤프트 웜 외관검사기 개발)

  • Ko, Eun-Ji;Park, Jun-Sung;Kim, Hyoung-Gi;Yang, Woo-Suck
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.903-904
    • /
    • 2006
  • This paper is about a vision system for the automatic examination of the condition of a shaft's worm. The system is composed of three parts: image acquisition, the vision algorithm, and the user interface. The image acquisition part consists of motor control, illumination, and optics. The vision algorithm examines the parts using the shaft image. The user interface is divided into two parts: one for feature registration with control value settings and one for examination operation. The automatic inspection system introduced in this paper can be used as a tool for the final examination of shaft worms.


Implementation of Vision System combining Character and Color Recognition (문자 및 색 인식을 혼용한 검사시스템의 구현)

  • Yang, Woo-Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.221-225
    • /
    • 2016
  • This paper is about a vision system that automatically examines the condition of fuses and relay boxes using a camera. The proposed vision system is composed of three parts: image acquisition, the vision algorithm, and the user interface. The image acquisition part is composed of illumination and optics. The vision algorithm is the examining part, operating on the grabbed fuse box image (a sketch of the color-recognition part follows the entry). Lastly, the user interface is divided into two parts: one for registering features of the fuse box and one for examination operation.

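As an illustration of the color-recognition half of such an inspection, the following hedged sketch classifies a fuse region by its median hue against registered hue ranges; the ranges and labels are hypothetical, not the paper's registered features, and the character-recognition half is omitted.

```python
# Classify a fuse region by dominant hue against registered color ranges.
# The HSV hue ranges below are illustrative, not the paper's feature values.
import cv2
import numpy as np

REGISTERED_COLORS = {            # illustrative hue ranges (OpenCV 0-179 scale)
    "red_10A":    (0, 10),
    "blue_15A":   (100, 130),
    "yellow_20A": (20, 35),
}

def classify_fuse(bgr_region):
    """Return the registered color name whose hue range covers the region's median hue."""
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hue = int(np.median(hsv[:, :, 0]))
    for name, (lo, hi) in REGISTERED_COLORS.items():
        if lo <= hue <= hi:
            return name
    return "unknown"

patch = np.full((20, 20, 3), (200, 40, 40), np.uint8)   # blue-ish BGR patch
print(classify_fuse(patch))
```
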
Investigation of the super-resolution methods for vision based structural measurement

  • Wu, Lijun;Cai, Zhouwei;Lin, Chenghao;Chen, Zhicong;Cheng, Shuying;Lin, Peijie
    • Smart Structures and Systems
    • /
    • v.30 no.3
    • /
    • pp.287-301
    • /
    • 2022
  • Machine-vision based structural displacement measurement methods are widely used due to their flexible deployment and non-contact measurement characteristics. The accuracy of vision measurement is directly related to the image resolution. In the field of computer vision, super-resolution reconstruction is an emerging method for improving image resolution. In particular, deep-learning based image super-resolution methods have shown great potential for improving image resolution and thus machine-vision based measurement. In this article, we first review the latest progress of several deep-learning based super-resolution models, together with the public benchmark datasets and the performance evaluation indices. Secondly, we construct a binocular visual measurement platform to measure the distances between adjacent corners on a chessboard, which is universally used as a target when measuring structural displacement via machine-vision based approaches (the measurement step is sketched after this entry). Then, several typical deep-learning based super-resolution algorithms are employed to improve the visual measurement performance. Experimental results show that super-resolution reconstruction can improve the accuracy of distance measurement between adjacent corners. They also show that the measurement accuracy improvement of the super-resolution algorithms is not consistent with the existing quantitative performance evaluation indices. Lastly, the current challenges and future trends of super-resolution algorithms for visual measurement applications are pointed out.

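The measurement step can be sketched as follows, with plain bicubic interpolation standing in for the deep-learning super-resolution models reviewed in the paper; the pattern size, scale factor, and file name are assumptions for illustration.

```python
# Upscale a chessboard image (bicubic interpolation stands in for a deep
# SR model), detect corners, and measure adjacent-corner pixel distances.
import cv2
import numpy as np

def adjacent_corner_distances(gray, pattern_size=(7, 7), scale=2):
    """Detect chessboard corners after upscaling and return distances between
    horizontally adjacent corners, expressed in original-image pixels."""
    up = cv2.resize(gray, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_CUBIC)
    found, corners = cv2.findChessboardCorners(up, pattern_size)
    if not found:
        return None
    pts = corners.reshape(pattern_size[1], pattern_size[0], 2) / scale
    return np.linalg.norm(np.diff(pts, axis=1), axis=2)  # per-row spacings

# Usage (hypothetical file name): distances in pixels, to be converted to
# millimetres using the known square size of the chessboard target.
# gray = cv2.imread("chessboard.png", cv2.IMREAD_GRAYSCALE)
# print(adjacent_corner_distances(gray))
```
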
Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2749-2763
    • /
    • 2021
  • A desirable result of infrared (IR) and visible (VIS) image fusion should contain the textural details of the VIS image and the salient targets of the IR image. However, detail information in the dark regions of a VIS image has low contrast and blurry edges, which degrades fusion performance. To resolve the problem of fuzzy details in the dark regions of VIS images, we propose a reflectance estimation method for IR and VIS image fusion. In order to maintain and enhance details in these dark regions, a dark region approximation (DRA) is proposed to optimize the Retinex model. With the improved, DRA-based Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image (a simplified Retinex sketch follows the entry). The final fusion outcome is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method can simultaneously retain the low-visibility details of the VIS image and the high-contrast targets of the IR image. Experimental results show that, compared with some advanced approaches, the proposed method is superior in detail preservation and visual quality.

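As a simplified stand-in for the reflectance estimation described above, the sketch below uses a single-scale Retinex decomposition, with illumination approximated by Gaussian smoothing; it is not the paper's DRA-optimized model or its quasi-Newton estimator.

```python
# Simplified single-scale Retinex sketch: reflectance = log(I) - log(L),
# with illumination L approximated by a Gaussian-smoothed image.
import cv2
import numpy as np

def retinex_reflectance(vis_gray, sigma=30):
    """Estimate reflectance of a grayscale VIS image via log(I) - log(L)."""
    img = vis_gray.astype(np.float32) + 1.0          # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    reflectance = np.log(img) - np.log(illumination)
    return cv2.normalize(reflectance, None, 0, 1, cv2.NORM_MINMAX)

# Example on a synthetic horizontal-gradient image.
vis = np.tile(np.linspace(10, 200, 128), (128, 1)).astype(np.uint8)
print(retinex_reflectance(vis).shape)

# A naive fusion with the IR image could then be a weighted combination,
# e.g. 0.5 * reflectance + 0.5 * normalized IR (the paper's fusion rule
# is not reproduced here).
```
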
A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation (용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구)

  • You, Won-Sang;Na, Suck-Joo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.3
    • /
    • pp.891-900
    • /
    • 1996
  • A vision sensor based on the optical triangulation principle, with a laser as an auxiliary light source, can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that accounts for the reflectivity of the sensed object and satisfies the desired resolution and measuring range, the following were formulated: first, the equation of the focused laser beam, which has a Gaussian irradiance profile; second, the image forming sequence; and third, the relation between a displacement on the measuring surface and the corresponding displacement in the camera plane. From these, the focused beam diameter within the measuring range could be determined and the influence of the relative location of the laser and the camera plane could be estimated. The measuring range and resolution of the vision sensor, which was based on the Scheimpflug condition, could also be calculated (a worked Gaussian-beam calculation follows the entry). Based on these results, a vision sensor was developed and an adequate calibration technique was proposed. An image processing algorithm that recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image processing algorithm, the shape information of vee, butt, and lap joints was extracted.
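
The Gaussian-beam part of such a design can be illustrated with a short worked calculation of the spot diameter over a measuring range, using w(z) = w0 * sqrt(1 + (lambda*z / (pi*w0^2))^2); the wavelength, waist radius, and range below are hypothetical design values, not those of the paper.

```python
# Worked sketch: Gaussian beam radius w(z) over a measuring range.
# All numeric values are hypothetical design inputs for illustration.
import math

WAVELENGTH_MM = 670e-6      # 670 nm laser diode, in millimetres
WAIST_RADIUS_MM = 0.05      # focused beam waist radius w0
MEASURING_RANGE_MM = 20.0   # distance from the waist to the range edge

def beam_radius(z_mm, w0=WAIST_RADIUS_MM, lam=WAVELENGTH_MM):
    """Gaussian beam radius at distance z from the waist."""
    z_r = math.pi * w0 ** 2 / lam        # Rayleigh range
    return w0 * math.sqrt(1.0 + (z_mm / z_r) ** 2)

for z in (0.0, MEASURING_RANGE_MM / 2, MEASURING_RANGE_MM):
    print(f"z = {z:5.1f} mm  ->  beam diameter = {2 * beam_radius(z):.3f} mm")
```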