• Title/Abstract/Keyword: University vision

Search results: 4,392 items (processing time: 0.028 sec)

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication / Vol. 13 No. 2 / pp.7-13 / 2021
  • Machine vision is a technology that enables a computer to recognize and judge objects in the way a person does. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have been incorporated into conventional machine vision systems, quality inspection has become more accurate and manufacturing efficiency has increased. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in the industrial field for quality inspection typically contain noise, and this noise is a major factor limiting the performance of the machine vision system. Therefore, to improve the performance of the machine vision system, it is necessary to remove the noise from the image, and a great deal of research has been devoted to image denoising. In this paper, we propose an autoencoder-based machine vision system for removing noise from images. In experiments, the proposed model showed better denoising and image-reconstruction performance than the basic autoencoder model on the MNIST and Fashion-MNIST data sets.
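
A minimal sketch of the kind of denoising autoencoder the abstract describes (not the authors' exact architecture): a small convolutional autoencoder in Keras, trained to map noise-corrupted MNIST images back to their clean originals. The noise level and layer sizes are illustrative assumptions; swapping tf.keras.datasets.mnist for fashion_mnist reproduces the second experiment.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and scale to [0, 1]; shape becomes (N, 28, 28, 1).
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
x_test = x_test.astype("float32")[..., None] / 255.0

# Corrupt the inputs with Gaussian noise; the training targets stay clean.
noise = 0.3
x_train_noisy = np.clip(x_train + noise * np.random.randn(*x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise * np.random.randn(*x_test.shape), 0.0, 1.0)

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                                  # 7x7 bottleneck
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train_noisy, x_train, epochs=5, batch_size=128,
          validation_data=(x_test_noisy, x_test))

denoised = model.predict(x_test_noisy)                       # reconstructed clean images
```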

Computer Simulation for Gradual Yellowing of Aged Lens and Its Application for Test Devices

  • Kim, Bog G.;Han, Jeong-Won;Park, Soo-Been
    • Journal of the Optical Society of Korea / Vol. 17 No. 4 / pp.344-349 / 2013
  • This paper proposes a simulation algorithm to assess the gradual yellowing of vision in the elderly, which refers to the predominance of yellowness in their vision due to aging of the ocular optic media. The algorithm employs the spectral transmittance of a yellow filter to represent the color appearance perceived by elderly people with yellowed vision, and models the changes in color space through the change that the yellow filter imposes on the light spectrum. The spectral reflectivity data of 1269 Munsell matte color chips were used as reference data. Under the standard conditions of a D65 illuminant and the CIE 1964 10° observer, the spectra of the 1269 Munsell colors were passed through the yellow-filter effect to simulate yellow vision. Various degrees of yellow vision were modeled according to the transmittance percentage of the yellow filter. The color differences before and after the yellow-filter effect were calculated using the DE2000 formula, and color pairs were selected based on the color-difference function. These color pairs are distinguishable to normal vision, but their color difference diminishes as the degree of yellow vision increases. Assuming an 80% yellow-vision effect, 17 color pairs out of (1269×1268)/2 pairs were selected, and for a 90% yellow-vision effect, only 3 color pairs were selected. The results of this study can be utilized in a diagnosis system for gradual yellowing of vision, by producing various types of test charts with the selected color pairs.
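
The selection procedure can be sketched in a few lines of NumPy. Everything below is a placeholder-driven illustration, not the paper's implementation: the color-matching functions, D65 spectrum, Munsell reflectances, and yellow-filter curve are stand-in arrays, and the simpler CIE76 color difference is used in place of DE2000.

```python
import numpy as np

wl = np.arange(380, 781, 5)                       # wavelength grid [nm]
cmf = np.random.rand(len(wl), 3)                  # stand-in for CIE 1964 10° CMFs
d65 = np.ones(len(wl))                            # stand-in for the D65 spectrum
reflectance = np.random.rand(1269, len(wl))       # stand-in Munsell reflectances
yellow_T = np.clip((wl - 420) / 80.0, 0.0, 1.0)   # toy yellow-filter transmittance

def spectra_to_lab(refl, filt=None):
    """Integrate spectra to XYZ under D65 and convert to CIELAB."""
    stim = refl * d65 * (filt if filt is not None else 1.0)
    xyz = stim @ cmf
    k = 100.0 / (d65 @ cmf[:, 1])                 # normalize so the white has Y = 100
    xyz *= k
    white = k * (d65 @ cmf)
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[:, 1] - 16
    a = 500 * (f[:, 0] - f[:, 1])
    b = 200 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

lab_normal = spectra_to_lab(reflectance)
lab_yellow = spectra_to_lab(reflectance, 0.2 + 0.8 * yellow_T)   # toy "80%" filter effect

# Candidate test-chart pairs: clearly distinct to normal vision, nearly
# indistinguishable under yellowed vision (thresholds are illustrative).
i, j = np.triu_indices(len(reflectance), k=1)
dE_normal = np.linalg.norm(lab_normal[i] - lab_normal[j], axis=1)
dE_yellow = np.linalg.norm(lab_yellow[i] - lab_yellow[j], axis=1)
candidates = np.where((dE_normal > 20) & (dE_yellow < 2))[0]
```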

Stimulating Nearly Correct Focus Cues in Stereo Displays

  • Akeley, Kurt;Banks, Martin S.;Hoffman, David M.;Girshick, Anna R.
    • 한국정보디스플레이학회:학술대회논문집 / 한국정보디스플레이학회 2008 International Meeting on Information Display / pp.39-42 / 2008
  • We have developed new display techniques that allow presentation of nearly correct focus cues. Using these techniques, we find that stereo vision is faster and more accurate, and that viewers experience less discomfort, when focus cues are consistent with simulated depth.

완전교정과 저교정 상태에서 조절반응 변화량의 비교 (Comparison between Accommodative Response Change on the Full Vision Correction and Low Vision Correction)

  • 배성현;곽호원
    • 한국안광학회지 / Vol. 17 No. 1 / pp.75-81 / 2012
  • Purpose: To investigate changes in accommodation by measuring the accommodative response that actually occurs under full correction and under under-correction, using an open-view binocular autorefractor. Methods: Objective and subjective refraction was performed on 79 university students (58 men, 21 women) aged 20-30 years (21.14±2.00 years). The accommodative response was measured at 5.0 m, 1.0 m, 0.50 m, 0.33 m, and 0.25 m with full correction (visual acuity 1.0) and with under-correction produced by adding plus lenses until visual acuity was 0.8, 0.7, and 0.6. The changes in accommodative response at visual acuities of 1.0, 0.8, 0.7, and 0.6 were compared and analyzed as a function of fixation distance. Results: In both the right and left eyes, the accommodative response was larger under full correction (1.0) than under under-correction (0.7) (p=0.000). As the fixation distance became shorter, the change in accommodative response was larger under full correction than under under-correction (0.7). The correlation between visual acuity and accommodative response was lower at shorter fixation distances. Conclusions: Since prolonged near work can affect accommodative function, under-correction that keeps the eyes comfortable may help relieve symptoms of asthenopia.
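
For readers less familiar with the units involved, the accommodative demand at each fixation distance is simply the reciprocal of the distance in meters. A tiny worked example follows; the measured responses in it are made-up placeholders, not the study's data, and accommodative lag is shown only as one common way to summarize how far the response falls short of the demand.

```python
# Accommodative stimulus (demand) in diopters = 1 / fixation distance in meters.
distances_m = [5.0, 1.0, 0.50, 0.33, 0.25]
stimulus_D = [1.0 / d for d in distances_m]          # ~0.2, 1.0, 2.0, 3.0, 4.0 D

measured_response_D = [0.1, 0.7, 1.5, 2.3, 3.0]      # placeholder readings, not study data
lag_D = [s - r for s, r in zip(stimulus_D, measured_response_D)]

for d, s, lag in zip(distances_m, stimulus_D, lag_D):
    print(f"{d:4.2f} m: stimulus {s:.2f} D, lag {lag:.2f} D")
```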

GPU를 이용한 고속 영상 보간법 개발 (Development of high-speed image interpolation method using CUDA)

  • 최학남;박은수;김준철;정용한;김학일
    • 대한전기학회:학술대회논문집 / 대한전기학회 2008 Conference Proceedings, Information and Control Section / pp.300-301 / 2008
  • This paper proposes a method for developing high-speed image interpolation using a GPU. GPUs are normally used for graphics computation, but GPGPU has recently been attracting attention; in particular, CUDA, released by NVIDIA, makes the GPU easy to access and program, so GPUs are now used in many fields. In this paper, several interpolation algorithms were implemented with CUDA to evaluate its performance. Comparing the CPU implementations with the CUDA implementations, the pure processing time excluding memory allocation and transfer was far better on the GPU; when memory allocation and transfer were included, the GPU was actually counterproductive for small images, but it still showed good performance for large images.
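
As a point of reference for what such kernels compute, here is a plain NumPy (CPU) version of bilinear interpolation, not the authors' CUDA code: every output pixel is computed independently from four neighbors, which is exactly why the work maps naturally onto one CUDA thread per output pixel, and why host-device memory transfer, rather than arithmetic, dominates for small images.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """CPU reference for bilinear interpolation of a grayscale image."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)              # source row for each output row
    xs = np.linspace(0, in_w - 1, out_w)              # source column for each output column
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                           # vertical blend weights
    wx = (xs - x0)[None, :]                           # horizontal blend weights
    img = img.astype(np.float32)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

small = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
large = bilinear_resize(small, 480, 640)              # 2x upscaling
```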

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology / Vol. 20 No. 4 / pp.437-446 / 2006
  • Laser vision inspection systems are becoming popular for automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation is applied to the inspection data obtained from the laser vision system based on the parameters of the 3D model. The present laser vision system is used for dimensional inspection of the car chassis sub-frame and lower-arm assembly module. An algorithm based on simplex search techniques is used for analyzing the compensated inspection data. The details of the 3D model, the parameters used for compensation, and the measurement data obtained from the system are presented in this paper, together with the details of the search algorithm used to analyze the measurement data and the results obtained. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, lowering the risk of rejecting good parts.
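
A hedged sketch of the simplex-search style of analysis mentioned above, using SciPy's Nelder-Mead implementation: a nominal hole center and radius are fitted to compensated scan points by minimizing the radial misfit. The synthetic points below stand in for the actual chassis inspection data.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "inspection" points on a circular feature, with measurement noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
true_center, true_r = np.array([12.5, -3.0]), 8.0
points = true_center + true_r * np.column_stack([np.cos(theta), np.sin(theta)])
points += rng.normal(scale=0.05, size=points.shape)

def radial_misfit(params):
    cx, cy, r = params
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
    return np.sum((d - r) ** 2)

# Nelder-Mead is a derivative-free simplex search.
fit = minimize(radial_misfit, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
cx, cy, r = fit.x
print(f"estimated center ({cx:.3f}, {cy:.3f}), radius {r:.3f}")
```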

A Platform-Based SoC Design for Real-Time Stereo Vision

  • Yi, Jong-Su;Park, Jae-Hwa;Kim, Jun-Seong
    • JSTS:Journal of Semiconductor Technology and Science / Vol. 12 No. 2 / pp.212-218 / 2012
  • A stereo vision system is able to build three-dimensional maps of its environment. It can provide much more complete information than 2D image-based vision, but it has to process at least that much more data. In the past decade, real-time stereo has become a reality. Some solutions are based on reconfigurable hardware and others rely on specialized hardware, but they are designed for their own specific applications and their functionality is difficult to extend. This paper describes a vision system based on a System on a Chip (SoC) platform. A real-time stereo image correlator is implemented using the Sum of Absolute Differences (SAD) algorithm and is integrated into the vision system using the AMBA bus protocol. Since the system is designed on a pre-verified platform, its functionality can be easily extended, increasing design productivity. Simulation results show that the vision system is suitable for various real-time applications.
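
A software sketch of SAD block matching, the algorithm the correlator implements in hardware (the window size, disparity range, and plain Python loops are illustrative only): for each pixel of the rectified left image, a window is slid along the same row of the right image and the disparity with the smallest sum of absolute differences is kept.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    """Brute-force SAD block matching on a rectified grayscale stereo pair."""
    h, w = left.shape
    half = win // 2
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(block - cand).sum()     # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```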

얇은 막대 배치작업을 위한 최적의 가중치 행렬을 사용한 실시간 로봇 비젼 제어기법 (Real-time Robotic Vision Control Scheme Using Optimal Weighting Matrix for Slender Bar Placement Task)

  • 장민우;김재명;장완식
    • 한국생산제조학회지 / Vol. 26 No. 1 / pp.50-58 / 2017
  • This paper proposes a real-time robotic vision control scheme that uses a weighting matrix to efficiently process the vision data obtained while the robot moves to a target. The scheme is based on a vision system model that can actively control camera parameter and robot position changes, an improvement over previous studies. The vision control algorithm consists of parameter estimation, joint angle estimation, and weighting matrix models. To demonstrate the effectiveness of the proposed control scheme, the study compares two cases: applying and not applying the weighting matrix to the vision data obtained while the camera moves toward the target. Finally, the position accuracy of the two cases is compared by performing the slender-bar placement task experimentally.
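
The role of a weighting matrix can be illustrated with ordinary weighted least squares, which is only a simplified stand-in for the paper's estimation models: vision samples gathered during the motion are not equally reliable, so each row of the measurement equation y = J p is weighted before the parameter vector p is estimated. The Jacobian, noise levels, and weights below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 40, 6
J = rng.normal(size=(n_obs, n_par))                  # stacked measurement Jacobian (synthetic)
p_true = rng.normal(size=n_par)
noise_sigma = np.linspace(0.01, 0.2, n_obs)          # later samples assumed noisier
y = J @ p_true + rng.normal(scale=noise_sigma)

W = np.diag(1.0 / noise_sigma ** 2)                  # weighting matrix
p_wls = np.linalg.solve(J.T @ W @ J, J.T @ W @ y)    # (J^T W J)^-1 J^T W y
p_ols = np.linalg.lstsq(J, y, rcond=None)[0]         # unweighted, for comparison

print("weighted estimate error:  ", np.linalg.norm(p_wls - p_true))
print("unweighted estimate error:", np.linalg.norm(p_ols - p_true))
```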

Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems / Vol. 17 No. 6 / pp.935-956 / 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms have been developed to overcome the problems of conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three image processing algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are measured simultaneously by the vision-based system and by a magnetostrictive displacement sensor (MDS) during laboratory shaking-table tests of a three-story steel frame model. The comparative analysis indicates that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement with all three image processing algorithms. Field experiments are also carried out on an arch bridge, measuring displacement influence lines during loading tests, to validate the effectiveness of the vision-based system.
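
A minimal sketch of how grayscale pattern matching can yield a displacement history with OpenCV (the video file name, target region, and mm-per-pixel scale are placeholders, and the paper's own GPM implementation may differ): a template cropped around a target in the first frame is located in every later frame by normalized cross-correlation, and the shift of the best match, converted by a calibrated scale factor, gives the displacement.

```python
import cv2

cap = cv2.VideoCapture("shaking_table_test.mp4")      # placeholder video file
ok, first = cap.read()
first_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

x0, y0, w, h = 600, 400, 60, 60                       # placeholder target region in frame 0
template = first_gray[y0:y0 + h, x0:x0 + w]
mm_per_pixel = 0.45                                   # from a known physical dimension in view

displacement_mm = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)             # top-left corner of the best match
    dx = (max_loc[0] - x0) * mm_per_pixel
    dy = (max_loc[1] - y0) * mm_per_pixel
    displacement_mm.append((dx, dy))                  # horizontal / vertical displacement
cap.release()
```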

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems / Vol. 16 No. 1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for a given abscissa, the ordinates of the image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into that function. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experimental results show that ranging by this method is more accurate than methods based on binocular vision systems: the mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% at 3-10 m. Compared with other methods based on monocular vision systems, this method requires no calibration before ranging and avoids the error caused by data fitting.
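
A simplified sketch of the depth-extraction idea, not the paper's exact model: a linear map from image row to the ray's vertical imaging angle is fitted from a few conjugate points with known angles, and the horizontal distance of a ground-plane object then follows from the camera height and that angle. The calibration pairs and camera height are made-up values.

```python
import math
import numpy as np

# (image row v, measured imaging angle below the horizontal, in degrees)
calib_rows = np.array([900.0, 1100.0, 1300.0, 1500.0])
calib_angles = np.array([8.0, 14.5, 21.0, 27.5])

# Assumed linear relation between ordinate and imaging angle: angle = a*v + b.
a, b = np.polyfit(calib_rows, calib_angles, 1)

camera_height_m = 1.45                                 # lens height above the ground plane

def ground_distance(row):
    angle = math.radians(a * row + b)                  # depression angle of the viewing ray
    return camera_height_m / math.tan(angle)           # horizontal distance along the ground

print(f"object imaged at row 1200 -> {ground_distance(1200.0):.2f} m away")
```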