• Title/Summary/Keyword: Vision Technology

Development of Self-Adaptive Meta-Heuristic Optimization Algorithm: Self-Adaptive Vision Correction Algorithm (자가 적응형 메타휴리스틱 최적화 알고리즘 개발: Self-Adaptive Vision Correction Algorithm)

  • Lee, Eui Hoon;Lee, Ho Min;Choi, Young Hwan;Kim, Joong Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.6 / pp.314-321 / 2019
  • The Self-Adaptive Vision Correction Algorithm (SAVCA) developed in this study improves usability by self-adapting four of the six parameters of the Vision Correction Algorithm (VCA) (Modulation Transfer Function Rate, Astigmatic Rate, Astigmatic Factor, and Compression Factor), while leaving Division Rate 1 and Division Rate 2 unchanged. For verification, SAVCA was applied to two-dimensional mathematical benchmark functions (Six-hump Camel Back, Easton and Fenton) and 30-dimensional mathematical benchmark functions (Schwefel, Hypersphere). It showed superior performance to other algorithms (Harmony Search, Water Cycle Algorithm, VCA, Genetic Algorithms with floating-point representation, the Shuffled Complex Evolution algorithm, and Modified Shuffled Complex Evolution). Finally, SAVCA gave the best results on an engineering problem (speed reducer design). SAVCA, which requires no complicated parameter-adjustment procedure, should be applicable in various fields.
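
As a point of reference for the two-dimensional benchmarks mentioned above, the Six-hump Camel Back function has a well-known closed form that any of the compared optimizers can be checked against. The sketch below is a generic illustration (with a crude random-search baseline), not code from the SAVCA paper.

```python
import numpy as np

def six_hump_camel_back(x1, x2):
    """Six-hump Camel Back benchmark; global minimum ~ -1.0316
    at (0.0898, -0.7126) and (-0.0898, 0.7126)."""
    return ((4.0 - 2.1 * x1**2 + x1**4 / 3.0) * x1**2
            + x1 * x2
            + (-4.0 + 4.0 * x2**2) * x2**2)

# Crude random-search baseline over the usual search domain,
# useful only as a sanity check for a meta-heuristic's result.
rng = np.random.default_rng(0)
samples = rng.uniform([-3.0, -2.0], [3.0, 2.0], size=(100_000, 2))
values = six_hump_camel_back(samples[:, 0], samples[:, 1])
best = samples[np.argmin(values)]
print(best, values.min())  # approaches (±0.0898, ∓0.7126) and -1.0316
```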

Analysis of Digital Vision Measurement Resolution by Influence Parameters (디지털 영상 계측 기술의 영향인자에 따른 정밀도 분석)

  • Kim, Kwang-Yeom;Kim, Chang-Yong;Lee, Seung-Do;Lee, Chung-In
    • Journal of the Korean Geotechnical Society / v.23 no.12 / pp.109-116 / 2007
  • This study reviews the applicability of displacement measurement using a digital vision technique based on typical photogrammetric methods. A series of experimental measurements was performed to improve the accuracy of digital vision measurement by establishing criteria for the factors that influence it. It is found that digital vision measurement tends to show higher accuracy as the image size (resolution) and the focal length become larger and the distance to the object becomes shorter. It is also observed that measurement error decreases when as many images as possible are processed from various angles. Applicability to high-resolution displacement measurement is demonstrated by applying the digital vision measurement technique developed in this study to a large-scale loading test of a concrete lining.
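
The trend reported above (better accuracy with higher resolution, longer focal length, and shorter object distance) follows directly from the pinhole-camera scale relation. The sketch below is a generic illustration of that relation with assumed sensor values, not numbers from the paper.

```python
def object_space_pixel_size(pixel_pitch_mm, distance_mm, focal_length_mm):
    """Approximate object-space size covered by one pixel (pinhole model).

    Smaller values mean finer measurement resolution, which is why larger
    focal lengths and shorter object distances improve accuracy.
    """
    return pixel_pitch_mm * distance_mm / focal_length_mm

# Assumed example values: 0.005 mm pixel pitch, 10 m distance, 50 mm lens.
print(object_space_pixel_size(0.005, 10_000.0, 50.0))  # -> 1.0 mm per pixel
```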

A computer vision-based approach for crack detection in ultra high performance concrete beams

  • Roya Solhmirzaei;Hadi Salehi;Venkatesh Kodur
    • Computers and Concrete / v.33 no.4 / pp.341-348 / 2024
  • Ultra-high-performance concrete (UHPC) has received remarkable attention in civil infrastructure due to its unique mechanical characteristics and durability. UHPC is becoming increasingly dominant in essential structural elements, yet its unique properties pose challenges for traditional inspection methods, as damage may not always manifest visibly on the surface. As such, the need for robust inspection techniques for detecting cracks in UHPC members has become imperative, since traditional methods often fall short in providing comprehensive and timely evaluations. In the era of artificial intelligence, computer vision has gained considerable interest as a powerful tool to enhance infrastructure condition assessment with image and video data collected from sensors, cameras, and unmanned aerial vehicles. This paper presents a computer vision-based approach employing deep learning to detect cracks in UHPC beams, with the aim of addressing the inherent limitations of traditional inspection methods. The work leverages computer vision to discern intricate patterns and anomalies. In particular, a convolutional neural network architecture employing transfer learning is adopted to identify the presence of cracks in the beams. The proposed approach is evaluated with image data collected from full-scale experiments conducted on UHPC beams subjected to flexural and shear loadings. The results indicate the applicability of computer vision and deep learning as intelligent methods to detect major and minor cracks and recognize various damage mechanisms in UHPC members with better efficiency than conventional monitoring methods. Findings from this work pave the way for the development of autonomous infrastructure health monitoring and condition assessment, ensuring early detection in response to evolving structural challenges. By leveraging computer vision, this paper contributes to ushering in a new era of effectiveness in autonomous crack detection, enhancing the resilience and sustainability of UHPC civil infrastructure.
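
The abstract names transfer learning with a convolutional neural network but not a specific backbone. The sketch below assumes an ImageNet-pretrained ResNet-18 fine-tuned for binary crack/no-crack classification, purely as an illustration of the general technique, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: pretrained ResNet-18, frozen feature extractor,
# new 2-class head (crack / no crack).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False              # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 2)  # trainable classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One fine-tuning step on a batch of beam-surface image patches."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```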

Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae;Fukuda, Yoshio;Shinozuka, Masanobu;Cho, Soojin;Yun, Chung-Bang
    • Smart Structures and Systems / v.3 no.3 / pp.373-384 / 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, displacement of civil infrastructure is not easy to measure: conventional sensors need a reference point, and geographic conditions such as a highway or river under a bridge can make the reference point inaccessible, so installing measuring devices becomes time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load-carrying capacities of a steel-plate girder bridge obtained from a conventional sensor and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system is developed using a master/slave configuration with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDT), in a laboratory test.
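
The abstract does not spell out how tracked pixel motion is converted to physical displacement. A common approach in vision-based systems of this kind is to scale pixel motion by a factor obtained from a target of known size, as in the hedged sketch below; it is an assumed illustration, not this paper's calibration procedure.

```python
def displacement_mm(pixel_shift, target_size_mm, target_size_px):
    """Convert a tracked pixel shift to physical displacement.

    Assumes the target plane is roughly perpendicular to the optical axis,
    so a single scaling factor (mm per pixel) applies across the target.
    """
    mm_per_pixel = target_size_mm / target_size_px
    return pixel_shift * mm_per_pixel

# Assumed example: a 100 mm-wide target imaged 250 px wide, tracked 3.2 px.
print(displacement_mm(3.2, 100.0, 250.0))  # -> 1.28 mm
```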

Text-To-Vision Player - Converting Text to Vision Based on TVML Technology -

  • Hayashi, Masaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.799-802 / 2009
  • We have been studying the next generation of video creation solutions based on TVML (TV program Making Language) technology. TVML is a well-known scripting language for computer animation, and a TVML Player interprets the script to create video content using real-time 3DCG and synthesized voices. TVML has a long history, having been proposed back in 1996 by NHK; however, for years the only available player has been the one made by NHK. We have developed a new TVML Player from scratch and named it the T2V (Text-To-Vision) Player. Because it was developed from scratch, the code is compact, light, fast, extendable, and portable. Moreover, the new T2V Player performs not only playback of TVML scripts but also Text-To-Vision conversion of input written in XML format, or even plain text, into video by using 'Text-filters' that can be added as plug-ins to the Player. We plan to release it as freeware from early 2009 in order to stimulate user-generated content and various kinds of services on the Internet and in the media industry. We believe our T2V Player could be a key technology for this upcoming movement.

Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • Journal of the Korean Society of Industry Convergence / v.22 no.4 / pp.447-457 / 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system includes a PC-based vision camera and a stripe-type laser diode. A set of robust image processing algorithms is implemented. The laser-stripe sensor can measure the profile of the welding object and obtain the seam line. Moreover, the working distance of the sensor can be changed, and the other settings are adjusted accordingly. A robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system. A simple and efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
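
The abstract does not detail the image processing algorithms. A typical first step for a stripe-type laser sensor is to locate the brightest stripe row in each image column and treat a step in that profile as the seam, as in the hedged numpy sketch below; the threshold and overall logic are assumptions, not the paper's algorithm.

```python
import numpy as np

def stripe_profile(gray_image):
    """Row index of the brightest pixel in each column (the laser stripe)."""
    return np.argmax(gray_image, axis=0)

def seam_column(profile, jump_threshold=5):
    """Column with the largest step in the stripe profile, taken as the seam.

    Returns None if no step exceeds the threshold (assumed value, in pixels).
    """
    steps = np.abs(np.diff(profile.astype(int)))
    idx = int(np.argmax(steps))
    return idx if steps[idx] >= jump_threshold else None

# Assumed usage with an 8-bit grayscale frame from the sensor camera:
# profile = stripe_profile(frame)
# seam = seam_column(profile)
```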

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology / v.20 no.4 / pp.437-446 / 2006
  • Laser vision inspection systems are becoming popular for automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation is applied to the inspection data obtained from the laser vision system based on the parameters of the 3D model. The present laser vision system is used for dimensional inspection of a car chassis subframe and a lower arm assembly module. An algorithm based on simplex search techniques is used for analyzing the compensated inspection data. The details of the 3D model, the parameters used for compensation, the measurement data obtained from the system, the search algorithm used for analyzing the data, and the results obtained are presented in this paper. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, lowering the risk of rejecting good parts.
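
The abstract mentions a simplex-search analysis of the compensated inspection data without giving its objective function. The sketch below illustrates the general idea with SciPy's Nelder-Mead simplex method fitting a 2D rigid alignment between measured and nominal points; the objective and data are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

nominal = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
measured = np.array([[0.4, 0.6], [100.5, 1.3], [100.1, 51.5], [-0.3, 50.9]])

def misfit(params):
    """Sum of squared distances after a rigid transform (tx, ty, theta)."""
    tx, ty, theta = params
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    aligned = measured @ rot.T + np.array([tx, ty])
    return float(np.sum((aligned - nominal) ** 2))

result = minimize(misfit, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(result.x)  # best-fit translation and rotation of the measured points
```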

A Platform-Based SoC Design for Real-Time Stereo Vision

  • Yi, Jong-Su;Park, Jae-Hwa;Kim, Jun-Seong
    • JSTS: Journal of Semiconductor Technology and Science / v.12 no.2 / pp.212-218 / 2012
  • A stereo vision system is able to build three-dimensional maps of its environment. It can provide much more complete information than a 2D image-based vision system, but it must process at least that much more data. In the past decade, real-time stereo has become a reality. Some solutions are based on reconfigurable hardware and others rely on specialized hardware. However, they are designed for their own specific applications, and their functionality is difficult to extend. This paper describes a vision system based on a System on a Chip (SoC) platform. A real-time stereo image correlator is implemented using the Sum of Absolute Differences (SAD) algorithm and is integrated into the vision system using the AMBA bus protocol. Since the system is designed on a pre-verified platform, its functionality can be easily extended, increasing design productivity. Simulation results show that the vision system is suitable for various real-time applications.
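
For readers unfamiliar with SAD correlation, the hedged sketch below shows the basic block-matching computation in software form. The window size and disparity range are assumed values; the paper's hardware correlator is not reproduced here.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, window=5):
    """Naive SAD block matching on rectified grayscale images (float arrays)."""
    half = window // 2
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(block - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disparity[y, x] = int(np.argmin(costs))  # lowest SAD wins
    return disparity
```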

Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems / v.17 no.6 / pp.935-956 / 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms have been developed to overcome the problems of conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three image processing algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are measured simultaneously by the vision-based system and a magnetostrictive displacement sensor (MDS) during laboratory shaking table tests of a three-story steel frame model. The comparative analysis indicates that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement with all three image processing algorithms. Field experiments are also carried out on an arch bridge to measure displacement influence lines during loading tests and validate the effectiveness of the vision-based system.
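
As an illustration of the grayscale pattern matching idea named above, the hedged sketch below tracks a target template across frames with OpenCV's normalized cross-correlation; the paper's own GPM implementation details are not given in the abstract.

```python
import cv2

def track_target(frame_gray, template_gray):
    """Return the (x, y) pixel location of the best template match in a frame."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc  # top-left corner of the matched region

# Assumed usage: subtract the first frame's location from later ones to get
# the pixel displacement history of the tracked point.
# x0, y0 = track_target(first_frame, template)
# dx, dy = (x - x0), (y - y0) for each subsequent frame
```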

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for points with the same abscissa, the ordinates of the image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into that function. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range can be derived from the depth and the vertical distance of the target object to the optical axis. Experimental results show that ranging by this method has higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m; when the distance is 3-10 m, the mean relative error is 1.71%. Compared with other methods based on monocular vision systems, this method does not need calibration before ranging and avoids the error caused by data fitting.
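
The depth extraction model itself is not reproduced in the abstract. The hedged sketch below shows the simpler textbook relation for ranging a point on a horizontal ground plane from a camera of known height and imaging angle, which conveys the same geometric idea under pinhole assumptions rather than the paper's fitted linear model.

```python
import math

def ground_plane_depth(camera_height_m, imaging_angle_deg):
    """Horizontal distance to a ground-plane point seen at a given angle
    below the horizontal (pinhole model, level camera). Assumed geometry,
    not the paper's fitted linear depth-extraction model."""
    return camera_height_m / math.tan(math.radians(imaging_angle_deg))

def lateral_offset(depth_m, pixel_offset_x, focal_length_px):
    """Perpendicular distance of the point from the optical axis."""
    return depth_m * pixel_offset_x / focal_length_px

# Assumed example: camera 1.5 m above ground, point seen 20 degrees below horizontal.
d = ground_plane_depth(1.5, 20.0)
print(d, lateral_offset(d, 150.0, 3000.0))  # depth ~4.12 m, offset ~0.21 m
```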