• Title/Summary/Keyword: vision-based technology

Search Results: 1,063

Development of Vision System Model for Manipulator's Assemble task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • Jang, Wan-Shik (장완식)
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.6 no.2 / pp.10-18 / 1997
  • This paper presents the development of real-time estimation and control details for a computer vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of target points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes the known 4-axis Scorbot manipulator kinematics to accommodate unknown relative camera position and orientation. This model uses six uncertainty-of-view parameters estimated by an iterative method (a minimal sketch of such an estimation loop follows this entry). The method is tested experimentally in two ways: first, the validity of the estimation model is tested using a self-built test model; second, the practicality of the presented control method is verified by performing a 4-axis manipulator assembly task. The results show that the control scheme is precise and robust, which opens the door to a range of multi-axis robot applications such as deburring and welding.

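The iterative estimation of six view parameters described in the entry above can be illustrated with a minimal sketch. This is not the authors' implementation: the toy projection model `project()` and the Gauss-Newton update are assumptions chosen only to show the shape of such an estimation loop.

```python
import numpy as np

def project(params, target_pts):
    """Hypothetical 6-parameter view model mapping planar target points (x, y)
    to image coordinates (u, v). Illustrative only; it does not reproduce the
    paper's camera model."""
    a, b, c, d, e, f = params
    u = a * target_pts[:, 0] + b * target_pts[:, 1] + c
    v = d * target_pts[:, 0] + e * target_pts[:, 1] + f
    return np.column_stack([u, v])

def estimate_view_params(target_pts, image_pts, iters=20):
    """Refine six view parameters by Gauss-Newton with a numerical Jacobian.
    (This toy model is linear, so it converges immediately; a real camera
    model would be nonlinear and genuinely need the iterations.)"""
    params = np.zeros(6)
    eps = 1e-6
    for _ in range(iters):
        residual = (project(params, target_pts) - image_pts).ravel()
        J = np.zeros((residual.size, 6))
        for k in range(6):
            step = np.zeros(6)
            step[k] = eps
            J[:, k] = ((project(params + step, target_pts) - image_pts).ravel()
                       - residual) / eps
        params -= np.linalg.lstsq(J, residual, rcond=None)[0]
    return params
```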

Development of a Tank Crew Protection System Using Moving Object Area Detection from Vision based (비전 기반 움직임 영역 탐지를 이용한 전차 승무원 보호 시스템 개발)

  • Choi, Kwang-Mo;Jang, Dong-Sik
    • Journal of the Korea Institute of Military Science and Technology / v.8 no.2 s.21 / pp.14-21 / 2005
  • This paper describes a computer vision-based system for detecting the tank crew member's (loader's) hand, arm, head, and upper body in the danger area between the turret ceiling and the upper breech mechanism. The system warns the gunner and commander of the crushing hazard so that the operating mission can be carried out safely. A camera is mounted on the top portion of the turret ceiling; in its image, the system searches for moving objects and detects them using inter-frame image change, a Laplacian operator, and a clustering algorithm (a minimal detection sketch follows this entry). The crew is alarmed when the situation is judged dangerous for the operating mission. Experimental results show that the detection rate remains between 81 and 98 percent.
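
A rough illustration of the detection pipeline described above (inter-frame differencing, Laplacian edge emphasis, and grouping of changed pixels) is sketched below with OpenCV. It is not the authors' implementation; the thresholds and the use of connected components in place of the paper's clustering algorithm are assumptions.

```python
import cv2

def detect_motion(prev_gray, curr_gray, diff_thresh=25, min_area=100):
    """Return bounding boxes of moving regions between two grayscale frames."""
    # Inter-frame change: absolute difference, then binarize.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, moving = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Emphasize object boundaries with a Laplacian and keep only moving edges.
    edges = cv2.convertScaleAbs(cv2.Laplacian(curr_gray, cv2.CV_16S, ksize=3))
    _, edges = cv2.threshold(edges, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_and(moving, edges)

    # Group changed pixels; connected components stand in for clustering here.
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, num)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]   # (x, y, w, h) boxes
```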

Performance Analysis of DNN inference using OpenCV Built in CPU and GPU Functions (OpenCV 내장 CPU 및 GPU 함수를 이용한 DNN 추론 시간 복잡도 분석)

  • Park, Chun-Su
    • Journal of the Semiconductor & Display Technology / v.21 no.1 / pp.75-78 / 2022
  • Deep neural networks (DNNs) have become an essential data processing architecture for implementing many computer vision tasks. Recently, DNN-based algorithms have achieved much higher recognition accuracy than traditional algorithms based on shallow learning. However, training and running inference on DNNs demand far greater computational capability than everyday computer use. Moreover, as DNNs grow in size and depth, CPUs may be unsatisfactory because they process data serially by default. GPUs provide much greater speed than CPUs thanks to their parallel processing nature. In this paper, we analyze the inference time complexity of DNNs using the well-known computer vision library OpenCV. We measure and analyze inference time for three cases: CPU, GPU-Float32, and GPU-Float16 (a minimal backend-selection sketch follows this entry).
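
Switching OpenCV's DNN module between the three measured cases looks roughly like the sketch below. The backend and target constants are real OpenCV identifiers (the CUDA targets require a CUDA-enabled OpenCV build), but the model file name and the timing loop are placeholders, not the paper's benchmark code.

```python
import time
import cv2
import numpy as np

def time_inference(net, blob, runs=100):
    """Average forward-pass time in milliseconds."""
    net.setInput(blob)
    net.forward()                                   # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        net.setInput(blob)
        net.forward()
    return 1000 * (time.perf_counter() - start) / runs

net = cv2.dnn.readNet("model.onnx")                 # placeholder model file
blob = cv2.dnn.blobFromImage(np.zeros((224, 224, 3), np.uint8), 1 / 255.0, (224, 224))

# Case 1: CPU (default OpenCV backend).
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
print("CPU        :", time_inference(net, blob), "ms")

# Case 2: GPU with 32-bit floats.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
print("GPU-Float32:", time_inference(net, blob), "ms")

# Case 3: GPU with 16-bit floats.
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)
print("GPU-Float16:", time_inference(net, blob), "ms")
```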

Improvement of the Optical Characteristics of Vision System for Precision Screws Using Ray Tracing Simulation (광선추적을 이용한 정밀나사 비전검사용 광학계의 결상특성 향상)

  • Baek, Soon-Bo;Lee, Ki-Yean;Joo, Won-Jong;Park, Keun;Ra, Seung-Woo
    • Journal of the Korean Society for Precision Engineering / v.28 no.9 / pp.1094-1102 / 2011
  • Recent trends toward the miniaturization and weight reduction of portable electronic products have led to the use of subminiature components. Assembling these miniaturized components requires subminiature screws whose pitch sizes are on a micrometer scale. Producing such subminiature screws with high-precision threads requires not only a precision forming technology but also a high-precision measurement technique. In the present work, a vision inspection system is developed to measure the thread profile of a subminiature screw. Optical simulation based on a ray tracing method is used to design and analyze the optical system of the vision inspection apparatus, and through this simulation the optical performance of the developed system is optimized (a generic paraxial ray-tracing sketch follows this entry). The image processing algorithm for precision screw inspection is also discussed.
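
In its simplest paraxial form, ray tracing through an optical train is just repeated multiplication of ray-transfer (ABCD) matrices. The sketch below is a generic illustration with made-up focal lengths and spacings; it does not reproduce the authors' lens prescription or simulation tool.

```python
import numpy as np

def free_space(d):
    """ABCD matrix for propagation over a distance d (mm)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """ABCD matrix for a thin lens with focal length f (mm)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical two-lens imaging path, applied right to left:
# object -> 30 mm -> lens (f=25) -> 40 mm -> lens (f=50) -> 60 mm -> image plane.
system = free_space(60.0) @ thin_lens(50.0) @ free_space(40.0) @ thin_lens(25.0) @ free_space(30.0)

# Trace a paraxial ray given its starting height (mm) and angle (rad).
height, angle = system @ np.array([0.5, 0.01])
print(f"ray at image plane: height={height:.3f} mm, angle={angle:.4f} rad")
```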

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology / v.30 no.2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional training over that range. If the system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects the target object hidden in near-dark or over-bright regions. Short- and long-exposure images from this mode are then synthesized to obtain accurate object feature information, yielding image information with a wide dynamic range (a minimal exposure-fusion sketch follows this entry). Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition over a light intensity range of up to 96 dB.
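
Fusing short- and long-exposure frames into a single wide-dynamic-range image can be illustrated with OpenCV's Mertens exposure fusion, shown below. This is only a stand-in for whatever synthesis the authors implemented, and the image file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder frames of the same scene: a short exposure that preserves bright
# regions and a long exposure that preserves dark regions.
short_exp = cv2.imread("short_exposure.png")
long_exp = cv2.imread("long_exposure.png")

# Mertens fusion weights each pixel by contrast, saturation, and well-exposedness,
# producing one frame without needing the camera response curve.
fused = cv2.createMergeMertens().process([short_exp, long_exp])   # float in [0, 1]
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)

cv2.imwrite("fused.png", fused_8bit)
```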

A computer vision-based approach for crack detection in ultra high performance concrete beams

  • Roya Solhmirzaei;Hadi Salehi;Venkatesh Kodur
    • Computers and Concrete / v.33 no.4 / pp.341-348 / 2024
  • Ultra-high-performance concrete (UHPC) has received remarkable attention in civil infrastructure due to its unique mechanical characteristics and durability. UHPC is increasingly dominant in essential structural elements, yet its unique properties pose challenges for traditional inspection methods, as damage may not always manifest visibly on the surface. Robust inspection techniques for detecting cracks in UHPC members have therefore become imperative, since traditional methods often fall short of providing comprehensive and timely evaluations. In the era of artificial intelligence, computer vision has gained considerable interest as a powerful tool to enhance infrastructure condition assessment with image and video data collected from sensors, cameras, and unmanned aerial vehicles. This paper presents a computer vision-based approach employing deep learning to detect cracks in UHPC beams, with the aim of addressing the inherent limitations of traditional inspection methods. The work leverages computer vision to discern intricate patterns and anomalies; in particular, a convolutional neural network architecture employing transfer learning is adopted to identify the presence of cracks in the beams (a minimal transfer-learning sketch follows this entry). The proposed approach is evaluated with image data collected from full-scale experiments on UHPC beams subjected to flexural and shear loadings. The results indicate the applicability of computer vision and deep learning as intelligent methods to detect major and minor cracks and to recognize various damage mechanisms in UHPC members with better efficiency than conventional monitoring methods. Findings from this work pave the way for autonomous infrastructure health monitoring and condition assessment, ensuring early detection in response to evolving structural challenges. By leveraging computer vision, this paper helps usher in a new era of effectiveness in autonomous crack detection, enhancing the resilience and sustainability of UHPC civil infrastructure.
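
A convolutional network with transfer learning of the kind described above can be sketched as follows. This is a generic illustration, not the authors' architecture: it assumes a torchvision ResNet-18 backbone, a two-class (crack / no crack) head, and a hypothetical `ImageFolder` dataset of beam-surface patches.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Pretrained backbone; freeze it and train only the replaced classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)        # crack / no-crack

# Hypothetical dataset of labeled beam-surface patches in ImageFolder layout.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = DataLoader(datasets.ImageFolder("uhpc_patches/train", tf),
                    batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```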

Ultrasonic and Vision Data Fusion for Object Recognition (초음파센서와 시각센서의 융합을 이용한 물체 인식에 관한 연구)

  • Ko, Joong-Hyup;Kim, Wan-Ju;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 1992.07a / pp.417-421 / 1992
  • Ultrasonic and vision data need to be fused for efficient object recognition, especially in mobile robot navigation. In the proposed approach, the whole ultrasonic echo signal is utilized, and data fusion is performed based on each sensor's characteristics. The experimental results show the approach to be effective.


Vision-Based Mobile Robot Navigation by Robust Path Line Tracking (시각을 이용한 이동 로봇의 강건한 경로선 추종 주행)

  • Son, Min-Hyuk;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.20 no.3 / pp.178-186 / 2011
  • Line tracking is a well-defined method of mobile robot navigation. It is simple in concept, technically easy to implement, and already employed at many industrial sites. Among the different line tracking methods, magnetic sensing is widely used in practice. By comparison, vision-based tracking is less popular, mainly because of its sensitivity to surrounding conditions such as brightness and floor characteristics, even though vision is the most powerful robotic sensing capability. In this paper, a vision-based robust path line detection technique is proposed for the navigation of a mobile robot under uncontrolled surrounding conditions. The proposed technique has four processing steps: color space transformation, pixel-level line sensing, block-level line sensing, and robot navigation control. It uses hue and saturation color values in line sensing so as to be insensitive to brightness variation (a minimal sketch of this color-based line sensing follows this entry). Block-level line finding not only makes the technique immune to pixel-level detection errors but also simplifies robot control. The proposed technique was tested on a real mobile robot and proved effective.
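
The brightness-insensitive line sensing step can be illustrated roughly as follows: convert to HSV, threshold on hue and saturation only, and then summarize the line position block by block. The hue range (a hypothetical red path line), the saturation threshold, and the number of blocks are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def sense_line(frame_bgr, hue_lo=0, hue_hi=10, sat_lo=80, blocks=8):
    """Return one horizontal line position (or None) per image block, top to bottom."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Pixel-level sensing: threshold hue and saturation only, ignoring the
    # value channel so that brightness changes have little effect.
    mask = cv2.inRange(hsv, (hue_lo, sat_lo, 0), (hue_hi, 255, 255))

    # Block-level sensing: split the mask into horizontal bands and take the
    # centroid column of line pixels in each band.
    centers = []
    for band in np.array_split(mask, blocks, axis=0):
        cols = np.nonzero(band)[1]
        centers.append(float(cols.mean()) if cols.size else None)
    return centers
```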

An Application of Computer Vision System for the Determination of Object Position in the Plane (평면상에 있는 물체 위치 결정을 위한 컴퓨터 비젼 시스템의 응용)

  • Jang, Wan-Shik (장완식)
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.7 no.2 / pp.62-68 / 1998
  • This paper presents an application of computer vision for determining the position of an unknown object in the plane. The presented method estimates the six view parameters representing the relationship between image plane coordinates and real physical coordinates; estimating these six parameters is indispensable for transforming 2-dimensional camera coordinates into 3-dimensional spatial coordinates. The position of the unknown point is then estimated from the parameters obtained for each camera (a minimal sketch of such a plane mapping follows this entry). The suitability of this scheme is demonstrated experimentally by determining the position of an unknown object in the plane.

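One way to picture a six-parameter relationship between image coordinates and planar physical coordinates is as a 2-D affine mapping fitted by least squares, sketched below. This is only an illustrative interpretation; the calibration points and the camera model used in the paper are assumptions.

```python
import numpy as np

def fit_view_params(image_pts, plane_pts):
    """Fit x = a*u + b*v + c and y = d*u + e*v + f by least squares.
    image_pts: Nx2 pixel coordinates; plane_pts: Nx2 physical coordinates (mm)."""
    u, v = image_pts[:, 0], image_pts[:, 1]
    A = np.column_stack([u, v, np.ones_like(u)])
    (a, b, c), *_ = np.linalg.lstsq(A, plane_pts[:, 0], rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, plane_pts[:, 1], rcond=None)
    return np.array([a, b, c, d, e, f])

def image_to_plane(params, uv):
    """Map one image point (u, v) to physical plane coordinates (x, y)."""
    a, b, c, d, e, f = params
    u, v = uv
    return np.array([a * u + b * v + c, d * u + e * v + f])

# Synthetic calibration correspondences (assumed, for illustration only).
image_pts = np.array([[100, 120], [400, 110], [390, 300], [110, 310]], float)
plane_pts = np.array([[0, 0], [150, 0], [150, 100], [0, 100]], float)
params = fit_view_params(image_pts, plane_pts)
print(image_to_plane(params, (250, 210)))   # estimated (x, y) of an unknown point
```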

Vision chip for edge detection with a function of pixel FPN reduction (픽셀의 고정 패턴 잡음을 감소시킨 윤곽 검출용 시각칩)

  • Suh, Sung-Ho;Kim, Jung-Hwan;Kong, Jae-Sung;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.14 no.3 / pp.191-197 / 2005
  • When fabricating a vision chip, the noise problem must be considered, in particular the fixed pattern noise (FPN) caused by process variation. In this paper, we propose an edge-detection circuit modeled on the biological retina that uses an offset-free column readout circuit to reduce the FPN arising in the photodetector. The offset-free column readout circuit consists of one source follower, one capacitor, and five transmission gates; as a result, it is simpler and smaller than a general correlated double sampling (CDS) circuit. A vision chip for edge detection has been designed and fabricated using 0.35 µm 2-poly 4-metal CMOS technology, and its output characteristics have been investigated.