• Title/Summary/Keyword: camera vision


Development of a Vision-based Lane Detection System with Consideration of Sensor Configuration Aspects (센서 구성을 고려한 비전 기반 차선 감지 시스템 개발)

  • Park Jaehak;Hong Daegun;Huh Kunsoo;Park Jahnghyon;Cho Dongil
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.97-104 / 2005
  • Vision-based lane sensing systems require accurate and robust sensing performance in lane detection. In addition, there exists a trade-off between computational burden and processor cost, which should be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects taken into account. An inverse perspective mapping method is formulated based on the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed in a robust manner. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated by utilizing the characteristics of standard highways. Based on the sensor configuration, it is shown that an appropriate sensing region in the camera image coordinates can be determined. The proposed system is implemented on a passenger car and verified experimentally.
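
The paper formulates inverse perspective mapping from stereo correspondence; as a rough illustration of the general idea, the monocular sketch below warps a road image into a bird's-eye view with OpenCV. The source trapezoid coordinates are illustrative placeholders, not values from the paper, and would normally come from camera calibration.

```python
# Minimal inverse perspective mapping (IPM) sketch using OpenCV.
import cv2
import numpy as np

def inverse_perspective_map(image, src_pts, dst_size=(400, 600)):
    """Warp a road image into a top-down (bird's-eye) view.

    src_pts: four image points ordered bottom-left, bottom-right,
    top-right, top-left, outlining the road region to rectify.
    """
    w, h = dst_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, M, (w, h))

# Example usage with a hypothetical trapezoid covering the lane region:
# img = cv2.imread("road.png")
# src = [(220, 720), (1100, 720), (760, 450), (520, 450)]
# top_down = inverse_perspective_map(img, src)
```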

Application of Deep Learning Algorithm for Detecting Construction Workers Wearing Safety Helmet Using Computer Vision (건설현장 근로자의 안전모 착용 여부 검출을 위한 컴퓨터 비전 기반 딥러닝 알고리즘의 적용)

  • Kim, Myung Ho;Shin, Sung Woo;Suh, Yong Yoon
    • Journal of the Korean Society of Safety / v.34 no.6 / pp.29-37 / 2019
  • Since construction sites are exposed to outdoor environments, working conditions are significantly dangerous. Thus, wearing personal protective equipment such as a safety helmet is very important for worker safety. However, construction workers often take the helmet off because it is inconvenient and uncomfortable, and as a result a small mistake may lead to a serious accident. Checking whether safety helmets are worn is therefore an important task for safety managers in the field. However, due to limited time and manpower, this check cannot be performed for every individual worker spread over a large construction site. Therefore, if an automatic checking system were available, field safety management could be performed more effectively and efficiently. In this study, the applicability of deep-learning-based computer vision technology is investigated for automatic checking of safety helmet use on construction sites. The Faster R-CNN deep learning algorithm for object detection and classification is employed to develop the automatic checking model. Digital camera images captured at a real construction site are used to validate the proposed model. Based on the results, it is concluded that the proposed model may be used effectively for automatic checking of safety helmet use on construction sites.
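
For readers who want a starting point, the sketch below shows how an off-the-shelf Faster R-CNN from torchvision can be repurposed for this kind of detection task. The three-class label scheme (background, helmet, no-helmet) is an assumption; the paper does not publish its training setup here.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumed label schema: background + helmet + no-helmet.
num_classes = 3

# Start from a COCO-pretrained detector and swap the box head
# so it predicts our classes (it would then need fine-tuning).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 480, 640)]  # one RGB image scaled to [0, 1]
    detections = model(dummy)[0]       # dict with boxes, labels, scores
```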

A Vision System for Traffic Sign Recognition (교통표지판 인식을 위한 비젼시스템)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.471-476 / 2004
  • This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidates for traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to its predicted position to capture the candidate at a large size in the image. The recognition algorithm is designed by intensively using built-in functions of an off-the-shelf image processing board to realize both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system.
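
A minimal sketch of the candidate-detection stage, assuming red circular signs and using plain OpenCV rather than the paper's image processing board; all thresholds are illustrative.

```python
import cv2
import numpy as np

def find_sign_candidates(bgr, min_area=200):
    """Detect red, roughly circular regions as traffic-sign candidates."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges
    # (the threshold values here are illustrative, not from the paper).
    m1 = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    m2 = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    mask = cv2.bitwise_or(m1, m2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity > 0.6:  # keep only roughly circular blobs
            candidates.append(cv2.boundingRect(c))
    return candidates
```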

A Study on the Image Optimization for Digital Vision Measurement (디지털 영상 계측을 위한 이미지 최적화 연구)

  • Kim, Kwang-Yeom;Yoon, Hyo-Kwan;Kim, Chang-Yong;Yim, Sung-Bin;Choi, Chang-Ho;Lee, Seung-Do
    • Tunnel and Underground Space / v.20 no.6 / pp.421-433 / 2010
  • The digital images used for digital vision measurement, such as digital face mapping and photogrammetric monitoring in construction, can be influenced by various conditions such as the type of light source, illumination intensity, camera setup, and so on. Because it is very difficult to assess the rock mass from digital images acquired under different circumstances, tests and analyses are carried out to modify the images so that they are suitable and consistent for digital image optimization. As a result, recommended conditions for the acquisition of optimized digital images are suggested.
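
The paper's recommendations concern image acquisition rather than post-processing, but a common software-side step toward consistent images across lighting conditions is local contrast normalization. The CLAHE sketch below is a generic baseline, not the paper's procedure.

```python
import cv2

def normalize_illumination(bgr, clip=2.0, tiles=(8, 8)):
    """Equalize luminance with CLAHE so images taken under different
    lighting become more comparable. Parameters are generic defaults,
    not the paper's recommended settings."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    l_eq = clahe.apply(l)  # equalize only the lightness channel
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```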

Modeling of vision based robot formation control using fuzzy logic controller and extended Kalman filter

  • Rusdinar, Angga;Kim, Sung-Shin
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.3 / pp.238-244 / 2012
  • A model of a vision-based robot formation control system using a fuzzy logic controller and an extended Kalman filter is presented in this paper. The main problems affecting formation control with fuzzy logic controllers and vision-based robots are: a robot's position in the formation needs to be maintained; the membership functions must be designed so that the fuzzy controller is able to perform the formation control; and noise from the camera processing changes the position of the reference view. To handle these problems, we propose a fuzzy logic controller equipped with a dynamic output membership function that controls the speed of the robot wheels to maintain position in the formation. The output membership function changes over time based on the change in input from time t-1 to t. The noise appearing in image processing, which changes the virtual target point positions, is handled by an extended Kalman filter. The virtual target positions are established in order to define the formations and can be changed at any time in accordance with the desired formation. These algorithms have been validated through simulation. The simulations confirm that the follower robots reach their target points in a short time and are able to maintain their positions in the formation even though noise shifts the target point positions.
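
As a sketch of the noise-handling step, the filter below smooths a noisy virtual target point under a constant-velocity motion model. With a linear model the extended Kalman filter reduces to this linear form; all noise covariances are illustrative, not the paper's values.

```python
import numpy as np

class TargetPointFilter:
    """Constant-velocity Kalman filter smoothing a noisy (x, y)
    virtual target point measured from the camera."""

    def __init__(self, dt=0.1, q=1e-3, r=1e-1):
        self.x = np.zeros(4)                       # state: [px, py, vx, vy]
        self.P = np.eye(4)                         # state covariance
        self.F = np.eye(4)                         # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                     # process noise (assumed)
        self.R = r * np.eye(2)                     # measurement noise (assumed)

    def step(self, z):
        # Predict forward one time step.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured target point z = (x, y).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # filtered position
```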

Odor Source Tracking of Mobile Robot with Vision and Odor Sensors (비전과 후각 센서를 이용한 이동로봇의 냄새 발생지 추적)

  • Ji, Dong-Min;Lee, Jeong-Jun;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.6 / pp.698-703 / 2006
  • This paper proposes an approach to searching for an odor source using an autonomous mobile robot equipped with vision and odor sensors. The robot initially navigates around a specific area with its vision system until it finds an object in the camera image. The robot approaches the object found in the field of view and checks with its odor sensors whether the object is releasing an odor. If so, the odor is classified and localized with a classification algorithm based on a neural network. AMOR (Autonomous Mobile Olfactory Robot) was built and used for the experiments. Experimental results on the classification and localization of odor sources show the validity of the proposed algorithm.
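
The search behavior can be pictured as a small state machine alternating between vision-driven exploration and olfactory verification. The `robot` interface below is entirely hypothetical; its method names are placeholders for whatever the AMOR platform actually exposes.

```python
from enum import Enum, auto

class Mode(Enum):
    EXPLORE = auto()   # wander and watch the camera
    APPROACH = auto()  # drive toward a detected object
    SNIFF = auto()     # verify and classify with odor sensors

def control_step(mode, robot):
    """One iteration of a vision-then-olfaction search loop
    (robot API is assumed, not from the paper)."""
    if mode is Mode.EXPLORE:
        robot.wander()
        if robot.camera_sees_object():
            return Mode.APPROACH
    elif mode is Mode.APPROACH:
        robot.move_toward_object()
        if robot.at_object():
            return Mode.SNIFF
    elif mode is Mode.SNIFF:
        reading = robot.read_odor_sensors()
        if max(reading) > robot.odor_threshold:
            # Neural-network classifier assigns an odor class.
            label = robot.classify_odor(reading)
            robot.report_source(label)
        return Mode.EXPLORE
    return mode
```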

Automatic detection system for surface defects of home appliances based on machine vision (머신비전 기반의 가전제품 표면결함 자동검출 시스템)

  • Lee, HyunJun;Jeong, HeeJa;Lee, JangGoon;Kim, NamHo
    • Smart Media Journal / v.11 no.9 / pp.47-55 / 2022
  • Quality control is an important factor in the smart factory manufacturing process. Currently, quality inspection of home appliance parts produced by the molding process is mostly performed with the naked eye of the operator, resulting in a high inspection error rate. To improve quality competitiveness, an automatic defect detection system was designed and implemented. The proposed system acquires an image by photographing an object with a high-performance scan camera at a specific location and detects defective products caused by scratches, dents, and foreign substances according to a vision inspection algorithm. In this study, a depth-based branch decision (DBD) algorithm was developed to increase the recognition rate of defects caused by scratches, and the accuracy was improved.
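
The paper's depth-based branch decision (DBD) algorithm is not described in enough detail here to reproduce, so the sketch below shows only a generic baseline for surface-defect detection: subtract a heavily blurred background estimate so that local scratches and dents stand out.

```python
import cv2

def detect_surface_defects(gray, blur_ksize=31, thresh=25, min_area=20):
    """Generic scratch/dent detector on a grayscale surface image.
    This is a standard baseline, not the paper's DBD algorithm."""
    # Estimate the defect-free background with a large median blur.
    background = cv2.medianBlur(gray, blur_ksize)
    # Local deviations from the background indicate candidate defects.
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```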

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

• G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is concatenated with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method can obtain an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data. The method is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
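
A toy version of the early-fusion idea in PyTorch: the sparse LiDAR depth map is upsampled to the image resolution and stacked with RGB as a four-channel input. Layer sizes and the class count are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGBDepthFusionNet(nn.Module):
    """Toy early-fusion classifier for RGB + upsampled LiDAR depth."""

    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, rgb, sparse_depth):
        # Upsample the low-resolution depth map to the RGB resolution.
        depth = F.interpolate(sparse_depth, size=rgb.shape[-2:],
                              mode="bilinear", align_corners=False)
        x = torch.cat([rgb, depth], dim=1)  # early fusion: 3 + 1 channels
        return self.classifier(self.features(x).flatten(1))

# Example with dummy tensors:
# rgb = torch.rand(1, 3, 224, 224); depth = torch.rand(1, 1, 56, 56)
# logits = RGBDepthFusionNet()(rgb, depth)
```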

Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts (자동차 차체부품 CO2용접설비 전수검사용 비전시스템 개발)

  • Ju-Young Kim;Min-Kyu Kim
    • Journal of Sensor Science and Technology / v.33 no.3 / pp.179-184 / 2024
  • In the car industry, welding is a fundamental joining technique used for components such as steel, molds, and automobile parts. However, accurate inspection is required to test the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained using a 2D vision camera embedded in a lighting system, from which a portion of the bead is successfully extracted after image processing. In this process, the soot removal algorithm, which adopts adaptive local gamma correction and gray color coordinates, plays an important role in accurate weld bead detection. Using this automatic recognition system, geometric parameters of the weld bead such as its length, width, angle, and defect size can also be determined. Finally, by comparing the obtained data with industrial standards, we can determine whether the weld bead is at an acceptable level.
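
As a simplified stand-in for the preprocessing step, the sketch below applies global gamma correction driven by the mean intensity; the paper's adaptive local gamma correction and soot-removal logic are more elaborate.

```python
import cv2
import numpy as np

def adaptive_gamma_correct(gray):
    """Brighten or darken a grayscale image toward mid-gray before
    bead extraction. Plain global gamma correction, not the paper's
    adaptive *local* variant."""
    mean = gray.mean() / 255.0
    # Choose gamma so the mean intensity maps to 0.5.
    gamma = np.log(0.5) / np.log(max(mean, 1e-3))
    lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0, 0, 255)
    return cv2.LUT(gray, lut.astype(np.uint8))
```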

A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

  • Ni, Yi-Qing;Wang, You-Wu;Liao, Wei-Yang;Chen, Wei-Huan
    • Smart Structures and Systems / v.24 no.6 / pp.769-781 / 2019
  • Dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring full image signals, the proposed system traces only the coordinates of the target points, thereby enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m high Canton Tower. To facilitate the verification, a GPS system is used to calibrate/verify the structural displacement responses measured by the vision-based system. Meanwhile, an accelerometer deployed in the vicinity of the target point provides frequency-domain information for comparison. Special attention has been given to understanding the influence of surrounding light on the monitoring results. For this purpose, the experimental tests are conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the vision-based system's data are all in good agreement with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model. In particular, the vision-based system placed at the bottom of the enclosed elevator shaft offers better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are readily decomposed into two parts: a quasi-static component primarily resulting from temperature variation and a dynamic component mainly caused by fluctuating wind load.
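
The closing wavelet decomposition can be sketched with PyWavelets: zeroing the detail coefficients and reconstructing yields the quasi-static trend, and the residual is the dynamic component. The wavelet family and decomposition level below are assumptions, as the abstract does not specify them.

```python
import numpy as np
import pywt

def split_displacement(signal, wavelet="db4", level=6):
    """Separate a displacement time history into a quasi-static trend
    (low-frequency approximation) and a dynamic component (detail
    bands) via a discrete wavelet transform. Wavelet and level are
    illustrative choices; the signal must be long enough for `level`."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Keep only the approximation; zero out all detail bands.
    quasi = pywt.waverec(
        [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
        wavelet)[: len(signal)]
    dynamic = np.asarray(signal) - quasi
    return quasi, dynamic
```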