• Title/Summary/Keyword: Vision sensor


Wireless Sensors Module for Remote Room Environment Monitoring

  • Lee, Dae-Seok;Chung, Wan-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.449-452
    • /
    • 2005
  • For a home networking system with an air quality monitoring function, a wireless sensor module carrying several air quality sensors was developed for indoor environment monitoring. The module accommodates various kinds of sensors, such as humidity, temperature, CO2, and airborne dust sensors. The developed wireless module is convenient to install on the wall of a room or office, and the sensors in the module can be easily replaced owing to the well-designed module structure and RF connection method. To reduce the system cost, only one RF transmission block is used, transmitting the sensors' signals to an 8051 microcontroller board in a time-sharing manner. In this home networking system, various indoor environmental parameters can be monitored in real time from the RF wireless sensor module. Indoor video is transferred to a client PC or PDA from a surveillance camera installed indoors or at a desired site. A web server using an Oracle DB stores the images from the web camera and the various data from the wireless sensor module.
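The time-sharing scheme described above can be sketched as a round-robin scheduler that gives each sensor a slot on the single RF transmission block. This is a minimal illustration under assumed names (`SENSORS`, `read_sensor`, `rf_frames` are all hypothetical), not the module's actual firmware.

```python
# Hypothetical sketch: several sensor readings share one RF link in
# round-robin (time-sharing) order, as in the described module.

SENSORS = ["temperature", "humidity", "co2", "dust"]

def read_sensor(name, slot):
    """Stand-in for a real ADC read; returns a dummy fixed value."""
    return {"temperature": 23.5, "humidity": 41.0, "co2": 420.0, "dust": 12.0}[name]

def rf_frames(num_slots):
    """Produce one (slot, sensor, value) frame per time slot, cycling
    through the sensors so a single RF block serves all of them."""
    frames = []
    for slot in range(num_slots):
        name = SENSORS[slot % len(SENSORS)]
        frames.append((slot, name, read_sensor(name, slot)))
    return frames
```

With four sensors, each sensor is sampled once every four slots, which is the cost-saving trade-off of sharing one transmitter.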


Vision-Based Relative State Estimation Using the Unscented Kalman Filter

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.12 no.1
    • /
    • pp.24-36
    • /
    • 2011
  • A new approach for spacecraft absolute attitude estimation based on the unscented Kalman filter (UKF) is extended to relative attitude estimation and navigation. This approach for nonlinear systems converges faster than the approach based on the standard extended Kalman filter (EKF), even with inaccurate initial conditions, in attitude estimation and navigation problems. The filter formulation employs measurements obtained from a vision sensor to provide multiple line-of-sight vectors from the spacecraft to another spacecraft. The line-of-sight measurements are coupled with gyro measurements and dynamic models in a UKF to determine relative attitude, position, and gyro biases. A vector of generalized Rodrigues parameters is used to represent the local error-quaternion between the two spacecraft. A multiplicative quaternion-error approach is derived from the local error-quaternion, which guarantees that the quaternion unit constraint is maintained in the filter. A bounded relative motion scenario is selected to verify this extended application of the UKF. Simulation results show that the UKF is more robust than the EKF under realistic initial attitude and navigation error conditions.
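The multiplicative error representation mentioned above can be sketched in a few lines: the local error-quaternion is dq = q ⊗ q_ref⁻¹, and it is mapped to a generalized Rodrigues parameter vector p = f·dq_vec/(a + dq_w). The conventions below (quaternion as [x, y, z, w], default a = 1, f = 4) are common choices in this literature, assumed here rather than taken from the paper.

```python
import math

def quat_mul(q, p):
    """Quaternion product q ⊗ p with components ordered [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return [
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ]

def quat_conj(q):
    """Conjugate (inverse for a unit quaternion)."""
    return [-q[0], -q[1], -q[2], q[3]]

def error_grp(q, q_ref, a=1.0, f=4.0):
    """Multiplicative error-quaternion dq = q ⊗ q_ref^-1 mapped to
    generalized Rodrigues parameters p = f * dq_vec / (a + dq_w)."""
    dq = quat_mul(q, quat_conj(q_ref))
    return [f * c / (a + dq[3]) for c in dq[0:3]]
```

Because the error is formed multiplicatively and only its three-parameter projection enters the filter state, the estimated quaternion itself stays on the unit sphere.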

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.1977-1978
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion method for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for a human-robot interaction (HRI) system. For carrying out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
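The template-matching step mentioned above can be illustrated with a brute-force matcher: slide the template over the image and keep the position with the smallest sum of absolute differences (SAD). This is a toy stand-in for the paper's enhanced matcher; the function name and SAD criterion are illustrative assumptions.

```python
def match_template(image, template):
    """Brute-force template matching by sum of absolute differences (SAD).
    image and template are 2-D lists of grayscale values; returns the
    (row, col) of the best-matching top-left corner."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # accumulate the absolute pixel differences over the window
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(h) for j in range(w))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Real systems refine this with normalized correlation and coarse-to-fine search, which is presumably where the paper's "enhanced" variant comes in.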


Vision Sensor and Deep Learning-based Around View Monitoring System for Ship Berthing (비전 센서 및 딥러닝 기반 선박 접안을 위한 어라운드뷰 모니터링 시스템)

  • Kim, Hanguen;Kim, Donghoon;Park, Byeolteo;Lee, Seung-Mok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.15 no.2
    • /
    • pp.71-78
    • /
    • 2020
  • This paper proposes a vision sensor and deep learning-based around-view monitoring system for ship berthing. Berthing a ship at a port requires precise relative position and relative speed information between the mooring facility and the ship. Ships of Handysize or larger must be docked with the help of pilots and tugboats. In the case of ships handling dangerous cargo, tugboats push the ship and dock it in the port using the distance and velocity information received from the berthing aid system (BAS). However, the existing BAS is very expensive, and there is a limit on the size of the vessel that can be measured. There is also the limitation that it is difficult to measure distance and speed when there are obstacles near the port. This paper proposes a relative distance and speed estimation system that can be used as a ship berthing assist system. The proposed system is verified by comparing its performance with an existing laser-based distance and speed measurement system through field tests at an actual port.
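Once the vision system yields a range to the berth at each frame, the relative speed follows from finite differences of successive range samples. The sketch below is a minimal illustration of that step under assumed names and a simple moving-average smoother; it is not the paper's estimator.

```python
def approach_speed(distances, dt):
    """Estimate relative (approach) speed from successive range samples
    by finite differences, smoothed with a short moving average.
    Positive speed means the ship is closing on the berth."""
    diffs = [(distances[i] - distances[i + 1]) / dt
             for i in range(len(distances) - 1)]
    k = min(3, len(diffs))          # average the most recent differences
    return sum(diffs[-k:]) / k
```

In practice the raw per-frame ranges from a vision sensor are noisy, so some smoothing (here a 3-sample average, in real systems typically a Kalman filter) is needed before the speed is usable for berthing guidance.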

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.164-170
    • /
    • 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors and employing a robust filter for inner edge detection is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed filter for inner edge detection in the region of interest. The proposed algorithm was compared with an existing algorithm in terms of the lateral-offset-based warning alarm occurrence time, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates at daytime and nighttime are approximately 97% and 96%, respectively. Furthermore, the processing speed of the embedded system is approximately 12 fps.
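The lateral-offset computation at the heart of such an LDWS can be sketched simply: once the inner edges of the left and right lane markings are located on an image row, the offset is the distance between the lane center and the camera (vehicle) center, scaled to meters. All names, the half-lane width, and the warning margin below are illustrative assumptions, not the paper's calibration.

```python
def lateral_offset(left_x, right_x, image_center_x, meters_per_pixel):
    """Offset of the vehicle (camera) center from the lane center, in
    meters, given the inner-edge x positions on one image row."""
    lane_center = (left_x + right_x) / 2.0
    return (image_center_x - lane_center) * meters_per_pixel

def departure_warning(offset_m, half_lane_m=1.6, margin_m=0.3):
    """Raise a warning when the offset approaches the half lane width."""
    return abs(offset_m) > (half_lane_m - margin_m)
```

This is why robust inner-edge detection matters: an edge displaced by a few pixels shifts the computed offset directly, which in turn shifts the alarm occurrence time the paper measures.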

Development of a Vision System for the Complete Inspection of CO2 Welding Equipment of Automotive Body Parts (자동차 차체부품 CO2용접설비 전수검사용 비전시스템 개발)

  • Ju-Young Kim;Min-Kyu Kim
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.3
    • /
    • pp.179-184
    • /
    • 2024
  • In the car industry, welding is a fundamental joining technique used for components such as steel, molds, and automobile parts. However, accurate inspection is required to test the reliability of the welded components. In this study, we investigate the detection of weld beads using 2D image processing in an automatic recognition system. The sample image is obtained using a 2D vision camera embedded in a lighting system, from which a portion of the bead is successfully extracted after image processing. In this process, the soot removal algorithm plays an important role in accurate weld bead detection, adopting adaptive local gamma correction and gray color coordinates. Using this automatic recognition system, geometric parameters of the weld bead, such as its length, width, angle, and defect size, can also be determined. Finally, by comparing the obtained data with the industrial standards, we can determine whether the weld bead is at an acceptable level.
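The adaptive local gamma correction mentioned above can be sketched as follows: choose a gamma per image block from the block's mean brightness (dark blocks get gamma < 1 to brighten, bright blocks gamma > 1), then apply out = 255·(in/255)^gamma. The block size and gamma range here are illustrative assumptions, not the paper's tuned parameters.

```python
def local_gamma_correct(image, block=2, g_min=0.6, g_max=1.4):
    """Toy adaptive local gamma correction on a 2-D list of 0-255
    grayscale values; gamma is interpolated per block from its mean."""
    H, W = len(image), len(image[0])
    out = [[0.0] * W for _ in range(H)]
    for r0 in range(0, H, block):
        for c0 in range(0, W, block):
            rows = range(r0, min(r0 + block, H))
            cols = range(c0, min(c0 + block, W))
            n = len(rows) * len(cols)
            mean = sum(image[r][c] for r in rows for c in cols) / n
            # dark block -> gamma near g_min (brighten);
            # bright block -> gamma near g_max (darken)
            gamma = g_min + (g_max - g_min) * (mean / 255.0)
            for r in rows:
                for c in cols:
                    out[r][c] = 255.0 * (image[r][c] / 255.0) ** gamma
    return out
```

Lifting dark, soot-covered regions while leaving bright bead highlights alone is what lets the subsequent bead-extraction step see through the soot.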

Automation of a Teleoperated Microassembly Desktop Station Supervised by Virtual Reality

  • Antoine Ferreira;Fontaine, Jean-Guy;Shigeoki Hirai
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.1
    • /
    • pp.23-31
    • /
    • 2002
  • We propose a concept of a desktop micro-device factory for visually servoed, teleoperated microassembly assisted by a virtual reality (VR) interface. It is composed of two micromanipulators equipped with micro tools operating under a light microscope. First, a manipulation control method for making the micro object follow a planned trajectory in a pushing operation is proposed under vision-based position control. Then, we present a cooperative control strategy for the micro handling operation under vision-based force control, integrating a sensor fusion framework. A guiding system based on a virtual micro-world, exactly reconstructed from the CAD-CAM databases of the real environment under consideration, is presented for the imprecisely calibrated micro world. Finally, some experimental results of microassembly tasks performed on millimeter-sized components are provided.

Hybrid Real-time Monitoring System Using 2D Vision and 3D Action Recognition (2D 비전과 3D 동작인식을 결합한 하이브리드 실시간 모니터링 시스템)

  • Lim, Jong Heon;Sung, Man Kyu;Lee, Joon Jae
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.5
    • /
    • pp.583-598
    • /
    • 2015
  • Many assembly lines are needed to produce industrial products, such as automobiles, that are composed of many parts. A large portion of such assembly lines is still operated manually by human workers, and such manual work sometimes causes critical errors that produce defective products. Also, once assembly is completed, it is very hard to verify whether or not the product has an error. In this paper, to automatically monitor the behavior of manual human work on an assembly line, we propose a real-time hybrid monitoring system that combines a 2D vision sensor tracking technique with 3D motion recognition sensors.

A Study on Seam Tracking and Weld Defects Detecting for Automated Pipe Welding by Using Double Vision Sensors (파이프 용접에서 다중 시각센서를 이용한 용접선 추적 및 용접결함 측정에 관한 연구)

  • 송형진;이승기;강윤희;나석주
    • Journal of Welding and Joining
    • /
    • v.21 no.1
    • /
    • pp.60-65
    • /
    • 2003
  • At present, welding of most pipes with large diameters is carried out manually. Automation of the welding process is necessary for the sake of consistent weld quality and improved productivity. In this study, two vision sensors based on optical triangulation were used to obtain the information for seam tracking and weld defect detection. Using the vision sensors, noise was removed, images and 3D information were obtained, and the positions of the feature points were detected. This process provided the seam and leg position data; the magnitudes of the gap, fillet area, and leg length were calculated; and weld defects were judged according to ISO 5817. Noise in the images was removed by using the gradient values of the laser stripe's coordinates, and the various feature points were detected by an algorithm based on the iterative polygon approximation method. Since processing time is very important, all of these processes should be conducted during welding.
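The iterative polygon approximation used for feature-point detection can be sketched with the classic Douglas-Peucker scheme: keep the point farthest from the chord between the endpoints while it exceeds a tolerance, and recurse on both halves. The surviving vertices are the candidate feature points (corners of the laser stripe). This is a generic sketch of the method, not the paper's exact variant.

```python
def point_line_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    num = abs(dy * (px - ax) - dx * (py - ay))
    den = (dx * dx + dy * dy) ** 0.5
    return num / den if den else ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5

def approximate_polygon(points, tol):
    """Douglas-Peucker style polygon approximation: split at the point
    farthest from the endpoint chord while it exceeds tol."""
    if len(points) < 3:
        return points[:]
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]            # chord is close enough
    left = approximate_polygon(points[:i + 1], tol)
    right = approximate_polygon(points[i:], tol)
    return left[:-1] + right                      # merge, dropping the duplicate
```

Applied to a laser stripe crossing a fillet joint, the retained vertices land on the stripe's bends, which is exactly the information needed to locate the seam and the legs.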

Hardware Digital Color Enhancement for Color Vision Deficiencies

  • Chen, Yu-Chieh;Liao, Tai-Shan
    • ETRI Journal
    • /
    • v.33 no.1
    • /
    • pp.71-77
    • /
    • 2011
  • Up to 10% of the global population suffers from color vision deficiency (CVD) [1], especially deuteranomaly and protanomaly, conditions in which it is difficult to discriminate between red and green hues. For those who suffer from CVD, career fields are restricted and childhood education can be frustrating. There are many optical eyeglasses on the market to compensate for this disability; however, although they are attractive due to their light weight, wearing these glasses decreases visual brightness and causes problems at night. Therefore, this paper presents a supplementary device comprising a head-mounted display and an image sensor. With the aid of a digital color space adjustment technique implemented in a high-speed field-programmable gate array device, users can enjoy enhanced vision through the display without any decrease in brightness.