• Title/Summary/Keyword: camera vision

Mixing Collaborative and Hybrid Vision Devices for Robotic Applications (로봇 응용을 위한 협력 및 결합 비전 시스템)

  • Bazin, Jean-Charles;Kim, Sung-Heum;Choi, Dong-Geol;Lee, Joon-Young;Kweon, In-So
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors in order to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of the new systems. The first is a robot photographer that pairs a conventional pan/tilt perspective camera with a fisheye lens; the second is an omnidirectional detector for complete 360-degree field-of-view surveillance. We build an original device that combines a stereo-catadioptric camera with a pan/tilt stereo-perspective camera and apply it in a real environment. Compared with previous systems, the two proposed systems maintain both high speed and high resolution through collaborative moving cameras and provide a much larger search space through the hybrid configuration. Experimental results demonstrate the effectiveness of the combined collaborative and hybrid systems.
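
A rough sketch of one piece of such a collaborative setup: steering the narrow-field pan/tilt camera toward a target first detected in the wide fisheye view. The equidistant lens model, the shared optical centre, and all names and values below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fisheye_target_to_pan_tilt(u, v, cx, cy, f_fisheye):
    """Convert a target pixel (u, v) in an equidistant fisheye image into
    pan/tilt angles (degrees) for pointing a narrow-FOV perspective camera,
    assuming both cameras share an optical centre (an idealisation)."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)            # radial distance from the image centre
    theta = r / f_fisheye           # equidistant model: r = f * theta
    phi = np.arctan2(dy, dx)        # azimuth in the image plane
    # Viewing ray in camera coordinates (z forward)
    ray = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    pan = np.degrees(np.arctan2(ray[0], ray[2]))
    tilt = np.degrees(np.arctan2(ray[1], np.hypot(ray[0], ray[2])))
    return pan, tilt

# Example: a subject detected near the edge of a 1280x960 fisheye frame
print(fisheye_target_to_pan_tilt(1100, 300, 640, 480, f_fisheye=330.0))
```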

A Study on Implementation for Real-time Lane Departure Warning System & Smart Night Vision Based on HDR Camera Platform (실시간 차선 이탈 경고 및 Smart Night Vision을 위한 HDR Camera Platform 구현에 관한 연구)

  • Park, Hwa-Beom;Park, Ge-O;Kim, Young-kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.123-126 / 2017
  • Recent advances in information and communication technology have strongly influenced the automobile market. In recent years, devices incorporating IT have been installed for the safety and convenience of the driver; however, alongside the added convenience they can also increase traffic accidents caused by driver distraction. Preventing such accidents requires safety systems of various types. In this paper, we propose a multi-function camera-based driving safety system that issues pedestrian and lane departure warnings without a radar sensor or stereo video, and we analyze the results of the lane departure warning software.
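
The paper's HDR platform and warning logic are not reproduced here; the snippet below is only a generic single-camera lane departure check using Canny edges and a probabilistic Hough transform, with all thresholds and the offset criterion chosen for illustration.

```python
import cv2
import numpy as np

def lane_departure_warning(frame, offset_thresh=0.15):
    """Flag a lane departure when the estimated lane centre drifts away from
    the image centre by more than offset_thresh (fraction of image width)."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray[h // 2:, :], 80, 160)      # road region only
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return False, None                            # no markings found
    # Approximate the lane centre as the mean x of the detected line midpoints
    mids = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]]
    offset = (float(np.mean(mids)) - w / 2.0) / w
    return abs(offset) > offset_thresh, offset
```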

The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for a Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.2 / pp.164-171 / 2010
  • We have studied port automation systems, which are demanded by the steep increase in the cost and complexity of handling freight. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera suffers from optical distortion and is sensitive to external light, weather, and shadows, but it is inexpensive and flexible enough for building a port automation system, so we applied a CCD camera to the AGV for detecting and tracking the lane. To make the tracking error stable and accurate, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented purely in software and are easy to use in various autonomous systems. Because the computational load is light, the AGV exploits the full performance of the CCD camera and leaves the CPU free for other tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and accurate lane-tracking performance.
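
A toy illustration of the virtual photo-sensor idea: sample the image intensity at a small number of evenly spaced "sensor" positions along one image row and turn the activated sensors into a lateral steering error. Parameter names, the thresholding rule, and the normalisation are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def vpsa_error(gray_image, row, num_sensors=16, threshold=200):
    """Return the lateral error of a bright lane mark relative to the centre
    of a virtual photo-sensor array placed on one image row, in [-1, 1]."""
    h, w = gray_image.shape
    xs = np.linspace(0, w - 1, num_sensors).astype(int)    # sensor x-positions
    readings = gray_image[row, xs] >= threshold             # binary sensor outputs
    if not readings.any():
        return None                                         # lane mark not seen
    centroid = xs[readings].mean()                           # centre of activated sensors
    return (centroid - (w - 1) / 2.0) / ((w - 1) / 2.0)

# Synthetic 100x200 frame with a lane mark to the right of centre
img = np.zeros((100, 200), dtype=np.uint8)
img[:, 140:150] = 255
print(vpsa_error(img, row=50))    # positive error -> correct toward the right
```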

An Automatic Weight Measurement of Rope Using Computer Vision

  • Joo, Ki-See
    • Journal of the Korea Institute of Information and Communication Engineering / v.2 no.1 / pp.141-146 / 1998
  • Recently, computer vision tasks such as part measurement and product inspection have become popular for factory automation as labor costs rise sharply. In this paper, the diameter and length of a rope are measured with a CCD camera mounted orthogonally on the ceiling. These two parameters, diameter and length, are used to estimate the weight of the rope. When the rope reaches a predetermined weight, this information is transmitted to a PLC (programmable logic controller), and the cutting machine cuts the rope on the wheel according to the information obtained from the CCD camera. To measure the diameter and length in real time, the search space for image segmentation is restricted to a predetermined area according to the calibrated camera position. Finally, to estimate the weight, a knowledge base relating diameter, length, and weight is constructed for each rope diameter. This method contributes to factory automation and reduces production cost, since operators no longer need to measure the rope weight by trial and error.
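
A minimal sketch of the final estimation step, assuming a simple per-diameter lookup standing in for the paper's knowledge base; the density values, function names, and calibration factor are hypothetical.

```python
# Hypothetical linear densities (kg per metre) indexed by rope diameter in mm,
# standing in for the knowledge base relating diameter and length to weight.
LINEAR_DENSITY_KG_PER_M = {6: 0.016, 8: 0.029, 10: 0.045, 12: 0.065}

def rope_weight_kg(diameter_px, length_px, mm_per_px):
    """Estimate rope weight from diameter and length measured in a calibrated
    overhead CCD image; mm_per_px comes from the camera calibration."""
    diameter_mm = diameter_px * mm_per_px
    length_m = length_px * mm_per_px / 1000.0
    # Use the nearest diameter entry in the lookup table
    nearest = min(LINEAR_DENSITY_KG_PER_M, key=lambda d: abs(d - diameter_mm))
    return LINEAR_DENSITY_KG_PER_M[nearest] * length_m

print(rope_weight_kg(diameter_px=40, length_px=5000, mm_per_px=0.25))
```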

Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.212-220 / 2024
  • With recent societal advancement, AI technology has made significant strides, especially in computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay commands within a vehicle based on voice input. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object recognition, to identify specific users, and a machine learning model based on spectrogram analysis to identify specific voice commands. This design aims to enhance security and convenience by preventing unauthorized access to vehicles and IoT devices by anyone other than registered users. Camera input is converted into YOLO inputs to determine whether a person is present, while voice data collected through a microphone embedded in the device or computer is converted into spectrogram data and used as input to the voice recognition model. The camera images and voice data are then processed by pre-trained models, enabling the recognition of simple commands within a limited space. This study demonstrates the feasibility of a device management system for a confined space that improves security and user convenience through a simple real-time model. Finally, our work aims to provide practical solutions for various application fields, such as smart homes and autonomous vehicles.
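
A compact sketch of the two inference paths, assuming the ultralytics YOLO package and SciPy; the weights file, the "person" class check, and the spectrogram settings are illustrative, and the command classifier itself is omitted.

```python
import numpy as np
from scipy.signal import spectrogram
from ultralytics import YOLO   # assumed YOLO implementation; any detector works

def person_present(frame_bgr, model):
    """Return True if a pre-trained YOLO model detects a 'person' in the frame."""
    for result in model(frame_bgr, verbose=False):
        for cls in result.boxes.cls.tolist():
            if model.names[int(cls)] == "person":
                return True
    return False

def voice_features(samples, sample_rate=16000):
    """Convert raw audio samples into a log-spectrogram, the time-frequency
    representation a small command classifier can be trained on."""
    _, _, sxx = spectrogram(samples, fs=sample_rate, nperseg=256)
    return np.log1p(sxx)

# model = YOLO("yolov8n.pt")   # weights file name is an assumption
```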

Implementation of Line Scan Camera based Training Equipment for Technical Training of Automated Visual Inspection System (자동 시각 검사 시스템 기술훈련을 위한 라인스캔 카메라 기반의 실습장비 제작)

  • Ko, Jin-Seok;Mu, Xiang-Bin;Rheem, Jae-Yeol
    • Journal of Practical Engineering Education / v.6 no.1 / pp.37-42 / 2014
  • Automated visual inspection (machine vision) systems for quality assurance are important factory automation equipment in manufacturing industries such as display and semiconductor production, and machine vision engineers are in high demand. However, vocational schools, colleges, and universities offer few technical training courses in machine vision technologies. In this paper, we present line scan camera based training equipment for the automated visual inspection system. The training system is built around an X-Y stage, which is widely used in the machine vision industry, and its image resolution can be varied between 10 and 30 μm. Industrial illumination, either a direct or a coaxial illuminator, can also be attached to demonstrate the effect of illumination. Trainees can therefore practice under various equipment conditions on a system that closely resembles industrial automated visual inspection equipment.
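
For context, the resolution of a line scan setup follows directly from the stage speed, the line rate, and the optics; the quick calculation below uses illustrative numbers, not values from the paper.

```python
def linescan_resolutions_um(stage_speed_mm_s, line_rate_hz,
                            fov_width_mm, pixels_per_line):
    """Scan-direction resolution is stage speed divided by line rate;
    cross-direction resolution is the field of view divided by pixel count."""
    scan_res_um = stage_speed_mm_s * 1000.0 / line_rate_hz
    cross_res_um = fov_width_mm * 1000.0 / pixels_per_line
    return scan_res_um, cross_res_um

# e.g. 20 mm/s stage speed at a 1 kHz line rate, 40 mm FOV on 2048 pixels
print(linescan_resolutions_um(20, 1000, 40, 2048))   # -> (20.0, ~19.5) um
```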

Development of the Mechanical System and Vision Algorithm for the External Appearance Test Using Vision Image Processing (비전 이미지 프로세싱을 이용한 외관검사가 가능한 기계시스템 및 비전 알고리즘 개발)

  • Kim, Young-Choon;Kim, Young-Man;Kim, Sung-Gil;Kim, Hong-Bae;Cho, Moon-Taek
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.2 / pp.202-208 / 2016
  • In this study, defects related to a C-tray were inspected using a low-cost camera. The four test items were device overlap in the tray, bending of the tray, the loaded quantity in the tray, and devices out of their pockets, and an algorithm was developed to define and detect these defect types. The developed handling system can therefore extend the use of stacked C-trays and provide quantity-verification inspection during packing. A machine operation control program that matches the inspection image acquisition to the scan speed was developed, together with a control program that handles the user GUI and processes the vision images. Overall, a mechanical system practical for image acquisition and vision data processing was designed.
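
A minimal sketch of one of the four items, the loaded-quantity check, using plain OpenCV thresholding and contour counting; the threshold strategy and area limit are placeholders to be tuned per tray type and are not the paper's algorithm.

```python
import cv2

def count_loaded_devices(tray_gray, expected, min_area=500):
    """Count device-sized blobs in a grayscale tray image and compare the
    count with the expected loaded quantity."""
    _, binary = cv2.threshold(tray_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    devices = [c for c in contours if cv2.contourArea(c) >= min_area]
    return len(devices), len(devices) == expected
```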

Machine Vision Technique for Rapid Measurement of Soybean Seed Vigor

  • Lee, Hoonsoo;Huy, Tran Quoc;Park, Eunsoo;Bae, Hyung-Jin;Baek, Insuck;Kim, Moon S.;Mo, Changyeun;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering / v.42 no.3 / pp.227-233 / 2017
  • Purpose: Morphological properties of soybean roots are important indicators of the vigor of the seed, which determines the survival rate of the seedlings grown. The current vigor test for soybean seeds is manual measurement with the human eye. This study describes an application of a machine vision technique for rapid measurement of soybean seed vigor to replace the time-consuming and labor-intensive conventional method. Methods: A CCD camera was used to obtain color images of seeds during germination. Image processing techniques were used to obtain root segmentation. The various morphological parameters, such as primary root length, total root length, total surface area, average diameter, and branching points of roots were calculated from a root skeleton image using a customized pixel-based image processing algorithm. Results: The measurement accuracy of the machine vision system ranged from 92.6% to 98.8%, with accuracies of 96.2% for primary root length and 96.4% for total root length, compared to manual measurement. The correlation coefficient for each measurement was 0.999 with a standard error of prediction of 1.16 mm for primary root length and 0.97 mm for total root length. Conclusions: The developed machine vision system showed good performance for the morphological measurement of soybean roots. This image analysis algorithm, combined with a simple color camera, can be used as an alternative to the conventional seed vigor test method.
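
A rough version of the skeleton-based length measurement, assuming scikit-image for skeletonisation; primary-root extraction and branching-point analysis are omitted, and the step-counting rule is a simplification of the paper's pixel-based algorithm.

```python
import numpy as np
from skimage.morphology import skeletonize

def total_root_length_mm(root_mask, mm_per_px):
    """Skeletonise a binary root mask and sum pixel-to-pixel steps, counting
    diagonal neighbours as sqrt(2); mm_per_px comes from camera calibration."""
    skel = skeletonize(root_mask.astype(bool))
    h, w = skel.shape
    length_px = 0.0
    for y, x in zip(*np.nonzero(skel)):
        # Count each neighbour pair once: right, down, down-right, down-left
        for dy, dx, step in ((0, 1, 1.0), (1, 0, 1.0),
                             (1, 1, 2 ** 0.5), (1, -1, 2 ** 0.5)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and skel[ny, nx]:
                length_px += step
    return length_px * mm_per_px
```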

Development of vision system for the steel materials management in the slab line (철강 슬라브 소재 관리용 비전시스템 개발)

  • Park Sang-Gug;Lee Moon-Rak
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2006.05a / pp.809-812 / 2006
  • This paper describes a vision system developed to recognize material management characters in a slab processing line. The material management characters marked on the surface of a slab are recognized by real-time processing before the slab moves to the next hot strip line. The vision system for character recognition includes a CCD camera system that acquires the slab image, an image transmission system that sends the captured image over a long distance, and I/O devices that interface with peripheral control systems. We installed the vision system in the slab processing line and tested its durability, reliability, and recognition rate. The results confirmed that our system performs well and achieves a high recognition rate.
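
A sketch of the character-segmentation stage only, with placeholder thresholds; the actual recogniser used in the paper is not reproduced, and any OCR or classifier could consume the returned crops.

```python
import cv2

def segment_slab_characters(slab_gray, min_area=150):
    """Binarise the marked slab surface and return per-character crops
    ordered left to right, ready for a downstream classifier."""
    _, binary = cv2.threshold(slab_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    boxes.sort(key=lambda b: b[0])                 # left-to-right reading order
    return [slab_gray[y:y + h, x:x + w] for x, y, w, h in boxes]
```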

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions: Part B / v.16B no.5 / pp.347-358 / 2009
  • In this thesis, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. From this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, which enhances the discriminability of linear gesture patterns. We evaluate the proposed approach in terms of trace angle accuracy and the size of the working volume.
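
A minimal sketch of projecting a 3D trajectory onto a fitted motion plane; the least-squares plane fit via SVD is one possible method, and the example data below are synthetic.

```python
import numpy as np

def project_to_motion_plane(points_3d):
    """Fit a least-squares plane to a 3D fingertip trajectory and return the
    trajectory expressed in 2D coordinates of that plane."""
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred points: the last right-singular vector is the plane normal
    _, _, vt = np.linalg.svd(pts - centroid)
    u, v = vt[0], vt[1]                       # orthonormal in-plane axes
    return (pts - centroid) @ np.vstack([u, v]).T

# Example: a noisy circular gesture lying roughly in a plane
t = np.linspace(0, 2 * np.pi, 50)
gesture = np.c_[np.cos(t), np.sin(t), 0.3 * np.cos(t) + 0.01 * np.random.randn(50)]
print(project_to_motion_plane(gesture)[:3])
```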