• Title/Abstract/Keywords: Color-based Vision System

Deep Learning-Based Defects Detection Method of Expiration Date Printed In Product Package (딥러닝 기반의 제품 포장에 인쇄된 유통기한 결함 검출 방법)

  • Lee, Jong-woon;Jeong, Seung Su;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.463-465 / 2021
  • Currently, expiration dates printed on food packages and boxes are inspected by sampling only a few products and checking them with the human eye. Such sampling inspection has the limitation that only a small number of products can be examined, so accurate camera-based inspection is required. This paper proposes a deep-learning object-recognition model, an artificial intelligence technique, for detecting defects in the expiration date printed on product packaging. Using the Faster R-CNN (region-based convolutional neural network) model, color images of the printed expiration date, together with their grayscale and binary conversions, are trained and then tested, and the detection rates are compared. The detection performance of the proposed method matched that of a conventional vision-based inspection system.

  • PDF
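
The color-to-grayscale and grayscale-to-binary conversions the abstract describes can be sketched in plain Python; the luminance weights and the fixed threshold are assumptions, since the abstract does not state which values were used:

```python
def rgb_to_gray(r, g, b):
    # ITU-R BT.601 luminance weights (assumed; the paper does not state its choice)
    return 0.299 * r + 0.587 * g + 0.114 * b

def gray_to_binary(gray, threshold=128):
    # Fixed global threshold is an assumption; Otsu's method is a common alternative
    return 255 if gray >= threshold else 0

def convert_image(rgb_pixels, threshold=128):
    """Produce the grayscale and binary variants fed to the detector."""
    gray = [rgb_to_gray(r, g, b) for (r, g, b) in rgb_pixels]
    binary = [gray_to_binary(g, threshold) for g in gray]
    return gray, binary
```

Training the same detector on all three variants and comparing detection rates, as the abstract describes, then only requires swapping the input channel.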

Internet Based Tele-operation of the Autonomous Mobile Robot (인터넷을 통한 자율이동로봇 원격 제어)

  • Sim, Kwee-Bo;Byun, Kwang-Sub
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.6 / pp.692-697 / 2003
  • Research on Internet-based tele-operation has received increasing attention over the past few years. In this paper, we implement an Internet-based tele-operating system. To transmit the robot's surroundings and control information robustly, we packetize the data, and to transmit large image data we use the JPEG compression algorithm. The central problem in Internet-based tele-operation is data transmission latency and data loss. For this specific problem, we introduce an autonomous mobile robot with a 2-layer fuzzy controller. We also implement a color detection system so that the robot can perceive objects. We verify the efficacy of the 2-layer fuzzy controller by applying it to a robot equipped with various input sensors. Because the 2-layer fuzzy controller can robustly control the robot with various inputs and outputs at low control cost, we expect it to be applied in various sectors.
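
The packet-based transmission described above can be sketched as follows; the header layout (sequence number, packet count, payload length) is an illustrative assumption, not the paper's actual format:

```python
import struct

# Network-byte-order header: sequence number, total packets, payload length
# (hypothetical layout; the paper does not specify its packet format)
HEADER = struct.Struct("!HHI")

def packetize(data: bytes, payload_size: int = 1024):
    """Split a (JPEG-compressed) byte stream into sequenced packets."""
    chunks = [data[i:i + payload_size] for i in range(0, len(data), payload_size)] or [b""]
    total = len(chunks)
    return [HEADER.pack(seq, total, len(c)) + c for seq, c in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the stream; sorting by sequence tolerates out-of-order delivery."""
    parsed = []
    for p in packets:
        seq, total, length = HEADER.unpack(p[:HEADER.size])
        parsed.append((seq, p[HEADER.size:HEADER.size + length]))
    return b"".join(c for _, c in sorted(parsed))
```

Sequencing the payload is what lets the receiver detect the latency and loss the abstract identifies as the central problem.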

Interface of Tele-Task Operation for Automated Cultivation of Watermelon in Greenhouse

  • Kim, S.C.;Hwang, H.
    • Journal of Biosystems Engineering / v.28 no.6 / pp.511-516 / 2003
  • Computer vision technology has been utilized as one of the most powerful tools for automating various agricultural operations. Though it has demonstrated successful results in many applications, the current state of the technology still falls far behind human capability, particularly in unstructured and variable task environments. In this paper, a man-machine interactive hybrid decision-making system utilizing the concept of tele-operation was proposed to overcome the limitations of computer image processing and cognitive capability. Tasks in greenhouse watermelon cultivation such as pruning, watering, pesticide application, and harvest require identification of the target object. Identifying watermelons, including their position, from a field image is very difficult because of the ambiguity among stems, leaves, shades, and fruits, especially when a watermelon is partly covered by leaves or stems. Watermelon identification from a cultivation-field image transmitted wirelessly was selected to realize the proposed concept. The system was designed so that the operator (farmer), computer, and machinery share roles, each contributing its strengths to accomplish the given tasks. The developed system was composed of an image monitoring and task control module, a wireless remote image acquisition and data transmission module, and a man-machine interface module. Once a task was selected from the task control and monitoring module, the analog color image signal of the field was captured and transmitted to the host computer wirelessly via an R.F. module. The operator communicated with the computer through a touch-screen interface, and then a sequence of algorithms to identify the location and size of the watermelon was performed based on local image processing. The system showed a practical and feasible way to automate the volatile bio-production process.

Unconstrained e-Book Control Program by Detecting Facial Characteristic Point and Tracking in Real-time (얼굴의 특이점 검출 및 실시간 추적을 이용한 e-Book 제어)

  • Kim, Hyun-Woo;Park, Joo-Yong;Lee, Jeong-Jick;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research / v.35 no.2 / pp.14-18 / 2014
  • This study concerns an e-Book program based on a human-computer interaction (HCI) system for physically handicapped persons. Background knowledge of HCI shows that a vision-based interface can replace current computer input devices by extracting a characteristic point and tracking it. We chose the between-eyes point as the characteristic point by analyzing facial images captured with a webcam. However, because of the three-dimensional structure of glasses, the between-eyes point was not suitable for tracking a person wearing glasses, so we changed the characteristic point to the bridge of the nose after first detecting the between-eyes point. With this technique, we could track head rotation in real time regardless of glasses. To test the program's usefulness, we conducted an experiment analyzing its performance in actual use. From the results of 20 subjects, we obtained a 96.5% success rate for controlling the e-Book under proper conditions.

A Remote Control of 6 d.o.f. Robot Arm Based on 2D Vision Sensor (2D 영상센서 기반 6축 로봇 팔 원격제어)

  • Hyun, Woong-Keun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.5 / pp.933-940 / 2022
  • In this paper, an algorithm was developed to recognize the 3D position of a hand through a 2D image sensor, and a system was implemented to remotely control a 6 d.o.f. robot arm using it. The system consists of a camera that acquires the hand position in 2D and a computer that recognizes the hand position and controls the robot arm accordingly. The image sensor recognizes the specific color of a glove worn on the operator's hand and outputs the recognized region as a rectangle enclosing the glove's color area. From the position and size of the detected rectangle, we compute the velocity vector of the end effector and control the robot arm. Through several experiments using the developed 6-axis robot, it was confirmed that remote control of the 6 d.o.f. robot arm was successfully performed.
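
The mapping from the detected glove rectangle to an end-effector velocity command might look like the following sketch; the gains, reference area, and image center are hypothetical values, not the paper's:

```python
def end_effector_velocity(rect_center, rect_area, image_center=(320, 240),
                          ref_area=5000.0, gain_xy=0.002, gain_z=0.5):
    """Map the detected glove rectangle to a (vx, vy, vz) command.

    Horizontal/vertical offsets from the image center drive x/y motion;
    the rectangle's area, a proxy for hand distance, drives z motion.
    All gains and the reference area are illustrative assumptions.
    """
    vx = gain_xy * (rect_center[0] - image_center[0])
    vy = gain_xy * (rect_center[1] - image_center[1])
    vz = gain_z * (rect_area / ref_area - 1.0)  # bigger box (closer hand) -> move forward
    return vx, vy, vz
```

A hand held still at the image center with the reference size thus commands zero velocity, which gives the operator a natural "neutral" pose.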

Mobile Robot Control using Hand Shape Recognition (손 모양 인식을 이용한 모바일 로봇제어)

  • Kim, Young-Rae;Kim, Eun-Yi;Chang, Jae-Sik;Park, Se-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.4 / pp.34-40 / 2008
  • This paper presents a vision-based walking-robot control system using hand shape recognition. To recognize hand shapes, the accurate hand boundary needs to be tracked in images obtained from a moving camera. For this, we use an active contour model-based tracking approach with mean shift, which reduces the dependency of the active contour model on the location of the initial curve. The proposed system is composed of four modules: a hand detector, a hand tracker, a hand shape recognizer, and a robot controller. The hand detector detects a skin-color region with a specific shape as a hand in the image. The hand is then tracked using the active contour model with mean shift, and hand shape recognition is performed using Hu moments. To assess the validity of the proposed system, we tested it on a walking robot, RCB-1. The experimental results show the effectiveness of the proposed system.
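
Hu moments are rotation-, scale-, and translation-invariant combinations of normalized central image moments. A minimal pure-Python computation of the first two invariants (the abstract does not say how many were used) is:

```python
def central_moment(img, p, q):
    """img: 2D list of 0/1 values. Returns the central moment mu_pq."""
    m00 = sum(v for row in img for v in row)
    mx = sum(x * v for y, row in enumerate(img) for x, v in enumerate(row)) / m00
    my = sum(y * v for y, row in enumerate(img) for x, v in enumerate(row)) / m00
    return sum((x - mx) ** p * (y - my) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    mu00 = central_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because central moments are taken about the centroid, the invariants are unchanged when the hand shifts in the frame, which is what makes them usable for shape recognition under tracking.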

Traffic Object Tracking Based on an Adaptive Fusion Framework for Discriminative Attributes (차별적인 영상특징들에 적응 가능한 융합구조에 의한 도로상의 물체추적)

  • Kim Sam-Yong;Oh Se-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.5 s.311 / pp.1-9 / 2006
  • Because most vision-based object tracking applications operate satisfactorily only in very constrained environments with simplifying assumptions or specific visual attributes, such approaches cannot track target objects in the highly variable, unstructured, and dynamic environments of a traffic scene. An adaptive fusion framework is essential to take advantage of the richness of visual information such as color, appearance, and shape, especially in cluttered and dynamically changing scenes with partial occlusion[1]. This paper develops a particle filter-based adaptive fusion framework and improves its robustness and adaptability by adding a new distinctive visual attribute, an image feature descriptor using SIFT (Scale Invariant Feature Transform)[2], together with an automatic learning scheme for the SIFT feature library under viewpoint, illumination, and background changes. The proposed algorithm is applied to tracking various traffic objects such as vehicles, pedestrians, and bikes in a driver assistance system, an important component of the Intelligent Transportation System.
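
One simple instance of cue fusion in a particle filter is to weight each particle by the product of its per-cue likelihoods (color, shape, SIFT match score, and so on). The paper's adaptive weighting is more elaborate, so this is only a sketch of the basic idea:

```python
def fuse_weights(particles, cue_likelihoods):
    """Weight each particle by the product of its per-cue likelihoods,
    then renormalize. `cue_likelihoods` is a list of callables, one per
    visual cue, each mapping a particle state to a likelihood >= 0."""
    weights = []
    for p in particles:
        w = 1.0
        for cue in cue_likelihoods:
            w *= cue(p)
        weights.append(w)
    total = sum(weights)
    if total == 0.0:  # every cue vetoed every particle
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]
```

The product rule assumes the cues are conditionally independent; adapting the per-cue influence, as the paper does, relaxes that assumption when one cue becomes unreliable.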

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate. In particular, the recently increasing demand for mobile augmented reality requires efficient interaction technologies between the augmented virtual object and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand is used as the interface for the marker-less mobile augmented reality system. To implement the marker-less mobile augmented system within the limited resources of a mobile device compared with a desktop environment, we proposed a method to extract an optimal hand region, which plays the role of the marker, and to augment the object in real time using the camera attached to the mobile device. The optimal hand region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the Rotating Calipers algorithm; the extracted rectangle takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
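
Skin detection with a YCbCr model amounts to converting each pixel and thresholding the chrominance channels; the Cb/Cr ranges below are commonly cited defaults, not the paper's values:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin by thresholding Cb and Cr.
    The ranges are frequently used literature defaults (assumed here)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Thresholding Cb/Cr rather than RGB is what makes the model comparatively robust to brightness changes, since luminance is isolated in the Y channel.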

Active Object Tracking System based on Stereo Vision (스테레오 비젼 기반의 능동형 물체 추적 시스템)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.4 / pp.159-166 / 2016
  • In this paper, an active object tracking system based on a pan/tilt-embedded stereo camera system is suggested and implemented. In the proposed system, the face area of a target is detected from the input stereo image using a YCbCr color model and a phase-type correlation scheme; then, using these data together with the geometric information of the tracking system, the distance and 3D information of the target are effectively extracted in real time. Based on these extracted data, the pan/tilt-embedded stereo camera system is adaptively controlled, so the proposed system can track the target under various circumstances. From experiments using 480 frames of the test input stereo image, the standard deviation between the measured and estimated target height and the error ratio between the measured and computed 3D coordinate values of the target were kept as low as 1.03 and 1.18% on average, respectively. These experimental results suggest the possibility of implementing a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme.
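
Extracting the target's distance and 3D position from a rectified stereo pair follows the standard disparity relation Z = fB/d; the sketch below assumes a known focal length in pixels, baseline, and principal point, which stand in for the paper's calibrated system geometry:

```python
def stereo_depth(xl, xr, focal_px, baseline_m):
    """Depth from the horizontal disparity of a rectified stereo pair:
    Z = f * B / (xl - xr)."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity

def target_3d(xl, y, xr, focal_px, baseline_m, cx, cy):
    """Back-project the left-image pixel (xl, y) to camera coordinates (X, Y, Z)."""
    z = stereo_depth(xl, xr, focal_px, baseline_m)
    x3 = (xl - cx) * z / focal_px
    y3 = (y - cy) * z / focal_px
    return x3, y3, z
```

The recovered (X, Y, Z) is exactly the quantity the pan/tilt controller needs to keep the target centered.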

The Hand Region Acquisition System for Gesture-based Interface (제스처 기반 인터페이스를 위한 손영역 획득 시스템)

  • 양선옥;고일주;최형일
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.4 / pp.43-52 / 1998
  • We extract a hand region using color information, an important feature by which human vision distinguishes objects. Because pixel values in images change with luminance and the lighting source, it is difficult to extract a hand region exactly without prior knowledge. We generate a hand skin model at the learning stage and extract the hand region from images using this model. We also use a Kalman filter to accommodate changes of pixel values in the hand skin model; the Kalman filter additionally restricts the search area for extracting the hand region in the next frame. The validity of the proposed method is proved by implementing the hand-region acquisition module.

  • PDF
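
The Kalman-filtered tracking and search-window restriction described above can be sketched with a scalar filter per coordinate of the hand-region center; the process/measurement noise values and window size are assumptions, not the paper's parameters:

```python
class ScalarKalman:
    """1D random-walk Kalman filter for one coordinate of the hand-region
    center (q = process noise, r = measurement noise; values assumed)."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=1.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, z):
        # Predict: random-walk model, so uncertainty simply grows by q
        self.p += self.q
        # Update: blend prediction and measurement z by the Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x

def predicted_search_window(center_estimate, half_width=40):
    """Restrict the next frame's hand search to a window around the
    filtered center, as the abstract describes."""
    return (center_estimate - half_width, center_estimate + half_width)
```

Running one such filter per coordinate smooths the noisy per-frame detections, and the resulting window keeps the color search local instead of scanning the whole frame.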