• Title/Summary/Keyword: mobile vision system

Search results: 292

A Study on Object Tracking for Autonomous Mobile Robot using Vision Information (비젼 정보를 이용한 이동 자율로봇의 물체 추적에 관한 연구)

  • Kang, Jin-Gu; Lee, Jang-Myung
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.235-242 / 2008
  • An autonomous mobile robot is a very useful system for achieving various tasks in dangerous environments, because it outperforms a fixed-base manipulator in both operational workspace size and efficiency. A method is proposed for estimating the position of an object in the Cartesian coordinate system, based upon the geometrical relationship between the real object and its image captured by a 2-DOF active camera mounted on the mobile robot. Using this position estimate, an optimal path for the autonomous mobile robot from its current position to the estimated object position is determined with homogeneous matrices. Finally, the joint parameters corresponding to the desired displacement are calculated so that the mobile robot can be controlled to capture the object. The effectiveness of the proposed method is demonstrated by simulation and real experiments on the autonomous mobile robot.

  • PDF
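As a quick illustration of the homogeneous-matrix composition the abstract above describes, the sketch below chains hypothetical robot and camera poses (not values from the paper) to express an object seen in the camera frame in the world frame:

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: robot base at (2, 1, 0) in the world frame, camera
# mounted 0.5 m above the base, object seen 1 m ahead of the camera.
T_world_robot = homogeneous(np.eye(3), [2.0, 1.0, 0.0])
T_robot_cam = homogeneous(np.eye(3), [0.0, 0.0, 0.5])
p_cam = np.array([0.0, 0.0, 1.0, 1.0])  # object in camera frame (homogeneous coords)

# Chaining the transforms maps the camera-frame point into the world frame
p_world = T_world_robot @ T_robot_cam @ p_cam
print(p_world[:3])  # -> [2.  1.  1.5]
```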

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab; Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4103-4117 / 2014
  • Image processing and computer vision algorithms are attracting growing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines must be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. That algorithm solves the problem in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
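The EKF machinery that the paper parallelizes can be sketched generically. The scalar constant-velocity example below is an assumption for illustration, not the multi-camera measurement model of [1]:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an Extended Kalman Filter.
    f, h: process and measurement functions; F, H: their Jacobians at x."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical 2-D state (position, velocity), position-only measurement
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = ekf_step(x, P, np.array([1.0]),
                lambda s: F @ s, F, lambda s: H @ s, H, Q, R)
# After one measurement of 1.0, the position estimate moves toward 1.0
```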

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok; Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance by a mobile robot with an active camera, which intelligently searches for the goal location in unknown environments using command fusion based on situational commands from a vision sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. We describe experimental results obtained with the proposed method that demonstrate successful navigation using real vision data.
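The command-fusion idea can be sketched as each behavior proposing a steering command that is blended by its rule activation, rather than fusing raw sensor data. The activation weights below are hypothetical, and the paper's actual fuzzy rule base is not reproduced here:

```python
def fuse_commands(behaviors):
    """Activation-weighted average of steering commands.
    behaviors: list of (activation_weight, steering_command_deg) pairs."""
    total = sum(w for w, _ in behaviors)
    return sum(w * cmd for w, cmd in behaviors) / total

# Hypothetical activations: goal-approach wants 10 deg right, while a
# strongly activated obstacle-avoidance behavior wants 30 deg left.
steering = fuse_commands([(0.3, +10.0), (0.7, -30.0)])
print(steering)  # -> -18.0 (the stronger behavior dominates the blend)
```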

Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology / v.14 no.4 / pp.278-285 / 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of operation for movement or surveillance purposes. For identification against the surrounding environment, three landmarks fitted with infrared LEDs were installed at the charging station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks with its vision camera. To eliminate the effects of outside light interference during this process, a difference image is generated by comparing two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens introduced significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In the experiments, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
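The LED on/off difference-image step can be sketched with NumPy; the synthetic frames below stand in for the real camera images, and the threshold is an assumption:

```python
import numpy as np

def landmark_mask(img_on, img_off, threshold=50):
    """Subtract the LED-off frame from the LED-on frame so constant
    ambient light cancels, then threshold to isolate the landmarks."""
    diff = img_on.astype(np.int16) - img_off.astype(np.int16)
    return (diff > threshold).astype(np.uint8)

# Synthetic 8x8 frames: uniform ambient level of 100, three bright LED pixels
ambient = np.full((8, 8), 100, dtype=np.uint8)
img_off = ambient.copy()
img_on = ambient.copy()
for r, c in [(1, 1), (1, 6), (6, 3)]:  # hypothetical landmark positions
    img_on[r, c] = 250

mask = landmark_mask(img_on, img_off)
print(mask.sum())  # -> 3: only the LED pixels survive the subtraction
```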

Posture Stabilization Control for Mobile Robot using Marker Recognition and Hybrid Visual Servoing (마커인식과 혼합 비주얼 서보잉 기법을 통한 이동로봇의 자세 안정화 제어)

  • Lee, Sung-Goo; Kwon, Ji-Wook; Hong, Suk-Kyo; Chwa, Dong-Kyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.8 / pp.1577-1585 / 2011
  • This paper proposes a posture stabilization control algorithm for a wheeled mobile robot using a hybrid visual servo control method that combines position-based and image-based visual servoing (PBVS and IBVS). To overcome the chattering phenomena observed in previous research that used a simple threshold-based switching function, the proposed hybrid visual servo control law introduces a fusion function based on a blending function, eliminating the chattering problem and the rapid motion of the mobile robot. In addition, unlike previous visual servo control laws that rely on linear control methods, we account for the nonlinearity of the wheeled mobile robot to improve performance. The proposed posture stabilization control law using hybrid visual servoing is verified by theoretical analysis and by simulation and experimental results.
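The blending-function idea can be sketched as a smooth interpolation between the IBVS and PBVS velocity commands instead of a hard switch. The sigmoid shape and all gains below are hypothetical, not the paper's actual fusion function:

```python
import math

def blend(error_norm, v_ibvs, v_pbvs, k=2.0, e0=1.0):
    """Smoothly mix two velocity commands instead of hard-switching.
    sigma -> 1 for large error (favor PBVS), -> 0 near the goal (favor IBVS)."""
    sigma = 1.0 / (1.0 + math.exp(-k * (error_norm - e0)))
    return (1.0 - sigma) * v_ibvs + sigma * v_pbvs

# Far from the goal the PBVS command dominates; near it, the IBVS command.
far = blend(5.0, v_ibvs=0.1, v_pbvs=0.8)
near = blend(0.1, v_ibvs=0.1, v_pbvs=0.8)
```

Because the mix varies continuously with the error norm, the commanded velocity has no discontinuity at the switching boundary, which is what removes the chattering a threshold switch produces.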

Development of a magnetic caterpillar based robot for autonomous scanning in the weldment (용접부 자동 탐상을 위한 이동 로봇의 개발)

  • 장준우; 정경민; 김호철; 이정기
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.713-716 / 2000
  • In this study, we present a mobile robot for ultrasonic scanning of weldments. A magnetic caterpillar mechanism is adopted so that the robot can travel on inclined surfaces and vertical walls. A motion control board and motor driver were developed to control four DC servo motors. A virtual device driver was also developed for communication between the control board and a host PC through dual-port RAM. To provide the mobile robot with stable and accurate movement, a PID control algorithm is applied to its motion control. A vision system for detecting the weld line was also developed, using a laser slit beam as the light source. In the experiments, movement of the mobile robot was tested on an inclined surface and a vertical wall.

  • PDF
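The PID loop applied to the servo motors above can be sketched in discrete form; the gains and the crude first-order motor model below are assumptions for illustration:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple first-order motor model toward a target speed of 1.0
pid = PID(kp=1.0, ki=0.5, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(2000):
    u = pid.update(setpoint=1.0, measured=speed)
    speed += (u - speed) * 0.01  # hypothetical motor dynamics
# speed has settled close to the 1.0 setpoint
```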

Face-Mask Detection with Microprocessor (마이크로프로세서 기반의 얼굴 마스크 감지)

  • Lim, Hyunkeun; Ryoo, Sooyoung; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.490-493 / 2021
  • This paper proposes an embedded system that performs face-mask detection and face recognition on a microprocessor instead of the popular NVIDIA Jetson development board. We use a class of efficient models called MobileNets, designed for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The device is a Maix development board with a CNN hardware acceleration function, and the training model is a MobileNet_V2-based SSD (Single Shot Multibox Detector) optimized for mobile devices. The training set consists of 7553 face images from Kaggle. On the test dataset, the AUC (Area Under the Curve) value is as high as 0.98.
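The parameter savings from depthwise separable convolutions, which is what lets a MobileNet fit on such a board, can be checked with a small computation (the 3x3, 128-channel layer below is a typical size, chosen for illustration):

```python
def standard_conv_params(k, c_in, c_out):
    """Parameter count of a k x k standard convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """k x k depthwise convolution followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# A typical MobileNet-sized layer: 3x3 kernel, 128 -> 128 channels
std = standard_conv_params(3, 128, 128)        # 147456 parameters
sep = depthwise_separable_params(3, 128, 128)  # 17536 parameters
print(std / sep)  # roughly 8.4x fewer parameters for the factorized layer
```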

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok; Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.11 no.4 / pp.304-313 / 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and direction for intelligent performance in an unknown environment, and mobile robots may navigate by means of a number of monitoring systems such as sonar sensing or visual sensing. Note that in conventional fusion schemes, the measurement depends on the current data sets only; therefore, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this research, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based Sensor Fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in unstructured as well as structured environments, and its performance is verified by computer simulation and experiment.
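The Unscented Transformation at the heart of the scheme can be sketched for a scalar nonlinearity (a generic textbook example with a conventional kappa value, not the paper's landmark model):

```python
import numpy as np

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian through a nonlinear f via sigma points."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma_pts = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa), 0.5 / (n + kappa)])
    y = f(sigma_pts)
    y_mean = np.dot(weights, y)
    y_var = np.dot(weights, (y - y_mean) ** 2)
    return y_mean, y_var

# Push N(0, 1) through f(x) = x^2: the UT recovers the true moments
# E[x^2] = 1 and Var[x^2] = 2 exactly for this quadratic case.
m, v = unscented_transform_1d(0.0, 1.0, lambda x: x ** 2)
print(m, v)  # -> 1.0 2.0
```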

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increasing demand for mobile augmented reality requires the development of efficient interaction technologies between augmented virtual objects and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand is used as the interface of the marker-less mobile augmented reality system. To implement the marker-less augmented system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region to play the role of the marker and augments the object in real time using the camera attached to the mobile device. The optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the Rotating Calipers algorithm. The extracted optimal rectangular region takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded, which occurred in earlier hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
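The YCbCr skin-segmentation step can be sketched with NumPy. The Cb/Cr threshold ranges below are commonly cited defaults and are an assumption, not the paper's tuned values:

```python
import numpy as np

def skin_mask(rgb):
    """Convert an RGB image (uint8, HxWx3) to YCbCr chroma channels and
    threshold Cb/Cr against a typical skin-tone range."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Two test pixels: a typical skin tone, and a pure blue pixel
img = np.array([[[220, 170, 140], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
print(mask)  # the skin pixel is True, the blue pixel is False
```

On a real frame the resulting boolean mask would feed the next stage, fitting the minimum-area rectangle (Rotating Calipers) around the detected hand region.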