• Title/Summary/Keyword: Mobile robot control


Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, if the bias of the MEMS IMU is not properly estimated by the filter, this method can cause position error whenever the camera image is not obtained. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the positions of the feature points.
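The fusion idea in this abstract — an EKF-style filter that measures camera velocity from optical flow in addition to feature-point position — can be sketched in one dimension. All numbers below (noise covariances, the IMU bias, the trajectory) are illustrative, not from the paper:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])         # acceleration input matrix
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
H = np.eye(2)                           # measure position AND velocity
R = np.diag([1e-2, 1e-3])               # measurement noise (assumed)

x = np.zeros(2)
P = np.eye(2)
true_accel, imu_bias = 0.2, 0.05        # IMU reports accel with a bias

for k in range(200):
    # predict with the biased IMU acceleration
    x = F @ x + B * (true_accel + imu_bias)
    P = F @ P @ F.T + Q
    # vision update: position of features plus velocity from optical flow
    # (here the true state of a constant-acceleration trajectory)
    t = (k + 1) * dt
    z = np.array([0.5 * true_accel * t**2, true_accel * t])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

# with the velocity measurement, bias-induced drift stays bounded
err = abs(x[0] - 0.5 * true_accel * (200 * dt)**2)
```

Dropping the velocity row of H gives the position-only filter the paper compares against; with both rows, the error from the uncorrected bias remains small throughout the run.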

A Study on Measurement and Control of the Position and Pose of a Mobile Robot using a Kalman Filter and a Lane Detection Filter in Monocular Vision (단일 비전에서 칼만 필터와 차선 검출 필터를 이용한 모빌 로봇 주행 위치.자세 계측 제어에 관한 연구)

  • 이용구;송현승;노도환
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.81-81
    • /
    • 2000
  • We use a camera to apply the human vision system to measurement, which requires knowledge of the camera parameters. The camera parameters consist of internal and external parameters. By fixing the scale factor and focal length among the internal parameters, we can acquire the external parameters, which we then use in an automatically driven vehicle with a camera. In this respect, the external parameters are the important ones. To get the lane coordinates in the image, we propose a lane detection filter. After detecting the lanes, we find the vanishing point, and from it the y-axis rotation component (${\beta}$). Using this parameter, we can find the x-axis translation component (Xo). Before rotating the stepping motor to drive the y-axis rotation component (${\beta}$) to zero, we estimate the image coordinates of the lane at time (t+1). Using this point, we apply the system to a Kalman filter, and then calculate new parameters which minimize the error.
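The vanishing-point step described above can be sketched with homogeneous line intersection; the image points, focal length and principal point below are invented for illustration:

```python
import numpy as np

def line(p1, p2):
    # homogeneous line through two image points
    return np.cross([*p1, 1.0], [*p2, 1.0])

# two detected lane lines, each given by two pixel coordinates (made up)
left = line((100, 480), (300, 240))
right = line((540, 480), (340, 240))

vp = np.cross(left, right)          # lines intersect at the vanishing point
vp = vp[:2] / vp[2]                 # back to pixel coordinates

fx, cx = 500.0, 320.0               # assumed focal length / principal point
beta = np.arctan2(vp[0] - cx, fx)   # y-axis rotation toward the lane
```

Here the two lanes are symmetric about the principal point, so the vanishing point lands at the image center column and the estimated yaw ${\beta}$ is zero.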


Vision Inspection for Flexible Lens Assembly of Camera Phone (카메라 폰 렌즈 조립을 위한 비전 검사 방법들에 대한 연구)

  • Lee I.S.;Kim J.O.;Kang H.S.;Cho Y.J.;Lee G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.631-632
    • /
    • 2006
  • The assembly of camera lens modules for mobile phones has not been automated so far. They are still assembled manually because of the high precision of all the parts and because the lenses are hard for a vision camera to recognize. In addition, the very short life cycle of camera phone lenses requires flexible and intelligent automation. This study proposes a fast and accurate part identification system that distributes cameras across a 4-degree-of-freedom assembly robot system. Single or multiple cameras can be installed according to the part's image capture and processing mode. The system has an agile structure which enables adaptation with minimal job changes. The framework is proposed, and experimental results are shown to prove its effectiveness.


Development of an Autonomous Worker-Following Transport Vehicle ( II ) - Supplementation of driving control system and field experiment - (농작업자 자동 추종 운반차 개발(II) - 주행제어시스템 보완 및 포장성능시험 -)

  • 권기영;정성림;강창호;손재룡;한길수;정석현;장익주
    • Journal of Biosystems Engineering
    • /
    • v.27 no.5
    • /
    • pp.417-424
    • /
    • 2002
  • This study was conducted to develop a vehicle that leads or follows a worker at a certain distance to assist with laborious transport work in greenhouses. A prototype vehicle was tested under practical field conditions using the developed control algorithm. The results are summarized as follows: 1. A sensing device consisting of infrared sensors was attached to the front of the vehicle, and a turning-following algorithm was developed so that the vehicle turns as it follows a worker. 2. The measured average power consumptions were 110 W and 89 W, equivalent to 5.2-6.4 hrs of battery duration, at low speed with and without the maximum payload, respectively. 3. Travel tests showed that the deviations from the center of the row spacing were $\pm$100 mm along the ridge and $\pm$85 mm along the hydroponic bed in the greenhouse. Therefore, the worker-following transport vehicle is able to travel along a row in the greenhouse without collision.

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is large and slow in calculating depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
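The laser-to-image fusion step can be sketched under an assumed equidistant fisheye model (image radius r = f·θ); the camera height and focal length below are made-up values, not the paper's calibration:

```python
import math

def laser_to_fisheye(range_m, bearing_rad, cam_height=1.0, f=300.0):
    # obstacle point from the 2-D laser, in the frame of the
    # downward-facing camera (assumed geometry)
    x = range_m * math.cos(bearing_rad)
    y = range_m * math.sin(bearing_rad)
    z = cam_height                              # camera-to-laser-plane offset
    theta = math.atan2(math.hypot(x, y), z)     # angle off the optical axis
    r = f * theta                               # equidistant projection radius
    phi = math.atan2(y, x)
    return (r * math.cos(phi), r * math.sin(phi))   # pixel offset from center

# a laser return 1 m straight ahead maps to a point on the image's +u axis
u, v = laser_to_fisheye(1.0, 0.0)
```

Pairing each projected laser point with the obstacle outline at the same image angle yields the fusion points the abstract refers to.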

A Path & Velocity Profile Planning Based on A* Algorithm for Dynamic Environment (동적 환경을 위한 A* 알고리즘 기반의 경로 및 속도 프로파일 설계)

  • Kwon, Min-Hyeok;Kang, Yeon-Sik;Kim, Chang-Hwan;Park, Gwi-Tae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.5
    • /
    • pp.405-411
    • /
    • 2011
  • This paper presents a hierarchical trajectory planning method which keeps the planned path collision-free in complex and dynamic environments. The PV (Path & Velocity profile) planning method minimizes sharp orientation changes and the waiting time needed to avoid collision with a moving obstacle by taking a detour path. The path generation problem is solved in three steps. In the first step, a smooth global path is generated using the $A^*$ algorithm. The second step sets up the velocity profile as an optimization problem considering the maximum velocity and acceleration. In the third step, the velocity profile for obtaining the shortest path is optimized using fuzzy and genetic algorithms. Realistic simulations demonstrate the validity and effectiveness of the proposed method.
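The first step, grid-based $A^*$, can be sketched as follows; the occupancy grid, start and goal are invented for illustration:

```python
import heapq
import itertools

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0, next(tie), start, None)]
    came, best_g = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came:              # already expanded via a cheaper route
            continue
        came[cur] = parent
        if cur == goal:              # walk parent links back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None

grid = [[0, 0, 0, 0],    # 0 = free cell, 1 = obstacle
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # must detour around the blocked row
```

The paper then smooths this path and layers the velocity profile on top; the detour of 8 moves here is the shortest route around the obstacle row.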

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate, but too expensive. These sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are far more useful for obstacle detection, map building and localization. Stereo vision is a very common way of getting the depth information of a 3D environment; however, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used to reduce the effects of ambient light while acquiring a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider area of depth measurement, higher spatial resolution and visibility perception.
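The range estimation behind such a structured-light setup is ordinary triangulation: the projector-to-camera baseline plays the role of a stereo baseline, so depth follows Z = f·b/d. A minimal sketch with assumed numbers:

```python
def depth_from_pattern(f_px, baseline_m, pixel_disparity):
    # f_px: focal length in pixels
    # baseline_m: camera-to-projector distance (stands in for a stereo baseline)
    # pixel_disparity: shift of the projected pattern from its reference position
    return f_px * baseline_m / pixel_disparity

# assumed calibration: f = 500 px, baseline = 0.1 m, observed shift = 25 px
z = depth_from_pattern(500.0, 0.1, 25.0)   # -> 2.0 m
```

Because the pattern is known, the correspondence problem of passive stereo disappears: the disparity is read directly from where the modeled pattern lands in the image.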

The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.2
    • /
    • pp.164-171
    • /
    • 2010
  • We have studied the port automation system, which is demanded by the steep increase in the cost and complexity of freight processing. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera has optical distortion in nature and is sensitive to external light, the weather, and shadows, but it is very cheap and flexible for building a port automation system, so we applied a CCD camera to the AGV for detecting and tracking the lane. In order to make the error signal stable and exact, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented in software and are very easy to use in various autonomous systems. Because the computational load is light, the AGV utilizes the maximal performance of the CCD camera and the CPU can take on multiple tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and exact lane-tracking performance.
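The VPSA concept can be sketched as sampling one image row at a few fixed "virtual sensor" positions and taking the signed offset of the active cells as the steering error; the image width, sensor count and threshold below are assumptions, not the paper's values:

```python
def vpsa_error(row, n_sensors=8, threshold=128):
    # sample the row at n_sensors evenly spaced positions, like a
    # physical photo-sensor line tracker laid across the image
    step = len(row) // n_sensors
    active = [i for i in range(n_sensors)
              if row[i * step + step // 2] > threshold]
    if not active:
        return None                              # lane lost
    center = (n_sensors - 1) / 2
    return sum(active) / len(active) - center    # signed cells off-center

row = [0] * 640                  # one grayscale image row (synthetic)
row[320:400] = [255] * 80        # bright lane slightly right of center
err = vpsa_error(row)            # positive error -> steer right
```

Because each "sensor" is a single thresholded pixel read, the per-frame cost is a handful of comparisons, which is the light computational load the abstract claims.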

Multiple Path-planning of Unmanned Autonomous Forklift using Modified Genetic Algorithm and Fuzzy Inference system (수정된 유전자 알고리즘과 퍼지 추론 시스템을 이용한 무인 자율주행 이송장치의 다중경로계획)

  • Kim, Jung-Min;Heo, Jung-Min;Kim, Sung-Shin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.8
    • /
    • pp.1483-1490
    • /
    • 2009
  • This paper presents multiple path-planning for an unmanned autonomous forklift using a modified genetic algorithm and a fuzzy inference system. Existing multiple path-planning approaches include a task-level feedback method and a method in which the path is dynamically replanned in real time by an optimal algorithm while the autonomous vehicles are moving. However, such methods cause malfunctions and inefficiency in terms of time and energy, and path-planning should be dynamically replanned in real time. To solve these problems, we propose multiple path-planning using a modified genetic algorithm and a fuzzy inference system and demonstrate its performance with autonomous vehicles. For the experiments, we designed and built two autonomous mobile vehicles equipped with the same driving control part used in an actual autonomous forklift, and tested the proposed multiple path-planning algorithm. Experiments with the actual autonomous mobile vehicles verified that fast, optimized path-planning and efficient collision avoidance are possible.
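The genetic-algorithm step can be sketched on a toy problem — evolving intermediate waypoint heights toward the shortest smooth path; the encoding, operators and parameters here are invented and far simpler than the paper's modified GA:

```python
import random

random.seed(0)

def length(ys):
    # path through (0,0), (1,ys[0]), ..., (len(ys), ys[-1]), (len(ys)+1, 0)
    pts = list(enumerate([0] + ys + [0]))
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def evolve(pop_size=30, genes=4, gens=60):
    pop = [[random.uniform(-3, 3) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=length)                      # fitness = path length
        survivors = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # gaussian mutation
                child[random.randrange(genes)] += random.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=length)

best = evolve()   # should approach the straight path of length 5
```

The straight path (all intermediate heights at 0) has length exactly 5, so the evolved best converging toward that bound shows the selection pressure working.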

A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller (상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구)

  • Kim, Ki-Chang;Kang, Min-Sung;Ji, Chang-Uk;Ha, Ji-Woo;Sun, Dong-Ik;Xue, Gang;Shin, Kyoo-Sik
    • Journal of Engineering Education Research
    • /
    • v.19 no.1
    • /
    • pp.31-36
    • /
    • 2016
  • These days, ICT-related products are pouring out due to the development of mobile technology and the spread of smart phones. Among them, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. The research field of EMG sensors is divided into two parts: the medical area and the control device area. This study corresponds to the latter, a method of transmitting a user's manipulation intention to robots, games or computers through the measurement of EMG. We used the commercial device MYO, developed by Thalmic Labs in Canada, and matched the EMG of arm muscles to a gesture controller. In the experiments, various arm motions for controlling devices were first defined. Finally, we identified several distinguishable motions through analysis of the EMG signals and substituted these motions for a joystick.
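The final step — mapping EMG patterns to gestures — can be sketched as nearest-template classification on per-channel RMS features; the channel count, templates and signal windows below are synthetic, not MYO data:

```python
import math

def rms(channel):
    # root-mean-square amplitude of one EMG channel window
    return math.sqrt(sum(s * s for s in channel) / len(channel))

def classify(window, templates):
    # reduce the multichannel window to RMS features, then pick the
    # gesture whose stored template is nearest (squared distance)
    feat = [rms(ch) for ch in window]
    return min(templates,
               key=lambda g: sum((f - t) ** 2
                                 for f, t in zip(feat, templates[g])))

# hypothetical per-channel RMS templates for two gestures
templates = {"fist": [0.8, 0.7, 0.1, 0.1],
             "wave_out": [0.1, 0.2, 0.9, 0.8]}

# one synthetic 4-channel window: strong activity on the first two channels
window = [[0.75, -0.8], [0.6, 0.7], [0.05, -0.1], [0.12, 0.1]]
gesture = classify(window, templates)
```

Each recognized gesture can then be mapped to one joystick command, which is the substitution the abstract describes.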