• Title/Abstract/Keywords: Mobile robot control

Search results: 1,462 (processing time: 0.023 sec)

퍼지 제어기를 이용한 실시간 이동 물체 추적에 관한 연구 (Study on the Real-Time Moving Object Tracking using Fuzzy Controller)

  • 김관형;강성인;이재현 / 한국정보통신학회논문지 / Vol. 10, No. 1 / pp.191-196 / 2006
  • This paper proposes a method for tracking a moving object using a vision system. To track a moving object continuously, the image of the object must be kept near the center of the frame. A fuzzy controller was therefore implemented to drive a pan/tilt camera module so that the moving object's image stays near the image center. For future application of the system to a mobile robot, an image processing board for the vision system was designed and fabricated, and the system was built on a StrongArm board so that, after identifying the color and shape of the target object, the camera module tracks it with the fuzzy controller. Experiments confirmed that the proposed fuzzy controller is applicable to a real-time moving-object tracking system.
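The pan/tilt tracking loop described in this abstract can be sketched as a minimal fuzzy controller that maps the pixel error between the object centroid and the image center to a pan-rate command. The triangular membership functions, rule singletons, and gains below are illustrative assumptions, not the paper's actual controller design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan_rate(err_px, half_width=160.0):
    """Pan-rate command (deg/s) from the object's horizontal offset in pixels."""
    e = max(-1.0, min(1.0, err_px / half_width))   # normalize error to [-1, 1]
    # Rule strengths for Negative / Zero / Positive error fuzzy sets.
    mu_n = tri(e, -2.0, -1.0, 0.0)
    mu_z = tri(e, -1.0, 0.0, 1.0)
    mu_p = tri(e, 0.0, 1.0, 2.0)
    # Singleton rule outputs with centroid defuzzification.
    w = mu_n + mu_z + mu_p
    return (mu_n * -30.0 + mu_z * 0.0 + mu_p * 30.0) / w
```

A centered object yields a zero command, and the command saturates smoothly as the object approaches the image border.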

모폴로지 기반의 차영상 분석기법을 이용한 균열검출의 인식 (The Recognition of Crack Detection Using Difference Image Analysis Method based on Morphology)

  • 변태모;김장형;김형수 / 한국정보통신학회논문지 / Vol. 10, No. 1 / pp.197-205 / 2006

듀얼 확장 칼만 필터를 이용한 쿼드로터 비행로봇 위치 정밀도 향상 알고리즘 개발 (Precise Positioning Algorithm Development for Quadrotor Flying Robots Using Dual Extended Kalman Filter)

  • 승지훈;이덕진;류지형;정길도 / 제어로봇시스템학회논문지 / Vol. 19, No. 2 / pp.158-163 / 2013
  • The fusion of GPS (Global Positioning System) and DR (Dead Reckoning) is widely used for position and attitude estimation of vehicles such as mobile robots, aerial vehicles and marine vehicles. Among the many types of aerial vehicles, the quad-rotor receives particular attention, and the accuracy of its position information is becoming more important. To estimate the position information precisely, we propose a fusion method for GPS and gyroscope sensors using the DEKF (Dual Extended Kalman Filter). The DEKF has the advantage of simultaneously estimating the state and a parameter of the dynamical system, and it can be used even when the state is not directly available. To analyze the performance of the DEKF, a computer simulation estimating the position, velocity and angle of a quad-rotor on a circular trajectory was carried out. The simulation results show that the proposed DEKF-based fusion method gives better performance than the EKF in quad-rotor navigation.
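A dual estimator runs two coupled loops, one over the state and one over a model parameter. The scalar toy problem below, a heading filter that simultaneously learns a constant gyro bias from the same innovation, is an illustrative reduction with noise-free measurements and hand-picked gains, not the paper's quad-rotor model.

```python
# Toy dual estimation: a state filter tracks heading from a biased gyro,
# while a parameter filter estimates the unknown constant gyro bias
# from the same innovation, mimicking the DEKF's two coupled loops.
dt, omega, bias_true = 0.1, 0.2, 0.1   # step, true turn rate, true gyro bias
K, Kb = 0.8, 0.5                        # state and parameter gains (assumed)

h_true, h_est, bias_est = 0.0, 0.0, 0.0
for _ in range(500):
    h_true += omega * dt                     # true heading evolves
    gyro = omega + bias_true                 # biased rate measurement
    h_pred = h_est + (gyro - bias_est) * dt  # state prediction
    innov = h_true - h_pred                  # "GPS heading" residual
    h_est = h_pred + K * innov               # state update
    bias_est -= Kb * innov                   # parameter update

print(round(bias_est, 4))  # converges to the true bias 0.1
```

A persistent negative innovation signals an under-estimated bias, so feeding the residual to both loops lets the parameter converge while the state stays locked.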

인지 무선 네트워크에서의 베이지안 추론 기반 다중로봇 위치 추정 기법 연구 (Localization Method for Multiple Robots Based on Bayesian Inference in Cognitive Radio Networks)

  • 김동구;박준구 / 제어로봇시스템학회논문지 / Vol. 22, No. 2 / pp.104-109 / 2016
  • In this paper, a localization method for multiple robots based on Bayesian inference is proposed for the case where multiple robots adopting multi-RAT (Radio Access Technology) communications exist in cognitive radio networks. The robots are separately defined as primary and secondary users, as in conventional mobile communication systems, and a heterogeneous spectrum environment is considered. To improve localization performance, a realistic multiple-primary-user distribution is described with a probabilistic graphical model, and a Gibbs sampler strategy based on Bayesian inference is introduced. In addition, a secondary-user selection that minimizes the GDOP (Geometric Dilution of Precision) is proposed to overcome the limits of localization accuracy with Gibbs sampling alone. Simulation results show that the proposed GDOP-based localization method enhances the accuracy of localization for multiple robots, and that localization performance improves significantly with an increasing number of observation samples when the GDOP is considered.
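GDOP is computed from the geometry matrix of unit vectors toward the selected reference nodes. The 2-D range-only formulation below is a sketch; the paper's exact measurement model may differ.

```python
import math

def gdop(target, anchors):
    """2-D geometric dilution of precision for range measurements.

    target: (x, y); anchors: list of (x, y) reference positions.
    GDOP = sqrt(trace((H^T H)^-1)) with H rows = unit vectors to anchors.
    """
    rows = []
    for ax, ay in anchors:
        r = math.hypot(ax - target[0], ay - target[1])
        rows.append(((ax - target[0]) / r, (ay - target[1]) / r))
    # Accumulate H^T H for the 2x2 case, then invert it analytically.
    a = sum(u * u for u, v in rows)
    b = sum(u * v for u, v in rows)
    c = sum(v * v for u, v in rows)
    det = a * c - b * b
    return math.sqrt((a + c) / det)   # trace of the 2x2 inverse

# Well-spread anchors give a much lower GDOP than clustered ones.
spread = gdop((0, 0), [(10, 0), (-5, 8.66), (-5, -8.66)])
clustered = gdop((0, 0), [(10, 0), (9.85, 1.74), (9.4, 3.42)])
```

Selecting the secondary users that minimize this value is exactly the geometric criterion the abstract describes.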

AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발 (AdaBoost-based Real-Time Face Detection & Tracking System)

  • 김정현;김진영;홍영진;권장우;강동중;노태정 / 제어로봇시스템학회논문지 / Vol. 13, No. 11 / pp.1074-1081 / 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. AdaBoost selects important features, called weak classifiers, from many candidate image features by tuning the weight of each feature during learning. Although it extracts objects very well, its computing time is high because multi-scale windows must be scanned over the image region, so directly applying it is difficult for real-time tasks such as multi-task OS, robot, and mobile environments. CAMShift, an improvement of the mean-shift algorithm for video streaming, tracks the object of interest at high speed based on the hue of the target region, but its detection efficiency suffers under dynamic illumination. We propose a combination of AdaBoost and CAMShift that improves computing speed while keeping good face detection performance. The method was validated on real image sequences containing one or more faces.
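The AdaBoost step described above (reweighting samples, picking the best weak classifier, weighting it by its error) can be sketched on 1-D threshold stumps; this toy stands in for the Haar-feature weak classifiers used in face detection and is not the paper's implementation.

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost over 1-D threshold stumps; labels ys in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n                       # sample weights
    model = []                              # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for t in set(xs):                   # candidate thresholds
            for pol in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x > t else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)      # weak-classifier weight
        model.append((t, pol, alpha))
        # Increase weights on misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * y * (pol if x > t else -pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(a * (p if x > t else -p) for t, p, a in model)
    return 1 if score > 0 else -1

xs = [1.0, 2.5, 3.0, 6.0, 7.5, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
```

The cascade in the real detector applies many such weighted stumps over Haar features at multiple window scales, which is the source of the computing cost the abstract mentions.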

적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발 (3D Range Measurement using Infrared Light and a Camera)

  • 김인철;이수용 / 제어로봇시스템학회논문지 / Vol. 14, No. 10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Sensing the environment and obstacles is the key issue for mobile robot localization and navigation. Laser and infrared scanners cover 180° and are accurate, but they are too expensive, and because they use rotating light beams their range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building and localization. Stereo vision is a common way of obtaining depth information about a 3D environment, but it requires that correspondences be clearly identified and it depends heavily on the lighting of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Identifying the cells of the pattern is the key issue in the proposed method; several methods of correctly identifying the cells are discussed and verified with experiments.
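Once a pattern cell is identified, range recovery in such a structured-light setup reduces to triangulation between the projector and the camera. The pinhole sketch below uses assumed baseline and focal-length values and omits the cell-identification step the paper focuses on.

```python
def depth_from_pattern(u_observed, u_reference, focal_px, baseline_m):
    """Triangulate depth from the displacement of a projected pattern cell.

    u_observed:  image x-coordinate (px) where the cell is actually seen
    u_reference: x-coordinate (px) the cell would have at infinite range
    focal_px:    camera focal length in pixels
    baseline_m:  projector-to-camera baseline in meters
    """
    disparity = u_observed - u_reference
    if disparity <= 0:
        raise ValueError("cell not displaced: no depth information")
    return focal_px * baseline_m / disparity

# e.g. f = 600 px, 10 cm baseline, 20 px disparity -> 3 m range
z = depth_from_pattern(u_observed=340, u_reference=320,
                       focal_px=600.0, baseline_m=0.1)
```

Misidentifying which cell produced a given image blob corrupts the disparity, which is why correct cell identification is the central problem of the method.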

외곽선 영상과 Support Vector Machine 기반의 문고리 인식을 이용한 문 탐지 (Door Detection with Door Handle Recognition based on Contour Image and Support Vector Machine)

  • 이동욱;박중태;송재복 / 제어로봇시스템학회논문지 / Vol. 16, No. 12 / pp.1226-1232 / 2010
  • A door can serve as a feature for place classification and localization during navigation of a mobile robot in indoor environments. This paper proposes a door detection method based on the recognition of various door handles using the generalized Hough transform (GHT) and a support vector machine (SVM). The contour and the color histogram of a door handle extracted from the database are used in the GHT and the SVM, respectively. The door recognition scheme consists of four steps. The first step determines region-of-interest (ROI) images defined by the color information and the environment around the door handle for stable recognition. In the second step, the door handle is recognized by the GHT in the ROI image, and image patches are extracted at the position of the recognized handle. In the third step, the SVM classifies whether each extracted patch is a door-handle patch or not. Finally, the door position is determined probabilistically from the recognized door handle. Experimental results show that the proposed method recognizes various door handles and detects doors in a robust manner.
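The GHT step can be sketched as R-table voting: each contour point of the template stores its offset to a reference point, and every contour point in the scene casts votes for where that reference point could be. Gradient-orientation indexing, which the full GHT uses to restrict the offsets each point votes with, is omitted here for brevity.

```python
from collections import Counter

def build_r_table(template_pts, ref):
    """Offsets from each template contour point to the reference point."""
    return [(ref[0] - x, ref[1] - y) for x, y in template_pts]

def ght_vote(scene_pts, r_table):
    """Each scene contour point votes for every candidate reference location."""
    acc = Counter()
    for x, y in scene_pts:
        for dx, dy in r_table:
            acc[(x + dx, y + dy)] += 1
    return acc.most_common(1)[0]        # ((x, y), votes) at the peak

# Template contour and the same contour shifted by (5, 7) in the scene.
template = [(0, 0), (4, 0), (4, 4), (0, 4)]
r_table = build_r_table(template, ref=(2, 2))
scene = [(x + 5, y + 7) for x, y in template]
peak, votes = ght_vote(scene, r_table)
```

The accumulator peak recovers the handle's reference point, after which a patch around it can be cut out and passed to the SVM stage.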

지역 및 전역 환경에 대한 세선화 기반 위상지도의 작성 (Thinning-Based Topological Map Building for Local and Global Environments)

  • 권태범;송재복 / 제어로봇시스템학회논문지 / Vol. 12, No. 7 / pp.693-699 / 2006
  • An accurate and compact map is essential to an autonomous mobile robot system. For navigation, an occupancy grid map is efficient because the environment is represented by a probability distribution, but it is difficult to apply to large environments since the memory it requires grows with the environment size. As an alternative, a topological map can represent the environment with discrete nodes and the edges connecting them. It is usually constructed from Voronoi-like graphs, but in this paper the topological map is built incrementally from the local grid map using a thinning algorithm. This algorithm extracts only meaningful topological information in real time by using the C-obstacle concept, and it is robust to environment changes because the underlying local grid map is constructed with the Bayesian update formula. A position probability is defined to evaluate the quantitative reliability of the end nodes of this thinning-based topological map (TTM). The global TTM is constructed by merging the local TTMs, matching the reliable end nodes determined by the position probability. It is shown that the proposed TTM represents the environment accurately in real time and is readily extended to the global TTM.
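The Bayesian update formula for the underlying local grid map is commonly implemented in log-odds form, where each sensor reading adds a constant increment to a cell's log-odds of occupancy. The sensor-model probabilities below are illustrative values, not the paper's.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(l_prev, hit, p_hit=0.7, p_miss=0.4):
    """Log-odds Bayesian update of one occupancy-grid cell.

    l_prev: current log-odds of occupancy; hit: True if the range
    reading ended in this cell, False if the beam passed through it.
    """
    return l_prev + logit(p_hit if hit else p_miss)

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Repeated hits drive the cell toward occupied, misses toward free.
l = 0.0                       # prior: p = 0.5
for _ in range(5):
    l = update_cell(l, hit=True)
```

Because the update is an addition, repeated consistent evidence saturates the cell while a burst of contradictory readings can still pull it back, which is what makes the map robust to environment changes.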

천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정 (Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion)

  • 신옥식;박찬국 / 제어로봇시스템학회논문지 / Vol. 18, No. 1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, this can cause position errors due to the MEMS IMU bias when no camera image is obtained, if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the positions of the feature points.
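For a camera translating parallel to the ceiling at known depth, the velocity measurement from optical flow reduces to scaling the average feature flow by depth over focal length. This pinhole sketch uses assumed values, not the GTB's calibration, and ignores the rotational flow component.

```python
def camera_velocity_from_flow(flows_px_s, depth_m, focal_px):
    """Translational camera speed from optical flow of ceiling features.

    flows_px_s: per-feature image velocities (px/s) along one axis,
    assuming pure translation parallel to the ceiling at known depth.
    """
    mean_flow = sum(flows_px_s) / len(flows_px_s)
    return mean_flow * depth_m / focal_px

# Features on a ceiling 2 m away, f = 500 px, about 100 px/s of flow
v = camera_velocity_from_flow([99.0, 101.0, 100.0], depth_m=2.0,
                              focal_px=500.0)
```

Feeding such a velocity measurement to the filter constrains the IMU bias even between landmark fixes, which is the robustness gain the abstract reports.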

단일 비전에서 칼만 필터와 차선 검출 필터를 이용한 모빌 로봇 주행 위치.자세 계측 제어에 관한 연구 (A Study on Measurement and Control of Position and Pose of a Mobile Robot Using a Kalman Filter and a Lane Detecting Filter in Monocular Vision)

  • 이용구;송현승;노도환 / 제어로봇시스템학회 학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.81-81 / 2000
  • We use a camera to apply a human-like vision system to measurement. To do that, we need to know the camera parameters, which consist of internal and external parameters. With the scale factor and focal length fixed among the internal parameters, we can acquire the external parameters, and we want to use these parameters in an automatically driven vehicle. Among the camera parameters, the external parameters are the important ones. To obtain the lane coordinates in the image, we propose a lane detection filter. After searching the lanes, we can find the vanishing point, and from it the y-axis rotation component (β). Using these parameters, we can find the x-axis translation component (Xo). Before rotating the stepping motor to drive the y-axis rotation component (β) to zero, we estimate the image coordinates of the lane at time (t+1). Using this point, we apply a Kalman filter to the system and then calculate new parameters which minimize the error.
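The vanishing-point step can be sketched as intersecting the two detected lane lines and converting the horizontal offset of that point from the principal point into the yaw angle β. The lane-line coefficients, principal point, and focal length below are assumed values for illustration.

```python
import math

def intersect(l1, l2):
    """Intersection of two image lines given as (slope, intercept)."""
    (m1, b1), (m2, b2) = l1, l2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

def yaw_from_vanishing_point(vx, cx, focal_px):
    """Yaw (y-axis rotation, beta) from the vanishing point's x-offset."""
    return math.atan((vx - cx) / focal_px)

# Two lane borders converging at x = 360 px; principal point cx = 320 px.
vx, vy = intersect((0.5, -60.0), (-0.5, 300.0))
beta = yaw_from_vanishing_point(vx, cx=320.0, focal_px=500.0)
```

Driving β to zero with the stepping motor centers the vehicle's heading on the lane, and predicting the lane point at (t+1) gives the Kalman filter its measurement.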
