• Title/Abstract/Keywords: Yaw error

124 search results (processing time: 0.022 s)

LMI-BASED $H_{\infty}$ LATERAL CONTROL OF AN AUTONOMOUS VEHICLE BY LOOK-AHEAD SENSING

  • Kim, C.S.;Kim, S.Y.;Ryu, J.H.;Lee, M.H.
    • International Journal of Automotive Technology
    • /
    • Vol. 7 No. 5
    • /
    • pp.609-618
    • /
    • 2006
  • This paper presents the lateral control of an autonomous vehicle using a look-ahead sensing system. In look-ahead sensing with an absolute positioning system, a reference lane constructed from straight and circular lane segments was switched by a segment-switching algorithm. To cope with sensor noise and modeling uncertainty, a robust LMI-based $H_{\infty}$ lateral controller was designed by feedback of the lateral offset and yaw angle error at the vehicle's look-ahead point. To verify the safety and performance of the lateral control, a scaled-down vehicle was developed, and its location was detected using an ultrasonic local positioning system. For this mechatronic scaled-down vehicle, the lateral model and its parameters were verified and estimated by a J-turn test. The lateral controllers were tested experimentally for lane changes and reference-lane tracking. The experimental results show that the $H_{\infty}$ controller is robust and performs better than look-down sensing.
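
For illustration, a minimal sketch of the two feedback signals this abstract names (lateral offset and yaw-angle error at a look-ahead point), assuming a straight reference segment given by a point and a heading; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def lookahead_errors(x, y, psi, L, seg_point, seg_heading):
    """Lateral offset and yaw-angle error at a look-ahead point.

    (x, y, psi): vehicle position and heading in the global frame.
    L: look-ahead distance along the vehicle heading.
    seg_point, seg_heading: a point on the straight reference lane
    segment and the segment's heading angle (assumed geometry).
    """
    # Project the look-ahead point ahead of the vehicle.
    px = x + L * np.cos(psi)
    py = y + L * np.sin(psi)
    # Signed lateral offset: component of the vector from the lane
    # point to the look-ahead point, normal to the lane direction.
    dx, dy = px - seg_point[0], py - seg_point[1]
    lateral_offset = -np.sin(seg_heading) * dx + np.cos(seg_heading) * dy
    # Yaw-angle error, wrapped to [-pi, pi).
    yaw_error = (psi - seg_heading + np.pi) % (2 * np.pi) - np.pi
    return lateral_offset, yaw_error
```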

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 4 No. 4
    • /
    • pp.81-86
    • /
    • 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking with a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so the system cannot immediately follow the object in its direction of movement and instead keeps searching for it from the starting point or home position. In this paper, PID control is used to improve the stability of the tracking system, so that the resulting object tracking becomes more stable than before, as can be seen from the tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, a series of experimental results shows that the PID controller makes the tracking system more stable in real time.
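
As an illustration of the control strategy described above, a minimal discrete PID sketch for one channel (e.g., yaw driven by a normalized image-center error); the gains and limits are illustrative assumptions, not values from the paper:

```python
class PID:
    """Discrete PID controller; one instance per channel (yaw, pitch, throttle)."""

    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and estimate the derivative.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate to the actuator's command range.
        return max(-self.out_limit, min(self.out_limit, out))

# Illustrative use: keep the tracked object centered in the image.
yaw_pid = PID(kp=0.4, ki=0.02, kd=0.1, out_limit=1.0)
# Each frame: error = (object_center_x - image_center_x) / image_width
# yaw_cmd = yaw_pid.update(error, dt)
```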

Preliminary Test of Adaptive Neuro-Fuzzy Inference System Controller for Spacecraft Attitude Control

  • Kim, Sung-Woo;Park, Sang-Young;Park, Chan-Deok
    • Journal of Astronomy and Space Sciences
    • /
    • Vol. 29 No. 4
    • /
    • pp.389-395
    • /
    • 2012
  • The problem of spacecraft attitude control is solved using an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS produces a control signal for one of the three axes of a spacecraft's body frame, so in total three ANFISs are constructed for 3-axis attitude control. The fuzzy inference system of the ANFIS is initialized using a subtractive clustering method. The ANFIS is trained by a hybrid learning algorithm using data obtained from attitude control simulations with a state-dependent Riccati equation controller. The training data set for each axis is composed of the state errors for the three axes (roll, pitch, and yaw) and a control signal for one of the three axes. The stability region of the ANFIS controller is estimated numerically based on Lyapunov stability theory, using a numerical method to calculate the Jacobian matrix. To measure the performance of the ANFIS controller, the root mean square error and the correlation factor are used as performance indicators. The performance is tested on two ANFIS controllers trained under different conditions. The test results show that the performance indicators are consistent, in the sense that the ANFIS controller with the larger stability region provides better performance according to the indicators.
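
The two performance indicators named above are standard quantities; a minimal sketch of how they are commonly computed over a reference and an actual attitude history (the paper's exact definitions may differ):

```python
import numpy as np

def rmse(reference, actual):
    """Root mean square error between reference and actual trajectories."""
    e = np.asarray(reference) - np.asarray(actual)
    return np.sqrt(np.mean(e ** 2))

def correlation_factor(reference, actual):
    """Pearson correlation between reference and actual trajectories."""
    return np.corrcoef(np.asarray(reference), np.asarray(actual))[0, 1]
```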

Vision-based Reduction of Gyro Drift for Intelligent Vehicles

  • 경민기;당 코이 누엔;강태삼;민덕기;이정욱
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 21 No. 7
    • /
    • pp.627-633
    • /
    • 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas, and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution, but the gyro's drift problem must then be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error in straight-line movement; however, it shows rather large errors in some moving environments, especially along curved paths. This paper presents a method called VDR (Vision-based Drift Reduction), a system that uses a low-cost vision sensor to compensate for HDR errors.
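
A minimal sketch of the HDR idea the abstract builds on: when the bias-corrected yaw rate is small, the vehicle is assumed to be moving straight and the bias estimate is nudged toward the residual rate. The threshold and step size are illustrative assumptions:

```python
def hdr_step(gyro_rate, bias_est, threshold=0.05, step=1e-4):
    """One HDR update (rates in rad/s).

    If the corrected rate is below the straight-line threshold, a
    binary I-controller pushes the bias estimate toward the residual.
    """
    corrected = gyro_rate - bias_est
    if abs(corrected) < threshold:
        bias_est += step if corrected > 0 else -step
    return corrected, bias_est
```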

A Multistage In-flight Alignment with No Initial Attitude References for Strapdown Inertial Navigation Systems

  • Hong, WoonSeon;Park, Chan Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol. 18 No. 3
    • /
    • pp.565-573
    • /
    • 2017
  • This paper presents a multistage in-flight alignment (MIFA) method for a strapdown inertial navigation system (SDINS), suitable for moving vehicles with no initial attitude references. A SDINS mounted on a moving vehicle frequently loses attitude information for many reasons, which makes solving the navigation equations impossible because the true motion is coupled with an undefined vehicle attitude. To determine the attitude in such a situation, MIFA consists of three stages in order: a coarse horizontal attitude, a coarse heading, and a fine attitude with an adaptive Kalman navigation filter (AKNF). In the coarse horizontal alignment, the pitch and roll are coarsely estimated from a second-order damping loop whose input is the acceleration difference between the SDINS and GPS. To enhance estimation accuracy, the acceleration is smoothed by a scalar filter to reflect the true dynamics of the vehicle, and the effects of the scalar filter gains are analyzed. The coarse heading is then determined from the GPS track angle and the yaw increment of the SDINS. The attitude from these two stages is fed back as the initial values of the AKNF. To reduce the estimated bias errors of the inertial sensors, special emphasis is given to timing synchronization effects in the AKNF measurements. Various real flight tests using a UH-60 helicopter show that MIFA provides a dramatic position error improvement compared to conventional gyrocompass alignment.
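
For illustration, a simplified sketch of the two coarse stages, assuming static leveling from averaged specific force in an NED frame (the paper instead drives a second-order damping loop with SDINS-GPS acceleration differences); names and sign conventions are assumptions:

```python
import numpy as np

def coarse_level(fx, fy, fz):
    """Coarse pitch/roll from averaged accelerometer specific force
    (static approximation, NED frame, accelerometer reads ~[0, 0, -g]
    at rest)."""
    pitch = np.arctan2(fx, np.sqrt(fy**2 + fz**2))
    roll = np.arctan2(-fy, -fz)
    return pitch, roll

def coarse_heading(gps_track_angle, sdins_yaw_increment):
    """Coarse heading: GPS track angle corrected by the yaw accumulated
    by the SDINS since the track angle was measured."""
    return gps_track_angle + sdins_yaw_increment
```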

3D Image Processing by Template Matching and B-Spline Interpolation

  • 양한진;주영훈
    • Journal of Korean Institute of Intelligent Systems
    • /
    • Vol. 19 No. 5
    • /
    • pp.683-688
    • /
    • 2009
  • In this paper, based on vision-based image processing techniques, we propose a faster and more precise reconstruction method, using template matching and B-spline interpolation, for images measured by a non-contact micro-measurement optical instrument. To this end, a matching template and a target template are first extracted from each image. The roll, pitch, and yaw errors of the overlapping region of the two images are then corrected with respect to a reference plane, and the images are registered. The registered region is made continuous by B-spline interpolation. Finally, the applicability of the proposed method is demonstrated through experiments.
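
A minimal sketch of the two building blocks named above, OpenCV template matching and SciPy B-spline smoothing, with illustrative file names and stand-in data; it is not the paper's pipeline:

```python
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

# Locate the template from one image inside the other
# (file names are illustrative).
img = cv2.imread("measured_surface.png", cv2.IMREAD_GRAYSCALE)
tpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
res = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
_, _, _, top_left = cv2.minMaxLoc(res)  # best-match location

# Smooth a height profile across the registered seam with a B-spline.
seam_x = np.linspace(0, 100, 20)
seam_z = np.random.rand(20)          # stand-in for measured heights
tck, _ = splprep([seam_x, seam_z], s=0.5)
smooth_x, smooth_z = splev(np.linspace(0, 1, 200), tck)
```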

Localization Algorithm for Lunar Rover using IMU Sensor and Vision System

  • 강호선;안종우;임현수;황슬우;천유영;김은한;이장명
    • The Journal of Korea Robotics Society
    • /
    • Vol. 14 No. 1
    • /
    • pp.65-73
    • /
    • 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of the dead-reckoning method using an IMU and encoders, which has difficulty estimating the exact distance traveled due to accumulated error and slip. First, since magnetic fields in the lunar environment are not uniform, unlike on Earth, only accelerometer and gyro data were used for localization. These data were applied to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. In addition, the lunar module has a distinctive color that does not occur in the lunar environment, so the module was reliably recognized by applying an HSV color filter to the stereo images taken by the rover. The distance between the rover and the lunar module was then estimated through a SIFT feature-point matching algorithm and geometry. Finally, the estimated Euler angles and distances were used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm was compared to that of the conventional algorithm to show its superiority.
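
A minimal sketch of the HSV color filtering step described above, using OpenCV; the HSV bounds are illustrative assumptions, not the paper's values:

```python
import cv2
import numpy as np

def find_module(bgr_image, hsv_low=(100, 120, 80), hsv_high=(130, 255, 255)):
    """Mask pixels whose hue matches the module's distinctive color and
    return the centroid of the largest blob, or None if nothing matches."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (cx, cy) in pixels
```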

Hot Spot Detection of Thermal Infrared Image of Photovoltaic Power Station Based on Multi-Task Fusion

  • Xu Han;Xianhao Wang;Chong Chen;Gong Li;Changhao Piao
    • Journal of Information Processing Systems
    • /
    • Vol. 19 No. 6
    • /
    • pp.791-802
    • /
    • 2023
  • Manual inspection of photovoltaic (PV) panels cannot keep up with the inspection requirements of large-scale PV power plants. We present a hot spot detection and positioning method to detect hot spots in batches and locate their latitudes and longitudes. First, a network based on the YOLOv3 architecture is utilized to identify hot spots. The innovation is to modify the RU_1 unit in the YOLOv3 model for hot spot detection in the far field of view and to add a neural-network residual unit for fusion. In addition, because of the misidentification problem in infrared images of solar PV panels, the DeepLab v3+ model is adopted to segment the PV panels and filter out misidentifications caused by bright spots on the ground. Finally, the latitude and longitude of each hot spot are calculated by a geometric positioning method using known information such as the drone's yaw angle, shooting height, and lens field of view. The experimental results indicate that the hot spot recognition accuracy is above 98%. With the drone kept 25 m above the ground, the hot spot positioning error is at the decimeter level.
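
For illustration, a simplified flat-ground sketch of the geometric positioning step, mapping a hot spot's pixel location to an east/north ground offset from the drone using the yaw angle, height, and field of view; the rotation convention and names are assumptions:

```python
import math

def hotspot_offset(px, py, img_w, img_h, height, fov_deg, yaw_deg):
    """Ground offset (east, north) in meters of a hot spot from the point
    directly under a nadir-pointing camera; simplified flat-ground model."""
    ground_w = 2 * height * math.tan(math.radians(fov_deg) / 2)  # footprint (m)
    mpp = ground_w / img_w                                       # meters/pixel
    # Pixel offset from the image center, in camera axes.
    dx = (px - img_w / 2) * mpp
    dy = (img_h / 2 - py) * mpp
    # Rotate camera axes into east/north by the drone's yaw.
    yaw = math.radians(yaw_deg)
    east = dx * math.cos(yaw) + dy * math.sin(yaw)
    north = -dx * math.sin(yaw) + dy * math.cos(yaw)
    return east, north

# To reach latitude/longitude near the drone's position (small offsets):
# dlat = north / 111320; dlon = east / (111320 * cos(lat_rad)).
```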

Localization of Outdoor Wheeled Mobile Robots using Indirect Kalman Filter Based Sensor Fusion

  • 권지욱;박문수;김태은;좌동경;홍석교
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 14 No. 8
    • /
    • pp.800-808
    • /
    • 2008
  • This paper presents a localization algorithm for an outdoor wheeled mobile robot using a sensor fusion method based on an indirect Kalman filter (IKF). The wheeled mobile robot considered in this paper is approximated as a two-wheeled mobile robot. The robot carries an IMU and encoders for inertial positioning, as well as GPS. Because the IMU and encoders have bias errors, the estimated position can diverge from the measured data when the robot moves for a long time. Because of many natural and artificial conditions (e.g., the atmosphere or the GPS receiver itself), GPS has a maximum error of about 10-20 m even when the robot moves for a short time. Thus, a fusion algorithm for the IMU, encoders, and GPS is needed. For sensor fusion, we use an IKF that estimates the position errors of the mobile robot. Because it operates on position errors, the proposed IKF can also be used for other autonomous agents (e.g., UAVs, UGVs). The stability of the proposed sensor fusion method is shown using the fact that the covariance of the IKF error state is bounded. To evaluate the performance of the proposed algorithm, simulation and experimental results of the IKF for the pose (x position, y position, and yaw angle) of the outdoor wheeled mobile robot are presented.
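
A minimal sketch of an indirect (error-state) Kalman filter over the errors [dx, dy, dyaw], with GPS position residuals as measurements, in the spirit of the abstract; the noise values are illustrative assumptions:

```python
import numpy as np

P = np.eye(3) * 0.1                 # error covariance
Q = np.diag([0.01, 0.01, 0.001])    # process noise (illustrative)
R = np.diag([15.0**2, 15.0**2])     # GPS noise, ~10-20 m (illustrative)
H = np.array([[1.0, 0, 0],
              [0, 1.0, 0]])         # GPS observes position errors only

def ikf_step(x_err, P, nominal_pose, gps_xy):
    """One IKF cycle: propagate the error covariance, then correct the
    error state with the GPS-minus-dead-reckoning position residual."""
    P = P + Q                              # errors locally constant
    z = np.asarray(gps_xy) - nominal_pose[:2]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_err = x_err + K @ (z - H @ x_err)
    P = (np.eye(3) - K @ H) @ P
    return x_err, P  # x_err is fed back to correct the nominal pose
```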

Non-restraint Master Interface of Minimally Invasive Surgical Robot Using Hand Motion Capture

  • 장익규
    • Journal of Biomedical Engineering Research
    • /
    • Vol. 37 No. 3
    • /
    • pp.105-111
    • /
    • 2016
  • Introduction: A surgical robot is an alternative instrument that takes over difficult and precise surgical operations; it should be operationally intuitive in order to transfer natural motions. Hand motion through a contacting mechanical handle in a surgical robot master interface suffers limitations such as mechanical singularity, isotropy, and coupling problems. In this paper, we confirm and verify the feasibility of an intuitive non-restraint master interface that tracks hand motion using infrared cameras and only three reflective markers, without a hardware handle, for the surgical robot master interface. Materials & methods: We configured the software and hardware system, arranging six infrared cameras and attaching three reflective markers to the hand to measure three-dimensional coordinates, from which the seven motions of grasp, yaw, pitch, roll, px, py, and pz are obtained. We then connected the virtual master to the slave surgical robot (Laparobot) and observed its feasibility. To verify the measured motion, we compared the output of the non-restraint master with that of a clinometer (and protractor) over 0 to 180 degrees at 10-degree intervals with 1000 samples, recording the standard deviation as the error rate. Results: We confirmed that the average angle values of the non-restraint master interface correspond accurately to the clinometer (and protractor) results, with low error rates during motion. Investigation & Conclusion: In this paper, we confirmed the feasibility and accuracy of a 3D non-restraint master interface that offers intuitive motion without a contacting hardware handle. As a result, high intuitiveness and dexterity of the surgical robot can be expected.
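
For illustration, a minimal sketch of recovering yaw/pitch/roll from three reflective-marker positions by building an orthonormal hand frame; the marker layout and Euler convention (ZYX) are assumptions, not the paper's:

```python
import numpy as np

def hand_frame_angles(p1, p2, p3):
    """Build an orthonormal frame from three marker positions and extract
    ZYX Euler angles (yaw, pitch, roll), in radians."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    x = p2 - p1                     # assumed hand x-axis: marker 1 -> 2
    x = x / np.linalg.norm(x)
    v = p3 - p1                     # third marker fixes the hand plane
    z = np.cross(x, v)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.column_stack((x, y, z))  # rotation: hand frame -> camera frame
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll
```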