• Title/Summary/Keyword: Robust Robot Control


코드를 이용한 초음파 동시구동 시스템 (Simultaneous Driving System of Ultrasonic Sensors Using Codes)

  • 김춘승;최병준;이상룡;이연정
    • Journal of Institute of Control, Robotics and Systems
    • Vol. 10, No. 11
    • pp.1028-1036
    • 2004
  • Ultrasonic sensors are widely used in mobile robot applications to recognize external environments because they are cheap, easy to use, and robust under varying lighting conditions. In most cases, a single ultrasonic sensor is used to measure the distance to an object based on time-of-flight (TOF) information, whereas multiple sensors are used to recognize the shape of an object, such as a corner, plane, or edge. However, the conventional sequential driving technique involves a long measurement time. This problem can be resolved by pulse coding of the ultrasonic signals, which allows multiple sensors to fire simultaneously and adjacent objects to be distinguished. Accordingly, this paper presents a new simultaneous coded driving system for an ultrasonic sensor array for object recognition in autonomous mobile robots. The proposed system is designed and implemented: a micro-controller unit is built around a DSP, Polaroid 6500 ranging modules are modified for firing the coded signals, and a 5-channel coded-signal generating board is made using an FPGA. To verify the proposed method, experiments were conducted in an environment with overlapping signals, and the flight distances for each sensor were obtained from the received overlapping signals using correlations and conversion to a bipolar PCM-NRZ signal.
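
The correlation step can be sketched as follows. This is a minimal, hypothetical example, not the paper's implementation: the sampling rate, buffer length, and the two bipolar codes are illustrative assumptions, chosen only so that each sensor's echo can be picked out of one shared receive buffer by cross-correlation.

```python
import numpy as np

def tof_distances(received, codes, fs, c=343.0):
    """Estimate each sensor's one-way distance from a single received
    signal containing several echoes, by cross-correlating the signal
    with every sensor's transmit code and locating the correlation peak."""
    distances = []
    for code in codes:
        corr = np.correlate(received, code, mode="full")
        lag = int(np.argmax(corr)) - (len(code) - 1)  # round-trip delay in samples
        tof = lag / fs                                # round-trip time of flight [s]
        distances.append(c * tof / 2.0)               # one-way distance [m]
    return distances

# Hypothetical setup: two quasi-orthogonal bipolar codes whose echoes
# share one received buffer (fs and the codes are illustrative values).
fs = 40_000
code_a = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
code_b = np.array([1, 1, -1, 1, -1, 1, -1, -1], dtype=float)
rx = np.zeros(400)
rx[100:108] += code_a   # echo of sensor A, round-trip delay of 100 samples
rx[150:158] += code_b   # echo of sensor B, round-trip delay of 150 samples
d_a, d_b = tof_distances(rx, [code_a, code_b], fs)
```

Because the codes are nearly orthogonal, each correlation peaks only at its own sensor's delay, which is what lets all sensors fire at once.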

외곽선 영상과 Support Vector Machine 기반의 문고리 인식을 이용한 문 탐지 (Door Detection with Door Handle Recognition based on Contour Image and Support Vector Machine)

  • 이동욱;박중태;송재복
    • Journal of Institute of Control, Robotics and Systems
    • Vol. 16, No. 12
    • pp.1226-1232
    • 2010
  • A door can serve as a feature for place classification and localization during the navigation of a mobile robot in indoor environments. This paper proposes a door detection method based on the recognition of various door handles using the generalized Hough transform (GHT) and a support vector machine (SVM). The contour and color histogram of a door handle extracted from the database are used in the GHT and SVM, respectively. The door recognition scheme consists of four steps. The first step determines the region of interest (ROI) images, defined by the color information and the environment around the door handle, for stable recognition. In the second step, the door handle is recognized in the ROI image using the GHT method, and image patches are extracted at the position of the recognized door handle. In the third step, each extracted patch is classified as a door handle or not by the SVM classifier. Finally, the door position is probabilistically determined from the recognized door handle. Experimental results show that the proposed method can recognize various door handles and detect doors in a robust manner.
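
The GHT voting idea can be illustrated with a minimal translation-only sketch. The handle template, scene size, and edge points below are invented for illustration, and a full GHT would also index the R-table by gradient orientation to handle rotation, which this sketch omits.

```python
import numpy as np

def ght_votes(edge_points, template_offsets, shape):
    """Translation-only generalized Hough transform: each edge point votes for
    every reference position consistent with one of the template's offsets."""
    acc = np.zeros(shape, dtype=int)
    for (py, px) in edge_points:
        for (dy, dx) in template_offsets:
            ry, rx = py - dy, px - dx
            if 0 <= ry < shape[0] and 0 <= rx < shape[1]:
                acc[ry, rx] += 1      # vote for this candidate reference point
    return acc

# Hypothetical handle template: four contour points around a reference point.
template = [(0, -2), (0, 2), (-1, 0), (1, 0)]
# Edge points of a scene containing the template centred at (10, 15), plus noise.
edges = [(10 + dy, 15 + dx) for (dy, dx) in template] + [(3, 3), (18, 4)]
acc = ght_votes(edges, template, (24, 24))
best = tuple(int(v) for v in np.unravel_index(np.argmax(acc), acc.shape))
```

The accumulator peaks at the true reference position (10, 15), which here collects one vote from each of the four template contour points.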

지역 및 전역 환경에 대한 세선화 기반 위상지도의 작성 (Thinning-Based Topological Map Building for Local and Global Environments)

  • 권태범;송재복
    • Journal of Institute of Control, Robotics and Systems
    • Vol. 12, No. 7
    • pp.693-699
    • 2006
  • An accurate and compact map is essential to an autonomous mobile robot system. For navigation, an occupancy grid map is efficient because the environment is represented by a probability distribution, but it is difficult to apply to large environments since it requires memory proportional to the environment size. As an alternative, a topological map can represent the environment in terms of discrete nodes with edges connecting them. Such maps are usually constructed from Voronoi-like graphs, but in this paper the topological map is incrementally built from the local grid map using a thinning algorithm. This algorithm extracts only the meaningful topological information in real time by using the C-obstacle concept, and it is robust to environment changes because its underlying local grid map is constructed based on the Bayesian update formula. In this paper, the position probability is defined to evaluate the quantitative reliability of the end nodes of this thinning-based topological map (TTM). The global TTM can be constructed by merging the local TTMs, matching the reliable end nodes determined by the position probability. It is shown that the proposed TTM represents the environment accurately in real time and is readily extended to the global TTM.
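
The Bayesian update formula underlying the local grid map can be sketched in log-odds form, where each observation simply adds the log-odds of the inverse sensor model to the cell. The measurement probabilities 0.7/0.3 below are illustrative values, not the paper's.

```python
import math

def logodds(p):
    """Log-odds representation of a probability."""
    return math.log(p / (1.0 - p))

def update_cell(l_prior, p_meas):
    """Bayesian occupancy update of one grid cell in log-odds form:
    l_post = l_prior + log-odds of the inverse sensor model."""
    return l_prior + logodds(p_meas)

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# One cell observed occupied twice (p=0.7) and free once (p=0.3),
# starting from an uninformative prior p=0.5 (log-odds 0).
l = 0.0
for p_meas in (0.7, 0.7, 0.3):
    l = update_cell(l, p_meas)
p_cell = prob(l)   # net evidence equals a single "occupied" observation
```

Because updates are additive in log-odds, conflicting observations cancel gracefully, which is what makes the underlying grid robust to environment changes.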

천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정 (Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion)

  • 신옥식;박찬국
    • Journal of Institute of Control, Robotics and Systems
    • Vol. 18, No. 1
    • pp.54-61
    • 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. The fusion filter generally utilizes the position of feature points in the image as the measurement. However, when the camera image is not obtained, this method can cause position error due to the bias of the MEMS IMU if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the position of the feature points and the velocity of the camera determined from the optical flow of the feature points. Experiments verify that the performance of the proposed method is robust to the IMU bias compared to the method that uses only the position of the feature points.
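
The optical-flow velocity measurement rests on the pinhole relation: for landmarks on a ceiling plane at depth Z, an image flow of u̇ pixels/s corresponds to a camera velocity v = Z·u̇/f. A minimal sketch, with depth, flow, and focal length as illustrative values rather than the GTB's actual parameters:

```python
def camera_velocity_from_flow(flow_px_per_s, depth_m, focal_px):
    """Pinhole model for a camera translating parallel to the ceiling:
    image flow u_dot = f * v / Z, so the camera velocity is v = Z * u_dot / f."""
    return depth_m * flow_px_per_s / focal_px

# 120 px/s of feature flow, ceiling 2.5 m away, focal length 600 px.
v = camera_velocity_from_flow(flow_px_per_s=120.0, depth_m=2.5, focal_px=600.0)
```

A velocity derived this way can enter the filter as an extra measurement row alongside the feature-point positions, which is the fusion the abstract describes.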

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • Vol. 19, No. 6
    • pp.730-744
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also assist elderly and frail persons and thereby improve their lives, for example through an external device, similar to a robot, acting as a personal assistant; the inferred information is then used both online to assist the person and offline to support the personal assistant. Recognizing human activities nevertheless remains problematic because of the large variations in how actions are executed. The major purpose of this paper is an efficient and simple recognition method from egocentric camera data only, using convolutional neural networks and deep learning, that is robust against these factors of variability in action execution. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin of 61% when using egocentric camera data only, by more than 44% when using egocentric camera and several stationary-camera data, and by more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.

TWR 기반 고정밀 측위를 위한 단일 이상측정치 제거 기술 (Single Outlier Removal Technology for TWR based High Precision Localization)

  • 이창은;성태경
    • The Journal of Korea Robotics Society
    • Vol. 12, No. 3
    • pp.350-355
    • 2017
  • UWB (Ultra-Wideband) refers to a system with a bandwidth of over 500 MHz or more than 20% of the center frequency. It is robust against channel fading and has a wide signal bandwidth. Using an IR-UWB based ranging system, decimeter-level ranging accuracy can be obtained; furthermore, an IR-UWB system enables high-resolution acquisition over glass or cement. In recent years, IR-UWB based ranging chipsets have become cheap and popular, making it possible to implement positioning systems accurate to several tens of centimeters. The system can be configured as a one-way ranging (OWR) positioning system for fast ranging or a TWR (two-way ranging) positioning system for cheap and robust ranging. On the other hand, a ranging-based positioning system is limited in the number of terminals it can localize, because the communication procedure required for ranging takes time. To overcome this problem, code multiplexing and channel multiplexing are performed. However, errors occur in the measurements due to interference between channels and codes, multipath, and so on. Measurement filtering is used to reduce the measurement error, but more fundamentally, techniques for removing such outlier measurements should be studied. First, TWR based positioning is analyzed from a stochastic point of view and the effects of outlier measurements are summarized. A positioning algorithm that analytically identifies and removes a single outlier is then summarized and extended to three dimensions. Through simulation, we verified that the algorithm detects and removes single outliers.
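
The single-outlier idea can be sketched with a leave-one-out residual test. This is an illustrative 2-D stand-in, not the paper's analytical method: the anchor layout, the Gauss-Newton solver, and the 2 m outlier magnitude are all assumptions.

```python
import numpy as np

def solve_position(anchors, ranges, x0, iters=20):
    """Gauss-Newton least-squares position fix from anchor ranges."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]     # Jacobian of each range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + dx
    return x

def remove_single_outlier(anchors, ranges, x0):
    """Drop each range in turn, refit, and keep the subset whose fit leaves
    the smallest residual norm; the dropped index is the declared outlier."""
    best = (None, np.inf, None)
    for i in range(len(ranges)):
        keep = np.arange(len(ranges)) != i
        x = solve_position(anchors[keep], ranges[keep], x0)
        res = np.linalg.norm(ranges[keep] - np.linalg.norm(anchors[keep] - x, axis=1))
        if res < best[1]:
            best = (i, res, x)
    return best[0], best[2]

# Hypothetical 2-D setup: four anchors, one range corrupted by a 2 m outlier.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = np.linalg.norm(anchors - np.array([3.0, 4.0]), axis=1)
ranges[2] += 2.0
outlier_idx, est = remove_single_outlier(anchors, ranges, x0=[5.0, 5.0])
```

Only the subset that excludes the corrupted range is self-consistent, so its residual collapses and both the outlier index and the clean position are recovered.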

Simultaneous and Multi-frequency Driving System of Ultrasonic Sensor Array for Object Recognition

  • Park, S.C.;Choi, B.J.;Lee, Y.J.;Lee, S.R.
    • ICROS Conference Proceedings
    • ICCAS 2004
    • pp.582-587
    • 2004
  • Ultrasonic sensors are widely used in mobile robot applications to recognize external environments, because they are cheap, easy to use, and robust under varying lighting conditions. However, recognizing objects with an ultrasonic sensor is not easy due to its characteristics, such as a narrow beam width and the absence of a reflected signal from an inclined object. As one alternative to resolve these problems, the use of multiple sensors has been studied. A sequential driving system needs a long measurement time and does not take advantage of multiple sensors, while a simultaneous pulse-coded driving system of an ultrasonic sensor array cannot measure short distances as the code becomes long. This problem can be resolved by multi-frequency driving of the ultrasonic sensors, which allows multiple sensors to be fired simultaneously and adjacent objects to be distinguished. Accordingly, this paper presents a simultaneous and multi-frequency driving system for an ultrasonic sensor array for object recognition. The proposed system is designed and implemented using a DSP and an FPGA: a micro-controller board is built around the DSP, Polaroid 6500 ranging modules are modified for firing the multi-frequency signals, and a 5-channel frequency-modulated signal generating board is made using the FPGA. To verify the proposed method, experiments were conducted in an environment with overlapping signals, and the flight distances for each sensor were obtained by filtering the received overlapping signals and calculating the times of flight.
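
The per-channel filtering step can be sketched with matched filters: each channel correlates the shared receive buffer with its own frequency burst and reads the echo arrival off the strongest peak. The sample rate, burst length, the 40/48 kHz channel frequencies, and the delays are illustrative assumptions, not the paper's design values.

```python
import numpy as np

fs = 400_000                                  # sample rate (illustrative)
n = int(0.2e-3 * fs)                          # 0.2 ms burst length (80 samples)
t = np.arange(n) / fs
bursts = {40_000: np.sin(2 * np.pi * 40_000 * t),
          48_000: np.sin(2 * np.pi * 48_000 * t)}

# Shared received buffer: each channel's echo arrives at its own delay.
rx = np.zeros(4000)
delays = {40_000: 800, 48_000: 1100}          # round-trip delays in samples
for f, d in delays.items():
    rx[d:d + n] += bursts[f]

def tof_matched_filter(rx, burst, fs):
    """Per-channel matched filter: correlate the received buffer with the
    channel's own burst and take the strongest peak as the echo arrival."""
    corr = np.correlate(rx, burst, mode="valid")
    return int(np.argmax(np.abs(corr))) / fs

tof_a = tof_matched_filter(rx, bursts[40_000], fs)
tof_b = tof_matched_filter(rx, bursts[48_000], fs)
```

Each matched filter responds strongly only to its own frequency, so both sensors can fire at once and still be measured independently.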


모듈형 행동선택네트워크를 이용한 거울뉴런과 마음이론 기반의 의도대응 모델 (An Intention-Response Model based on Mirror Neuron and Theory of Mind using Modular Behavior Selection Networks)

  • 채유정;조성배
    • Journal of KIISE
    • Vol. 42, No. 3
    • pp.320-327
    • 2015
  • Recently, service robots have been commercialized in various fields, but most robot agents depend on explicit user commands, and it is difficult for them to achieve their goals by responding quickly to environmental changes on the basis of unstable sensor information. To address this problem, this paper models the mirror neuron and theory of mind systems, which explain how humans understand and respond to the intentions of others, and demonstrates their usefulness by applying them to a robot agent. To realize the fast, intuitive responses and sub-goal-oriented character of mirror neurons, the proposed intention-response model uses behavior selection networks that take both the environment and the goal into account. In addition, to implement the theory-of-mind system, which responds based on long-term behavior planning, behaviors are planned in units of sub-goals using a hierarchical planning technique, and the behavior selection network modules are controlled accordingly. Experiments on various scenarios confirmed that appropriate response behaviors were generated for external stimuli.
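
A behavior selection network of the kind used here can be sketched in a few lines. The module names, preconditions, and spreading weights below are hypothetical, chosen only to show the mechanism: the environment boosts executable modules, goals boost modules that achieve them, and goal activation flows backwards to the modules that enable them.

```python
def select_behavior(modules, state, goals, gain=1.0):
    """One activation-spreading step of a minimal behavior selection network,
    returning the most activated module whose preconditions currently hold."""
    for m in modules.values():
        if m["pre"] <= state:                 # preconditions satisfied: boost
            m["act"] += gain
    for m in modules.values():
        if m["add"] & goals:                  # effects serve a goal: boost
            m["act"] += gain
            for other in modules.values():    # backward spreading to enablers
                if other["add"] & m["pre"]:
                    other["act"] += 0.5 * gain
    ready = [k for k, m in modules.items() if m["pre"] <= state]
    return max(ready, key=lambda k: modules[k]["act"]) if ready else None

# Hypothetical modules for an object-fetching intention; names are made up.
modules = {
    "approach":  {"pre": {"object_seen"}, "add": {"near_object"}, "act": 0.0},
    "grasp":     {"pre": {"near_object"}, "add": {"holding"},     "act": 0.0},
    "wave_back": {"pre": {"greeted"},     "add": {"responded"},   "act": 0.0},
}
chosen = select_behavior(modules, state={"object_seen"}, goals={"holding"})
```

Here "approach" wins: it is executable now and it enables the goal-achieving "grasp", so it receives both environmental and backward goal activation.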

도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘 (LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving)

  • 김종호;이호준;이경수
    • Journal of Auto-vehicle Safety Association
    • Vol. 13, No. 4
    • pp.14-19
    • 2021
  • This paper presents a LiDAR static obstacle map based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of the ego motion and of perceived objects. Therefore, in situations where noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian rule based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, the vehicle odometry over the time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method, and the velocity and yaw rate of the vehicle are estimated by an Extended Kalman Filter (EKF) that uses this odometry as its measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show a more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
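
The NDT matching step can be illustrated with a minimal 2-D sketch: the map is summarized as a normal distribution per grid cell, and a candidate shift is scored by the Gaussian likelihood of the shifted scan points. The cluster of map points, the cell size, and the translation-only search are illustrative simplifications (a real NDT match also optimizes rotation and uses gradient-based search).

```python
import numpy as np

def ndt_cells(points, cell=2.0):
    """Build NDT cells: mean and covariance of the map points in each grid cell."""
    cells = {}
    keys = np.floor(points / cell).astype(int)
    for k in set(map(tuple, keys)):
        pts = points[(keys == k).all(axis=1)]
        if len(pts) >= 3:                    # need enough points for a covariance
            cells[k] = (pts.mean(axis=0), np.cov(pts.T) + 1e-3 * np.eye(2))
    return cells

def ndt_score(cells, scan, shift, cell=2.0):
    """Score a candidate shift: summed Gaussian likelihood of each shifted
    scan point under its cell's normal distribution."""
    score = 0.0
    for p in scan + shift:
        k = tuple(np.floor(p / cell).astype(int))
        if k in cells:
            mu, cov = cells[k]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score

# Hypothetical map: a small cluster of static obstacle points; the scan is
# the same cluster seen after the vehicle moved +0.3 m in x.
map_pts = np.array([[0.8, 0.9], [1.0, 1.1], [1.2, 1.0],
                    [0.9, 1.2], [1.1, 0.8], [1.0, 0.95]])
cells = ndt_cells(map_pts)
scan = map_pts - np.array([0.3, 0.0])
candidates = [np.array([c, 0.0]) for c in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6)]
best = max(candidates, key=lambda s: ndt_score(cells, scan, s))
```

The best-scoring shift recovers the vehicle's displacement, which is the odometry increment the EKF then consumes as a measurement.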

전자 튜너 조정을 위한 위치와 방향 인식 (Position and Orientation Recognition for Adjusting Electronic Tuners)

  • 양재호;공영준;이문규
    • Journal of the Korean Society for Precision Engineering
    • Vol. 16, No. 2 (Serial No. 95)
    • pp.39-49
    • 1999
  • This paper describes the development of a vision-aided position and orientation recognition system for automatically adjusting electronic tuners, which control the waveform through rotating variable resistors. The recognition system estimates the center and the angle of the tuner grooves so that the main controller may correct the deviation from the ideal position and thereby manipulate the variable resistors automatically. A robust algorithm is suggested that estimates the center and the angle of the tuner grooves quickly and precisely from a source image subject to lighting variance and video noise. In the algorithm, morphological filtering, 8-chain coding, and invariant moments are used sequentially to extract the relevant image segments. The performance of the proposed system was evaluated using a set of real specimens, and the results indicate that it works well enough to be used practically in real manufacturing lines. If the system adopts a high-speed frame grabber enabling real-time image processing, it can also be applied to the positioning of robot manipulators as well as to automated PCB adjusters.
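
The invariant-moments step can be sketched with the first Hu moment, which stays constant when a segmented shape is translated or rotated. The binary blobs below are invented test shapes, not the tuner-groove images from the paper.

```python
import numpy as np

def hu_first(img):
    """First Hu invariant moment, eta20 + eta02: invariant to translation,
    scale, and rotation, so it can match a segmented shape regardless of pose."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                        # area (zeroth moment)
    cx, cy = xs.mean(), ys.mean()        # centroid
    mu20 = ((xs - cx) ** 2).sum()        # second-order central moments
    mu02 = ((ys - cy) ** 2).sum()
    return mu20 / m00**2 + mu02 / m00**2   # normalized: eta20 + eta02

# A 3x4 blob, the same blob translated, and the same blob rotated by 90 degrees.
a = np.zeros((12, 12), dtype=int); a[2:5, 3:7] = 1
b = np.zeros((12, 12), dtype=int); b[6:9, 5:9] = 1
c = np.zeros((12, 12), dtype=int); c[2:6, 3:6] = 1
```

All three blobs yield the same value, which is what lets the algorithm identify the groove segment wherever and however it appears in the frame.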
