• Title/Summary/Keyword: Robot Control Data

Search Results: 712

Reflexive Autonomous Vehicle Control Using Neural Networks (신경회로망을 이용한 반사적인 무인차 제어)

  • Kim, Yoo-Seok; Lee, Jang-Gyu
    • Proceedings of the KIEE Conference / 1991.07a / pp.888-891 / 1991
  • In this paper, we present a new neural-network approach to mobile robot motion control in an indoor environment. The vehicle has two powered wheels and four passive casters that allow free motion, and it uses sonar sensors, infrared sensors, an internal odometer, and contact sensors. Two experiments were conducted to demonstrate our objectives. In the first, the vehicle executes reflexive motor control to maintain a constant distance from a boundary. In the second, in addition to boundary following, the vehicle avoids a block obstacle along its path. Without prior knowledge of the external environment, we accomplished these tasks by employing a simple, reactive stimulus-response neural network scheme that associates sensor data with the vehicle's actions.
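
A reactive stimulus-response mapping of this kind can be sketched as a single-layer network trained with the delta rule to associate range readings with a steering command. The sensor layout, training pairs, and gains below are illustrative assumptions, not the paper's network:

```python
import numpy as np

# Hypothetical training pairs: [left range, front range, right range] -> turn
X = np.array([[0.2, 1.0, 1.0],   # wall close on left  -> steer right (negative)
              [1.0, 1.0, 0.2],   # wall close on right -> steer left (positive)
              [1.0, 0.2, 1.0],   # obstacle ahead      -> steer hard right
              [1.0, 1.0, 1.0]])  # all clear           -> go straight
y = np.array([-0.5, 0.5, -1.0, 0.0])

w = np.zeros(3)
b = 0.0
for _ in range(5000):                    # delta-rule (LMS) updates
    err = y - (X @ w + b)
    w += 0.5 * X.T @ err / len(X)
    b += 0.5 * err.mean()

def steer(ranges):
    """Reflexive response: map a sensor pattern directly to a turn command."""
    return float(np.asarray(ranges) @ w + b)
```

After training, `steer` reacts to raw sensor patterns without any environment model, which is the essence of the stimulus-response scheme the abstract describes.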


The Gripping Force Control of Robot Manipulator Using the Repeated Learning Function Techniques (반복 학습기능을 이용한 로봇 매니퓰레이터의 파지력제어)

  • Kim, Tea-Kwan; Baek, Seung-Hack; Kim, Tea-Soo
    • Journal of the Korean Society of Industry Convergence / v.18 no.1 / pp.45-52 / 2015
  • In this paper, a repeated learning technique for neural networks was used in the gripping-force control algorithm. A hybrid control system was introduced, and the manipulator's fingers were reconfigured from two to three for more secure gripping. The data were obtained using the repeated learning technique for gripping force. In the future, an adjustable gripping force will be obtained and the accuracy improved using artificial intelligence techniques.
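
The repeated-learning idea can be sketched as an iterative learning control update: each trial applies a force command, measures the error against the desired grip force, and corrects the next trial's command. The target force, learning gain, and gripper response below are assumed values, not the paper's controller:

```python
desired = 5.0                 # target grip force (N), assumed
gamma = 0.6                   # learning gain, assumed

def plant(u):
    """Hypothetical static gripper response to a force command u."""
    return 0.8 * u

u = 0.0
errors = []
for trial in range(20):
    f = plant(u)              # apply this trial's command, measure force
    e = desired - f
    errors.append(abs(e))
    u = u + gamma * e         # repeated-learning correction for next trial

# the error contracts by |1 - gamma * 0.8| = 0.52 per trial
```

Each repetition reuses the previous trial's error, so tracking improves across trials even though the plant gain is never identified explicitly.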

Real-time Robotic Vision Control Scheme Using Optimal Weighting Matrix for Slender Bar Placement Task (얇은 막대 배치작업을 위한 최적의 가중치 행렬을 사용한 실시간 로봇 비젼 제어기법)

  • Jang, Min Woo; Kim, Jae Myung; Jang, Wan Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.1 / pp.50-58 / 2017
  • This paper proposes a real-time robotic vision control scheme using a weighting matrix to efficiently process the vision data obtained during robot movement toward a target. The scheme is based on a vision system model that can actively control the camera parameters and robot position changes, an advance over previous studies. The vision control algorithm involves parameter estimation, joint angle estimation, and weighting matrix models. To demonstrate the effectiveness of the proposed control scheme, this study is divided into two parts: not applying and applying the weighting matrix to the vision data obtained while the camera moves toward the target. Finally, the position accuracy of the two cases is compared by experimentally performing the slender bar placement task.
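
The core effect of a weighting matrix on vision data can be sketched as weighted least squares: measurements assumed more reliable (here, those taken closer to the target, with lower noise) receive larger weights in the parameter estimate. The measurement model and noise profile below are illustrative, not the paper's vision-system model:

```python
import numpy as np

rng = np.random.default_rng(1)

true_p = np.array([2.0, -1.0])                       # parameters to estimate
A = rng.normal(size=(50, 2))                         # stacked measurement model
sigma = np.linspace(1.0, 0.1, 50)                    # noise shrinks near target
y = A @ true_p + rng.normal(scale=sigma)

W = np.diag(1.0 / sigma**2)                          # weight = 1 / variance
p_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)    # weighted LS estimate
p_ols = np.linalg.lstsq(A, y, rcond=None)[0]         # unweighted, for contrast
```

With inverse-variance weights, the later (cleaner) samples dominate the estimate, which is the rationale for weighting vision data gathered as the camera approaches the target.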

Localization and 3D Polygon Map Building Method with Kinect Depth Sensor for Indoor Mobile Robots (키넥트 거리센서를 이용한 실내 이동로봇의 위치인식 및 3 차원 다각평면 지도 작성)

  • Gwon, Dae-Hyeon; Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems / v.22 no.9 / pp.745-752 / 2016
  • We suggest an efficient Simultaneous Localization and 3D Polygon Map Building (SLAM) method with a Kinect depth sensor for mobile robots in indoor environments. In this method, the Kinect depth data is separated into row planes so that scan line segments lie on each row plane. After grouping all scan line segments from all row planes into line groups, a set of 3D scan polygons is fitted from each line group. A map-matching algorithm then identifies pairs of scan polygons and existing map polygons in 3D, and localization is performed to obtain the correct pose of the mobile robot. For 3D map building, each 3D map polygon is created or updated by merging each matched 3D scan polygon, handling scan and map edges efficiently. The validity of the proposed 3D SLAM algorithm is demonstrated via experiments.
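
One step in this pipeline, fitting a planar patch to a group of 3D scan points, can be sketched with a standard SVD least-squares plane fit. The synthetic "wall" data and the fitting routine below are a generic stand-in for the paper's polygon-fitting step:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: return (centroid, unit normal).

    The normal is the right singular vector with the smallest singular
    value, i.e. the direction of least spread of the centered points.
    """
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

# synthetic vertical wall: points near the z = 0 plane with small depth noise
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(200, 3))
pts[:, 2] = 0.01 * rng.normal(size=200)

centroid, normal = fit_plane(pts)
```

Grouped scan line segments fitted this way yield the planar polygons that the matching and merging stages then operate on.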

Remote Dynamic Control of AM1 Robot Using Network (네트워크를 이용한 AM1 로봇의 원격 동적 제어)

  • Kim, Seong-Il; Yoon, Sin-Il; Bae, Gil-Ho; Lee, Jin; Han, Seong-Hyeon
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.04a / pp.556-560 / 2002
  • In this paper, we propose a remote controller for a robot manipulator using a local area network (LAN) and the Internet. To do this, we developed a server-client system as used in the networking field. The client system runs on any computer at a remote site, allowing the user to log in to the server and manage the remote factory. The server system is a computer that controls the manipulator and waits for access from clients. The server system consists of the control algorithms needed to drive the manipulator and a networking system that transfers images showing the state of the workplace and receives the Tmp data to run the manipulator. The client system consists of a 3D (three-dimensional) graphical user interface for teaching and off-line tasks such as simulation, and an external hardware interface that makes teaching easier for the user. Using this server-client system, a user at a remote site can edit the manipulator's work schedule, run the machine after the schedule is transferred, and monitor the results of the task.
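
The server-client pattern described above can be sketched with Python's standard socket module on localhost. The one-shot "RUN"/"OK" message exchange is purely illustrative; the paper's protocol also carries images and teaching data:

```python
import socket
import threading

ready = threading.Event()
state = {}

def server():
    """Server side: waits for a client, reads a command, acknowledges it."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))            # let the OS pick a free port
        s.listen(1)
        state["port"] = s.getsockname()[1]
        ready.set()                          # signal the client we are listening
        conn, _ = s.accept()
        with conn:
            cmd = conn.recv(1024).decode()   # e.g. a task command from the client
            conn.sendall(("OK:" + cmd).encode())

t = threading.Thread(target=server)
t.start()
ready.wait()

# client side, standing in for the user's computer at the remote site
with socket.socket() as c:
    c.connect(("127.0.0.1", state["port"]))
    c.sendall(b"RUN")
    reply = c.recv(1024).decode()
t.join()
```

The same accept/receive/acknowledge loop, extended with image streaming and task-file transfer, is the skeleton of the remote-control architecture the abstract outlines.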


A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.8 no.6 / pp.26-34 / 1999
  • This study presents an alternative method for determining an object's position, based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for the independent cameras, the position of an unknown object is determined using a sequential estimation scheme that processes data of unknown points in the 2-D image plane of each camera. This vision control method is robust and reliable, overcoming the difficulties of conventional research, such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using a computer vision system. The results show that the presented method is precise and compatible.
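
Recovering a 3-D point from its 2-D images in several cameras can be sketched with linear (DLT-style) triangulation, used here as a generic stand-in for the paper's sequential estimation with six-view parameters. The two camera matrices and the test point are made-up values:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation: each image point contributes two rows of A X = 0;
    the homogeneous 3-D point is the null vector of A."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.array(rows))
    X = vt[-1]
    return X[:3] / X[3]

def proj(P, X):
    """Project a 3-D point through camera matrix P onto its image plane."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two hypothetical calibrated cameras: one at the origin, one shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

Xtrue = np.array([0.5, 0.2, 4.0])
Xhat = triangulate(P1, P2, proj(P1, Xtrue), proj(P2, Xtrue))
```

As in the abstract, position is recovered purely from image-plane data in multiple cameras, with no explicit robot kinematic model in the loop.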


Recognizing User Engagement and Intentions based on the Annotations of an Interaction Video (상호작용 영상 주석 기반 사용자 참여도 및 의도 인식)

  • Jang, Minsu; Park, Cheonshu; Lee, Dae-Ha; Kim, Jaehong; Cho, Young-Jo
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.612-618 / 2014
  • A pattern classifier-based approach for recognizing the internal states of human participants in interactions is presented along with experimental results. The approach includes a step for collecting video recordings of human-human or human-robot interactions and subsequently analyzing the videos based on human-coded annotations. The annotations include social signals directly observed in the video recordings and the internal states of human participants indirectly inferred from those observed signals. A pattern classifier is then trained on the annotation data and tested. In our experiments on human-robot interaction, 7 video recordings were collected and annotated with 20 social signals and 7 internal states. Several experiments yielded an 84.83% recall rate for interaction engagement, 93% for concentration intention, and 81% for task comprehension level using a C4.5-based decision tree classifier.
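
The attribute-selection core of a C4.5-style decision tree is information gain: the tree splits on the annotated social signal that most reduces label entropy. The binary signals and the engagement label below are synthetic stand-ins for the paper's 20 social signals and 7 internal states:

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a discrete label vector, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(x, y):
    """Information gain of splitting labels y on discrete attribute x."""
    g = entropy(y)
    for v in np.unique(x):
        mask = x == v
        g -= mask.mean() * entropy(y[mask])
    return g

rng = np.random.default_rng(3)
signals = rng.integers(0, 2, size=(200, 3))   # e.g. gaze, speech, lean-in
engaged = signals[:, 0]                        # engagement tracks gaze here

gains = [info_gain(signals[:, j], engaged) for j in range(3)]
best = int(np.argmax(gains))                   # attribute the root node splits on
```

Recursively choosing the highest-gain attribute and partitioning the annotation data is exactly how a C4.5 tree is grown from coded interaction videos.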

The Analysis of Face Recognition Rate according to Distance and Interpolation using PCA in Surveillance System (감시카메라 시스템에서 PCA에 의한 보간법과 거리별 얼굴인식률 분석)

  • Moon, Hae-Min; Kwak, Keun-Chang; Pan, Sung-Bum
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.6 / pp.153-160 / 2011
  • Recently, the use of security surveillance systems, including CCTV, has been increasing due to the rise in terrorism and crime. At the same time, interest in face recognition at a distance using surveillance cameras has grown. Accordingly, we analyzed the performance of face recognition according to distance using PCA-based face recognition and interpolation. In this paper, we used Nearest, Bilinear, Bicubic, and Lanczos3 interpolation to interpolate the face images. As a result, we confirmed that the existing interpolation methods have little effect on the performance of PCA-based face recognition, and that performance is improved by including face images captured at each distance in the training data.
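
The PCA-based pipeline can be sketched end to end: project gallery images onto eigenfaces, then classify a distance-degraded probe (downsampled, then interpolated back up) by nearest neighbor in the eigenspace. The "faces" below are synthetic block images, and nearest-neighbour interpolation stands in for the paper's four interpolation methods:

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.normal(size=(10, 4, 4))
# 10 hypothetical 16x16 "faces" with block structure (so downsampling is mild)
faces = np.repeat(np.repeat(base, 4, axis=1), 4, axis=2)

def shrink_grow(img, k):
    """Simulate distance: downsample by k, then nearest-neighbour upsample."""
    small = img[::k, ::k]
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

gallery = faces.reshape(10, -1)
mean = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
basis = vt[:5]                                  # top 5 "eigenfaces"

def project(v):
    return basis @ (v - mean)

g_feats = np.array([project(v) for v in gallery])

# probe: face 3 seen "at a distance", then interpolated back to full size
probe = project(shrink_grow(faces[3], 2).ravel())
pred = int(np.argmin(np.linalg.norm(g_feats - probe, axis=1)))
```

Training the eigenspace on images from each distance, as the paper recommends, amounts to adding such degraded variants of each identity to `gallery` before the SVD.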

Cooperative Localization for Multiple Mobile Robots using Constraints Propagation Techniques on Intervals (제약 전파 기법을 적용한 다중 이동 로봇의 상호 협동 위치 추정)

  • Jo, Kyoung-Hwan; Jang, Choul-Soo; Lee, Ji-Hong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.3 / pp.273-283 / 2008
  • This article describes a cooperative localization technique for multiple robots that share each robot's position information. Conventional methods such as the EKF require a linearization process; consequently, they cannot guarantee that their result is a range containing the true value. In this paper, we propose a method to merge the data of redundant sensors based on constraint propagation techniques on intervals. The proposed method has the merit of guaranteeing that the result contains the true value. In particular, we apply the constraint propagation technique to fuse wheel encoders, a gyro, and an inexpensive GPS receiver. In addition, we utilize the correlation between GPS data in the common workspace to improve the localization performance for multiple robots. Simulation results show that the proposed method considerably improves the localization performance of multiple robots.
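
The guaranteed-estimation idea behind interval constraint propagation can be sketched as interval intersection: each redundant sensor bounds the same quantity, and intersecting the bounds shrinks the range without ever excluding the true value. The sensor intervals below are made-up numbers, not the paper's data:

```python
def intersect(a, b):
    """Intersection of two intervals (lo, hi). If the true value lies in
    both inputs, it is guaranteed to lie in the result -- the property
    linearization-based methods such as the EKF cannot offer."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent measurements")
    return (lo, hi)

odometry_x = (2.0, 3.5)   # x position bound from wheel encoders + gyro (m)
gps_x      = (2.8, 4.0)   # x position bound from a low-cost GPS (m)
shared_x   = (2.5, 3.0)   # constraint propagated from a cooperating robot

fused = intersect(intersect(odometry_x, gps_x), shared_x)
```

Propagating such constraints between robots is what lets shared position information tighten every robot's interval, as the abstract describes.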

A Study on the Relative Localization Algorithm for Mobile Robots using a Structured Light Technique (Structured Light 기법을 이용한 이동 로봇의 상대 위치 추정 알고리즘 연구)

  • Noh Dong-Ki; Kim Gon-Woo; Lee Beom-Hee
    • Journal of Institute of Control, Robotics and Systems / v.11 no.8 / pp.678-687 / 2005
  • This paper describes a relative localization algorithm using odometry data and consecutive local maps. The purpose of this paper is odometry error correction using area matching of two consecutive local maps. The local map is built using a sensor module with dual laser beams and a USB camera. The range data from the sensor module are measured using the structured lighting technique (active stereo method). The advantage of using this sensor module is that a local map can be obtained at once within the camera's view angle. With this advantage, we propose the AVS (Aligned View Sector) matching algorithm for correction of the pose error (translational and rotational). To evaluate the proposed algorithm, experiments were performed in a real environment.
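
The pose-correction step, recovering the rotation and translation between two consecutive local maps from matched points, can be sketched with a least-squares rigid alignment (Kabsch/Procrustes). This is a generic stand-in for the paper's AVS sector-matching algorithm; the drift angle and point set are simulated:

```python
import numpy as np

def align_2d(P, Q):
    """Rigid transform (R, t) minimising ||R @ P + t - Q|| for 2xN point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    t = cq - R @ cp
    return R, t

theta = np.deg2rad(5.0)                        # simulated odometry drift angle
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([[0.1], [0.05]])             # simulated translational drift

P = np.random.default_rng(5).uniform(-2, 2, size=(2, 30))  # map-1 features
Q = R_true @ P + t_true                                    # same features in map 2

R, t = align_2d(P, Q)
drift_deg = np.rad2deg(np.arctan2(R[1, 0], R[0, 0]))
```

Subtracting the recovered rotation and translation from the odometry estimate is precisely the translational and rotational error correction the abstract targets.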