• Title/Summary/Keyword: visual model


A novel visual servoing technique considering robot dynamics (로봇의 운동특성을 고려한 새로운 시각구동 방법)

  • 이준수;서일홍;김태원
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.410-414
    • /
    • 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested by employing a perspective-projection viewing model to estimate the relative pitching and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the in-hand camera, not over the entire workspace but along a single linear path along which the robot moves under the control of a commercially provided linear-motion function. Control actions of the camera are then approximated by fuzzy-neural networks to follow these desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented, in which a four-axis SCARA robot with a B/W CCD camera is used.

  • PDF
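
The pitch/yaw features above rest on a perspective-projection viewing model. As a minimal sketch of that idea (the focal length, object size, and foreshortening-based yaw cue here are illustrative assumptions, not the paper's actual features):

```python
import numpy as np

f = 1.0   # focal length (normalized, assumed)
L = 0.2   # true half-length of a horizontal object edge (assumed)

def project(X):
    """Pinhole projection of a 3D point X = (x, y, z) onto the image plane."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

def apparent_halfwidth(yaw, depth):
    """Projected half-width of the edge after rotating it by `yaw` (radians)."""
    tip = np.array([L * np.cos(yaw), 0.0, depth + L * np.sin(yaw)])
    ctr = np.array([0.0, 0.0, depth])
    return abs(project(tip)[0] - project(ctr)[0])

# Foreshortening: the same edge looks narrower in the image as yaw grows,
# which is the kind of cue a relative-angle feature can exploit.
w0 = apparent_halfwidth(0.0, depth=2.0)
w1 = apparent_halfwidth(np.pi / 4, depth=2.0)
print(w0 > w1)   # True
```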

Development of a Real-time Vehicle Driving Simulator

  • Kim, Hyun-Ju;Park, Min-Kyu;Lee, Min-Cheoul;You, Wan-Suk
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2001.10a
    • /
    • pp.51.2-51
    • /
    • 2001
  • A vehicle driving simulator is a virtual reality device that makes a person feel as if he or she were actually driving a vehicle. The driving simulator is effectively used for studying driver-vehicle interaction and for developing new vehicle-system concepts. The driving simulator consists of a motion platform, a motion controller, a visual and audio system, a vehicle dynamic analysis system, a vehicle operation system, etc. The vehicle dynamic analysis system supervises the overall operation of the simulator and also simulates the dynamic motion of a multi-body vehicle model in real time. In this paper, the main procedures for developing the driving simulator are classified into four parts. First, a motion platform and a motion controller generate realistic motion using a hydraulically driven six-degree-of-freedom Stewart platform. Second, a visual system generates high-fidelity visual scenes, which are displayed on a screen ...

  • PDF

New Method of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Morita, Masahiko;Shigeru, Uchikado;Yasuhiro, Osa
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2002.10a
    • /
    • pp.41.4-41
    • /
    • 2002
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. It is assumed that the robot arm is calibrated and the camera is uncalibrated. We consider two coordinate systems, the world coordinate system and the camera coordinate system, and use a pinhole camera model for the camera. First of all, the essential notions are presented: epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint. These play an important role in designing the visual servoing scheme in the later chapters. A statement of the problem is given. Provided two a priori...

  • PDF
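
The epipolar constraint named above says a true correspondence (x₁, x₂) satisfies x₂ᵀ E x₁ = 0. A minimal numerical check, assuming normalized coordinates and a pure horizontal translation between the two views (the stereo geometry here is an illustrative assumption):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_residual(E, x1, x2):
    """Epipolar constraint residual x2^T E x1 (zero for a true correspondence)."""
    return float(x2 @ E @ x1)

# Camera 1 at the origin, camera 2 translated by t; with identity rotation
# the essential matrix reduces to E = [t]_x.
t = np.array([1.0, 0.0, 0.0])
E = skew(t)

X = np.array([0.5, 0.2, 4.0])        # a 3D point in the world frame
x1 = X / X[2]                        # normalized image coords in camera 1
x2 = (X - t) / X[2]                  # the same point seen from camera 2
print(epipolar_residual(E, x1, x2))  # ~0 for a correct match
```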

Visual Servoing Control of a Docking System for an Autonomous Underwater Vehicle (AUV)

  • Lee, Pan-Mook;Jeon, Bong-Hwan;Lee, Chong-Moo;Hong, Young-Hwa;Oh, Jun-Ho
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2002.10a
    • /
    • pp.109.5-109
    • /
    • 2002
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels that autonomously investigate sea environments, oceanography, and deep-sea resources. Docking systems are required to increase the capability of AUVs to recharge their batteries and to transmit data in real time underwater. This paper presents a visual servo control system for an AUV to dock into an underwater station with a camera. To build the visual servo control system, this paper derives an optical flow model of a camera mounted on an AUV, where a CCD camera is installed at the nose center of the AUV to monitor the docking condition. This paper combines the optical flow equation of the camera with the AUV's equation o...

  • PDF
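
The paper derives its own optical flow model for the nose-mounted camera; as a generic illustration only, the underlying brightness-constancy constraint I_x·u + I_y·v + I_t = 0 can be solved for a patch by least squares (a textbook Lucas-Kanade sketch on a synthetic ramp image, not the paper's model):

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Estimate one (u, v) flow vector for a whole patch by least squares."""
    Iy, Ix = np.gradient(I0)          # spatial gradients (rows = y, cols = x)
    It = I1 - I0                      # temporal gradient between frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic patch: a smooth intensity ramp shifted by +1 pixel in x.
x = np.arange(16, dtype=float)
I0 = np.tile(x, (16, 1))              # intensity increases along x
I1 = np.tile(x - 1.0, (16, 1))        # same scene moved +1 px in x
print(lucas_kanade_patch(I0, I1))
```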

Localization of AUV Using Visual Shape Information of Underwater Structures (수중 구조물 형상의 영상 정보를 이용한 수중로봇 위치인식 기법)

  • Jung, Jongdae;Choi, Suyoung;Choi, Hyun-Taek;Myung, Hyun
    • Journal of Ocean Engineering and Technology
    • /
    • v.29 no.5
    • /
    • pp.392-397
    • /
    • 2015
  • An autonomous underwater vehicle (AUV) can perform flexible operations even in complex underwater environments because of its autonomy. Localization is one of the key components of this autonomous navigation. Because the inertial navigation system of an AUV suffers from drift, observing fixed objects in an inertial reference system can enhance the localization performance. In this paper, we propose a method of AUV localization using visual measurements of underwater structures. A camera measurement model that emulates the camera’s observations of underwater structures is designed in a particle filtering framework. Then, the particle weight is updated based on the extracted visual information of the underwater structures. The proposed method is validated based on the results of experiments performed in a structured basin environment.
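
The particle-weight update described above can be sketched in one dimension: each particle is a candidate AUV pose, and its weight grows when the predicted observation of a known structure matches the actual one. The range measurement, landmark position, and noise level here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

structure_pos = 10.0   # known landmark position along one axis (assumed)
true_pose = 4.0        # true AUV position (unknown to the filter)
sigma = 0.5            # measurement noise std (assumed)

particles = rng.uniform(0.0, 10.0, size=5000)    # candidate poses
weights = np.full(particles.shape, 1.0 / len(particles))

z = structure_pos - true_pose          # observed range to the structure
z_pred = structure_pos - particles     # each particle's predicted range

# Weight update: Gaussian likelihood of the observation given each particle.
weights *= np.exp(-0.5 * ((z - z_pred) / sigma) ** 2)
weights /= weights.sum()

estimate = np.sum(weights * particles)  # weighted-mean pose estimate
print(estimate)
```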

Development of a Simulation Program for Virtual Laser Machining (가상 레이저가공 시뮬레이션 프로그램 구축)

  • Lee Ho Yong;Lim Joong Yeon;Shin Kui Sung;Yoon Kyung Koo;Whang Kyung Hyun;Bang Se Yoon
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.7 s.172
    • /
    • pp.54-61
    • /
    • 2005
  • A simulator for virtual laser machining is developed to help users understand and predict the effects of machining parameters on the final machined results. The main program is based on a model of polymer ablation with short-pulse excimer lasers. Version I of the simulator is built using Visual Fortran so that the user can work in a visual environment such as Windows on a PC, where the important machining parameters can be entered via dialog boxes and the calculated results for machined shape, beam fluence, and temperature distribution can be plotted in 2-D graphics windows. Version II of the simulator is built using HTML, CGI, and Java, allowing the user to control the input parameters and to view the result plots over the Internet.

Characteristics of Corrective Saccadic Eye Movement with E.O.G. (E.O.G.를 이용한 Corrective Saccadic 안구운동 특성)

  • 김윤수;박상희
    • Journal of Biomedical Engineering Research
    • /
    • v.2 no.1
    • /
    • pp.21-30
    • /
    • 1981
  • In this study, eye movements to targets beyond 20° from the fixation point were measured with E.O.G., with the following results. (1) When the eyes turn toward targets of more than 20° eccentricity, the first saccadic eye movement falls short of the target. The presence of the target's image off the fovea (a visual error signal) after such an undershoot elicits, after a short interval, corrective saccadic eye movements (usually one) that place the image of the target on the fovea. (2) There are different retinal programming modes for eye movements to targets within and beyond 20° of the fixation point. (3) The saccadic system, preparing the direction and amplitude of the eye movement, completes the corrective saccadic eye movements. (4) The distributions of latency and intersaccadic interval (I.S.I.) are frequently multimodal, with a separation between modes of 25 ms. (5) There are two types of saccadic eye movements for double-step targets, suggesting that the visual information is sampled stochastically. (6) A new model of the saccadic system is required that includes the dissociation of visual functions dependent on retinal eccentricity.

  • PDF

An Algorithm to reconstruct 3D Feet Using Visual Hull (Visual hull을 이용한 3차원 발 복원 알고리즘)

  • Lee, Jae-Kwang;Park, Chang-Joon;Lee, In-Ho
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.279-280
    • /
    • 2006
  • This paper describes a method for reconstructing 3D feet in a real-time, vision-based, markerless motion capture system. The proposed method is based on the visual hull and model fitting. For real-time computation, a special lookup table is employed. The method is implemented and tested using three CCD cameras, and preliminary results are presented.

  • PDF
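
The visual-hull idea above can be sketched with a toy voxel-carving loop: a voxel survives only if it projects inside every camera's silhouette. The paper uses three calibrated CCD cameras and a lookup table; the axis-aligned orthographic views below are a simplifying assumption:

```python
import numpy as np

def carve(silhouettes, n):
    """Keep a voxel only if it falls inside every (orthographic) silhouette."""
    hull = np.ones((n, n, n), dtype=bool)
    sx, sy, sz = silhouettes           # views along the x, y, and z axes
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if not (sx[j, k] and sy[i, k] and sz[i, j]):
                    hull[i, j, k] = False
    return hull

# Ground truth: a small cube occupying indices 2..5 of an 8^3 grid.
n = 8
obj = np.zeros((n, n, n), dtype=bool)
obj[2:6, 2:6, 2:6] = True
sils = (obj.any(axis=0), obj.any(axis=1), obj.any(axis=2))

hull = carve(sils, n)
print(hull.sum())   # hull recovers the 64-voxel cube exactly
```

For a convex object like this cube the hull equals the object; in general the visual hull is only an upper bound on the true shape, which is why the paper follows carving with model fitting.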

Audio-Visual Localization and Tracking of Sound Sources Using Kalman Filter (칼만 필터를 이용한 시청각 음원 정위 및 추적)

  • Song, Min-Gyu;Kim, Jin-Young;Na, Seung-You
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.4
    • /
    • pp.519-525
    • /
    • 2007
  • With high interest in robot technology and its applications, research on artificial auditory systems for robots is very active. In this paper we discuss sound source localization and tracking based on audio-visual information. For video signals we use face detection based on a skin color model. Binaural-based DOA estimation is used as the audio information. We integrate both sources of information using a Kalman filter. The experimental results show that audio-visual person tracking is useful, especially when some of the information is not observed.
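
The Kalman-filter fusion described above can be sketched in one dimension: a source azimuth is tracked by sequentially filtering an audio (DOA) measurement and a visual (face-detection) measurement, with the noisier channel automatically weighted less. All noise values here are illustrative assumptions:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update for a direct angle observation."""
    K = P / (P + R)                  # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 100.0                    # initial azimuth estimate (deg), variance
Q = 0.1                              # process noise per step (assumed)
R_audio, R_vision = 25.0, 4.0        # audio DOA noisier than the face detector

true_azimuth = 30.0
rng = np.random.default_rng(1)
for _ in range(50):
    P += Q                                        # predict (static source)
    z_a = true_azimuth + rng.normal(0, np.sqrt(R_audio))
    x, P = kalman_update(x, P, z_a, R_audio)      # audio update
    z_v = true_azimuth + rng.normal(0, np.sqrt(R_vision))
    x, P = kalman_update(x, P, z_v, R_vision)     # visual update
print(round(x, 1))
```

When one channel drops out (e.g., the face leaves the frame), its update is simply skipped and the filter coasts on the other, which matches the paper's observation that fusion helps most when some information is unobserved.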

Airport Punctuality Analysis Using Multi-Dimensional Visual Analysis Method (다차원 시각적 분석방법을 이용한 공항 정시운항 분석에 관한 연구)

  • Cho, Jae-Hee;Li, De-Kui
    • Journal of Information Technology Services
    • /
    • v.10 no.1
    • /
    • pp.167-176
    • /
    • 2011
  • Punctuality is one of the key performance indicators of the airline industry and an important service differentiator especially for valuable customers. In addition, improvement on time performance can help achieve cost saving, i.e. the cost of airline report, which could range from 0.6% to 2.9% of their operating revenues. Therefore efficient management of punctuality is crucial for the industry. This study overcomes the limitations of existing analyses on punctuality and develops a multi-dimensional model for airport punctuality analysis. In addition to analysis of airport punctuality, visual analysis is also proposed in the study. Data was collected from actual flight data of Incheon International Airport. Using the new visual analysis method, the study discovered the pattern of the punctuality that has never studied before.