• Title/Summary/Keywords: Robot fish control

Search results: 28 (processing time: 0.023 s)

Development of the Fishbot Using Haptic Technology

  • 이영대;강정진;문찬우
    • 한국인터넷방송통신학회논문지, Vol. 10 No. 4, pp. 77-82, 2010
  • This study develops FishBot, a haptic device built for a Virtual Fishing System (VFS). FishBot is a three-degree-of-freedom robot consisting of an XY table with a reel-wheel axis mounted on it. To imitate the motion of a fish, the XY axes are driven in a motion (position) control mode whose torque limit is varied on the fly, while the wheel axis is driven in force-control mode. The fishing rod is a real rod fitted with an LED, and a tuned web camera recognizes the rod's position. The completed FishBot varies its position and velocity like a fish, controls the hooking force through a DAC, and, by observing the position of the rod tip, is linked into the Virtual Reality System.
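The variable-torque-limit position control described above can be sketched as a PD position loop whose output is saturated at an adjustable limit. This is an illustrative reconstruction, not the authors' implementation; the gains, the limit value, and the unit-inertia plant model are all assumptions.

```python
def saturated_pd_step(pos, vel, target, torque_limit, kp=4.0, kd=4.0, dt=0.01):
    """One control step for a FishBot-style XY axis: an ordinary PD
    position command, clamped to a torque limit that the game logic can
    lower so the axis yields like a fish pulling against the line."""
    torque = kp * (target - pos) - kd * vel
    torque = max(-torque_limit, min(torque_limit, torque))  # variable limit
    vel += torque * dt                                      # unit-inertia plant
    pos += vel * dt
    return pos, vel

# Drive the axis toward target 1.0 under a limit of 2.0 for 15 simulated seconds.
pos, vel = 0.0, 0.0
for _ in range(1500):
    pos, vel = saturated_pd_step(pos, vel, target=1.0, torque_limit=2.0)
```

Lowering `torque_limit` mid-motion makes the same controller trail the target, which is the yielding behavior the paper exploits for the fish feel.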

전자기 구동 유영 마이크로로봇 (Swimming Microrobot Actuated by External Magnetic Field)

  • 변동학;김준영;백승만;최현철;박종오;박석호
    • 대한기계학회논문집A, Vol. 33 No. 11, pp. 1300-1305, 2009
  • Various electromagnetic actuation (EMA) methods have been proposed for actuating microrobots. The advantage of EMA is that it can drive a microrobot wirelessly, and for this reason many researchers have focused on EMA-driven microrobots. This paper proposes a swimming microrobot driven by an external alternating magnetic field generated by two pairs of Helmholtz coils. The microrobot has a fish-like shape and consists of a buoyant robot body, a permanent magnet, and a fin. The fin is directly linked to the permanent magnet, and the magnet is swung by the alternating magnetic field, which produces the propulsion and steering power of the robot. In this paper, we first designed the locomotive mechanism of the microrobot by EMA. Second, we set up the control system. Finally, we demonstrated the swimming robot and evaluated its performance through experiments.
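The field generation can be sketched as follows: two orthogonal Helmholtz coil pairs produce a uniform field whose direction is set by the ratio of the two currents, so oscillating that direction about the desired heading swings the magnet-fin assembly. The drive frequency, swing amplitude, and the proportionality between current and field are assumptions for illustration.

```python
import math

def helmholtz_currents(heading_deg, t, f=5.0, swing_deg=30.0, b0=1.0):
    """Currents (taken as proportional to field components) for two
    orthogonal Helmholtz coil pairs, so the net field oscillates
    +/- swing_deg about the desired heading at frequency f; the permanent
    magnet, and the fin fixed to it, swings with the field."""
    theta = math.radians(heading_deg + swing_deg * math.sin(2 * math.pi * f * t))
    return b0 * math.cos(theta), b0 * math.sin(theta)  # (x-pair, y-pair)

# At t = 0 the field points exactly along the commanded heading.
bx, by = helmholtz_currents(0.0, 0.0)
```

Steering then amounts to slewing `heading_deg` while the sinusoidal term keeps producing thrust.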

Neural Network Based Distortion Correction of Wide Angle Lens

  • 정규원
    • 제어로봇시스템학회:학술대회논문집, 1996 Korea Automatic Control Conference (domestic papers); POSTECH, Pohang; 24-26 Oct. 1996, pp. 299-301
  • Since a standard lens has a narrow viewing angle, a fish-eye lens can be used to obtain a wide viewing angle for a robot vision system. Despite this advantage, the image through such a lens has variable resolution: information at the center of the lens is of high resolution, while information at the periphery is of low resolution. Owing to this difference, the variable-resolution image should be transformed into a uniform-resolution image in order to determine the positions of objects in the image. In this work, a correction method for the distorted image is presented and its performance is analyzed. Furthermore, a camera with a fish-eye lens can be used to determine real-world coordinates. The performance is shown through experiments.
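The paper learns the correction with a neural network; for orientation, the closed-form correction under an ideal equidistant fisheye model (r = f·θ) mapped to a perspective projection (r = f·tan θ) looks like the following. The equidistant model and the focal length are assumptions, not the authors' lens or method.

```python
import math

def undistort_radius(r_fish, f):
    """Map a radial distance in an equidistant fisheye image (r = f*theta)
    to the radius of an ideal perspective projection (r = f*tan(theta)).
    Near the center the two nearly agree; toward the periphery the
    perspective radius grows much faster, which is the distortion."""
    theta = r_fish / f          # ray angle from the optical axis
    return f * math.tan(theta)
```

A learned network effectively approximates this mapping (plus the real lens's deviations from the ideal model) from calibration samples.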


An Observation System of Hemisphere Space with Fish eye Image and Head Motion Detector

  • Sudo, Yoshie;Hashimoto, Hiroshi;Ishii, Chiharu
    • 제어로봇시스템학회:학술대회논문집, ICCAS 2003, pp. 663-668, 2003
  • This paper presents a new observation system that is useful for observing the scene from a remotely controlled robot's vision. The system is composed of a motionless camera and a head motion detector with a motion sensor. The motionless camera has a fish-eye lens and observes a hemispherical space. The head motion detector, with its motion sensor, defines an arbitrary subspace of the hemispherical space seen through the fish-eye lens: by appropriately processing the angular information from the motion sensor, the direction of the face is estimated. However, since the fish-eye image is distorted, it is unclear. The partial domain of the fish-eye image selected by the head motion is therefore converted into a perspective image. Because this conversion enlarges the original image spatially and is based on discrete data, crevices are generated in the converted image. To solve this problem, interpolation based on image intensity is performed for the crevices in the converted image (the space problem). This paper provides experimental results for the proposed observation system with the head motion detector and perspective image conversion using the proposed conversion and interpolation methods, and the adequacy of the proposed techniques and points for improvement are discussed.
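A common alternative to interpolating crevices after a forward mapping is to run the mapping in the inverse direction: for each pixel of the target perspective view, look up the corresponding fisheye position and sample it (with interpolation), so no gaps arise. This sketch assumes an equidistant fisheye model and arbitrary focal lengths; the paper itself forward-maps and then fills the crevices.

```python
import math

def fisheye_pixel_for(view_x, view_y, f_persp, f_fish):
    """Inverse mapping for perspective re-projection: given a pixel of the
    desired perspective view (centered coordinates), return the source
    position in the equidistant fisheye image (r = f*theta)."""
    r_p = math.hypot(view_x, view_y)
    if r_p == 0.0:
        return 0.0, 0.0                  # optical axis maps to the center
    theta = math.atan2(r_p, f_persp)     # ray angle from the optical axis
    r_f = f_fish * theta                 # equidistant radius in the fisheye
    return view_x * r_f / r_p, view_y * r_f / r_p
```

Iterating this over every output pixel and bilinearly sampling the fisheye image produces a gap-free perspective view of the selected subspace.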


Multi-robot Formation based on Object Tracking Method using Fisheye Images

  • 최윤원;김종욱;최정원;이석규
    • 제어로봇시스템학회논문지, Vol. 19 No. 6, pp. 547-554, 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multi-robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, to enlarge the viewing angle. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the formation of the robots based on their moving directions and velocities, which are obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.
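The Lucas-Kanade step applied to each detected robot region can be sketched as a single least-squares solve over the window's image gradients. This is a minimal, single-scale version in NumPy (the paper's pipeline with SURF detection and fisheye imagery is not reproduced); the synthetic blob and its one-pixel shift are made up for the check.

```python
import numpy as np

def lucas_kanade_window(img0, img1):
    """Estimate one (dx, dy) translation for a small window by the
    Lucas-Kanade least-squares step: stack the spatial gradients of the
    first frame and solve A d = -It for the flow d."""
    ix = np.gradient(img0, axis=1)          # horizontal gradient
    iy = np.gradient(img0, axis=0)          # vertical gradient
    it = img1 - img0                        # temporal difference
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(a, -it.ravel(), rcond=None)
    return dx, dy

# Synthetic check: a smooth blob shifted by one pixel along x.
y, x = np.mgrid[0:32, 0:32].astype(float)
blob0 = np.exp(-((x - 15.0) ** 2 + (y - 15.0) ** 2) / 20.0)
blob1 = np.exp(-((x - 16.0) ** 2 + (y - 15.0) ** 2) / 20.0)
dx, dy = lucas_kanade_window(blob0, blob1)
```

Per robot region, (dx, dy) over the frame interval gives the moving direction and speed the formation controller consumes.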

Development of Artificial Lateral Line Sensor for Flow Velocity and Angle Measurements

  • 김진현
    • 센서학회지, Vol. 30 No. 1, pp. 30-35, 2021
  • To operate an underwater robot in an environment with fluid flow, it is necessary to recognize the speed and direction of the fluid and implement motion control based on them. Fish have a lateral line that performs this function. In this study, to develop an artificial lateral line sensor that mimics a fish's, we devised a method to measure the flow speed and the incident angle of the fluid using a pressure sensor. Several experiments were conducted, and the results confirmed clear trends with respect to changes in the flow speed and the incident angle of the fluid. It is believed that additional research can aid the development of an artificial lateral line sensor.
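A toy version of such a pressure-based readout: on a ring of pressure taps, the tap with the highest reading faces the flow (the stagnation point), and Bernoulli's relation converts its dynamic pressure to a speed. This is a generic sketch, not the paper's calibration; the tap layout, static pressure, and density are assumptions.

```python
import math

def flow_from_pressures(angles_deg, pressures, p_static=0.0, rho=1000.0):
    """Estimate flow incident angle and speed from a ring of pressure taps:
    the maximum reading marks the stagnation point, where Bernoulli gives
    p_max - p_static = 0.5 * rho * v**2, i.e. v = sqrt(2*dp/rho)."""
    i = max(range(len(pressures)), key=lambda k: pressures[k])
    v = math.sqrt(2.0 * (pressures[i] - p_static) / rho)
    return angles_deg[i], v

# Four taps in water; the 0-degree tap sees 125 Pa of dynamic pressure.
angle, v = flow_from_pressures([0, 90, 180, 270], [125.0, 10.0, -40.0, 10.0])
```

Angular resolution here is limited by the tap spacing; interpolating between neighboring taps would refine the angle estimate.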

Unity Engine-based Underwater Robot 3D Positioning Program Implementation

  • 최철호;김종훈;김준영;박준;박성욱;정세훈;심춘보
    • 스마트미디어저널, Vol. 11 No. 9, pp. 64-74, 2022
  • Many studies on underwater robots for exploiting marine resources are in progress. Unlike ordinary drones, however, an underwater robot moves through water rather than air, which makes localization difficult. Existing monitoring and positioning programs for underwater robots, built to check underwater position, were intended for large-scale spaces and therefore have difficulty with localization and monitoring in small spaces. This paper proposes a 3D positioning program for continuous monitoring and command delivery in small spaces. The proposed program consists of a multi-dimensional positioning monitoring function that shows depth along with the underwater robot's position, and a function for controlling the movement path through a 3D view. In the performance evaluation, the underwater robot was rendered to match the actual water tank, allowing monitoring from various angles in the 3D view, and the difference between the set path and the actual position averaged within 6.44 m, an error within the assumed range.
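The reported average deviation between the set path and the actual position can be computed as a mean Euclidean error over paired samples. This is a generic sketch; the sample-by-sample pairing of waypoints to logged positions is an assumption about how the figure was obtained.

```python
import math

def mean_path_error(planned, actual):
    """Mean Euclidean distance between commanded 3D waypoints and the
    logged robot positions, paired sample by sample."""
    assert len(planned) == len(actual), "paths must have equal length"
    return sum(math.dist(p, a) for p, a in zip(planned, actual)) / len(planned)

# Two waypoints, each logged 4 m off in y: mean error is 4.0.
err = mean_path_error([(0, 0, 0), (3, 0, 0)], [(0, 4, 0), (3, 4, 0)])
```

In the Unity program this number would be computed from the path set in the 3D view against positions reported by the robot.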

3D Omni-directional Vision SLAM Using a Fisheye Lens and a Laser Scanner

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지, Vol. 21 No. 7, pp. 634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, or sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are larger and slow to compute depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.
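The fusion step requires expressing laser points and image outlines in a common frame; one way to sketch this is to project a laser point into the fisheye image. The equidistant lens model, focal length, and principal point below are assumptions, and the fixed camera-to-scanner offset is folded into the point coordinates for simplicity.

```python
import math

def laser_to_fisheye(x, y, z, f=300.0, cx=320.0, cy=240.0):
    """Project a laser-scanner point (camera frame, optical axis along +z,
    camera facing downwards) into an equidistant fisheye image
    (r = f*theta), so it can be matched against the obstacle outline
    detected in the omni-directional image."""
    theta = math.atan2(math.hypot(x, y), z)  # angle off the optical axis
    r = f * theta                            # equidistant radial distance
    phi = math.atan2(y, x)                   # azimuth around the axis
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point 45 degrees off-axis lands well away from the principal point.
u, v = laser_to_fisheye(1.0, 0.0, 1.0)
```

Pairs of projected laser points and image outline points matched this way form the fusion points from which the 3D map is built.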