• Title/Summary/Keyword: marker vision

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun; Noh, Kwang Mo; Park, Joon Gul
    • Journal of Biosystems Engineering / v.39 no.2 / pp.134-141 / 2014
  • Purposes: This study aimed to validate, at the laboratory level, a Robotic Smart Work System that can provide better working conditions and high productivity in unstructured environments such as the bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system. Methods: For a Robotic Smart Work System for fruit harvesting and cultivation management in agriculture, a vision-based tele-operating system and 3-D position information are key elements. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system using a low-cost omni camera and a laser marker system in a lab environment. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished even with a short time delay (30 ms to 100 ms). Although automated conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition is more efficient in unstructured environments that require target selection and judgment. Conclusions: The system increased work efficiency and stability by incorporating ancillary intelligence as well as the user's experience and know-how. In addition, senior and female workers can operate the system easily because it reduces labor and minimizes user fatigue.

Design and Fabrication of Multi-rotor system for Vision based Autonomous Landing (영상 기반 자동 착륙용 멀티로터 시스템 설계 및 개발)

  • Kim, Gyou-Beom; Song, Seung-Hwa; Yoon, Kwang-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.6 / pp.141-146 / 2012
  • This paper introduces the development of a multi-rotor system and a vision-based autonomous landing system. The multi-rotor platform is modeled as a rigid body using the Newton-Euler formulation, and is simulated and tuned with an LQR control algorithm. The vision-based autonomous landing system uses a single camera mounted on the multi-rotor. An augmented-reality algorithm is used for marker detection, and the autonomous landing code is tested with the GCS for precision landing.
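
As context for the LQR tuning mentioned above, here is a minimal sketch of computing an LQR state-feedback gain for a toy linearized altitude model; the matrices, weights, and double-integrator model are illustrative assumptions, not the paper's Newton-Euler model.

```python
# Minimal LQR gain computation for a toy linearized hover/altitude model.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator altitude model: state = [z, z_dot], input = net thrust accel.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([10.0, 1.0])   # state weights (tuning knobs)
R = np.array([[0.1]])      # input weight

P = solve_continuous_are(A, B, Q, R)   # solve the continuous-time Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal feedback gain, u = -K x

x = np.array([0.5, 0.0])               # e.g., 0.5 m above the landing marker, at rest
u = -K @ x                             # commanded thrust correction
print("LQR gain:", K, "control:", u)
```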

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon; Oh, Hyon-Dong; Huh, Sung-Sik; Park, Bong-Gyun; Ahn, Jong-Sun; Shim, Hyun-Chul; Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.12 / pp.1209-1216 / 2009
  • This paper presents the pose estimation of a small UAV utilizing visual information from low cost cameras installed indoor. To overcome the limitation of the outside flight experiment, the indoor flight test environment based on multi-camera systems is proposed. Computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain an accurate position and pose estimation for the small UAV. This paper finishes with several experiment results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
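
A minimal Kalman-filter predict/update skeleton in the spirit of the estimator described above, assuming a constant-velocity model and position-only measurements from the cameras; the paper's extended Kalman filter additionally estimates attitude, and all matrices and noise levels here are illustrative assumptions.

```python
# Kalman-filter skeleton fusing marker positions triangulated from multiple cameras.
import numpy as np

dt = 0.02                                     # assumed 50 Hz camera update
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity motion model
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # cameras observe 3-D position only
Q = 1e-3 * np.eye(6)                          # process noise
R = 1e-2 * np.eye(3)                          # measurement noise

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a triangulated marker position z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)                 # state = [position, velocity]
x, P = kf_step(x, P, np.array([0.1, 0.2, 1.5]))
print(x[:3])
```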

Posture Stabilization Control for Mobile Robot using Marker Recognition and Hybrid Visual Servoing (마커인식과 혼합 비주얼 서보잉 기법을 통한 이동로봇의 자세 안정화 제어)

  • Lee, Sung-Goo; Kwon, Ji-Wook; Hong, Suk-Kyo; Chwa, Dong-Kyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.8 / pp.1577-1585 / 2011
  • This paper proposes a posture stabilization control algorithm for a wheeled mobile robot using a hybrid visual servo control method that combines position-based and image-based visual servoing (PBVS and IBVS). To overcome the chattering observed in previous studies, which used a simple threshold-based switching function, the proposed hybrid visual servo control law introduces a fusion function based on a blending function, eliminating the chattering problem and rapid motions of the mobile robot. In addition, unlike previous visual servo control laws based on linear control methods, the nonlinearity of the wheeled mobile robot is taken into account to improve the performance of the control law. The proposed posture stabilization control law using hybrid visual servoing is verified by theoretical analysis, simulations, and experimental results.
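
A minimal sketch of the blending idea, assuming a sigmoid weight on image-space error is used to fuse PBVS and IBVS velocity commands instead of a hard threshold switch; the weight shape, gains, and command layout are assumptions, not the paper's fusion function.

```python
# Smoothly blend PBVS and IBVS commands to avoid chattering at a hard switch.
import numpy as np

def blend_weight(e_img, e0=50.0, k=0.1):
    """Smooth weight in [0, 1]; a large image error favors IBVS."""
    return 1.0 / (1.0 + np.exp(-k * (e_img - e0)))

def hybrid_control(v_pbvs, v_ibvs, e_img):
    w = blend_weight(e_img)
    return (1.0 - w) * v_pbvs + w * v_ibvs   # fused velocity command

v_pbvs = np.array([0.10, 0.02])              # [linear, angular] from PBVS
v_ibvs = np.array([0.15, -0.05])             # [linear, angular] from IBVS
print(hybrid_control(v_pbvs, v_ibvs, e_img=80.0))
```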

Small Marker Detection with Attention Model in Robotic Applications (로봇시스템에서 작은 마커 인식을 하기 위한 사물 감지 어텐션 모델)

  • Kim, Minjae; Moon, Hyungpil
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.425-430 / 2022
  • As robots become one of the mainstream elements of digital transformation, robots with machine vision are a major area of study, providing the ability to inspect what a robot sees and to make decisions based on it. However, finding a small object in an image is difficult, mainly because most visual recognition networks are convolutional neural networks that consider only local features. We therefore build a model that considers global features as well as local ones. In this paper, we propose a method for detecting a small marker on an object using deep learning, with an algorithm that captures global features by combining the Transformer's self-attention mechanism with a convolutional neural network. We suggest a self-attention model with a new definition of Query, Key, and Value so that the model learns global features, and a simplified formulation that removes the position vector and classification token, which make the model heavy and slow. Finally, we show that our model achieves a higher mAP than the state-of-the-art model YOLOR.
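
A toy single-head self-attention pass over flattened CNN feature-map tokens, with no positional embedding and no classification token, to illustrate the kind of simplification the abstract describes; the projection sizes and random weights are placeholders, not the paper's model.

```python
# Global mixing of CNN feature-map tokens via simplified self-attention.
import numpy as np

def self_attention(feat, d_k=32, rng=np.random.default_rng(0)):
    # feat: (H*W, C) tokens flattened from a convolutional feature map
    n, c = feat.shape
    Wq = rng.standard_normal((c, d_k)) / np.sqrt(c)
    Wk = rng.standard_normal((c, d_k)) / np.sqrt(c)
    Wv = rng.standard_normal((c, c)) / np.sqrt(c)
    Q, K, V = feat @ Wq, feat @ Wk, feat @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                   # global pairwise affinities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over all tokens
    return attn @ V                                   # globally mixed features

tokens = np.random.default_rng(1).standard_normal((14 * 14, 64))
print(self_attention(tokens).shape)                   # (196, 64)
```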

Diffractive Alignment of Dual Display Panels

  • Shin-Woong Park; Junghwan Park; Hwi Kim
    • Current Optics and Photonics / v.8 no.1 / pp.72-79 / 2024
  • Recent flat-panel displays have become increasingly complicated in order to support multiple display functions. In particular, the multilayered architectures of next-generation displays make precise three-dimensional alignment of multiple panels a challenge. In this paper, a diffractive optical alignment marker is proposed to address the problem of three-dimensionally aligning distant dual panels separated beyond the depth of focus of a vision camera. The diffractive marker is effective for analyzing the positional correlation of the distant dual panels. The feasibility of diffractive alignment in multilayer display fabrication is demonstrated with a numerical simulation and a proof-of-concept experiment.
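
For context on the sort of scalar-diffraction simulation such an analysis might involve, here is a minimal angular-spectrum free-space propagation sketch; the wavelength, pixel pitch, panel gap, and square marker aperture are illustrative assumptions, not the paper's setup.

```python
# Propagate a marker aperture across a small gap with the angular-spectrum method.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(kz2, 0.0)))
    H[kz2 < 0] = 0.0                           # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

aperture = np.zeros((256, 256))
aperture[120:136, 120:136] = 1.0               # square marker on the rear panel
out = angular_spectrum(aperture, wavelength=532e-9, pitch=8e-6, z=2e-3)
print(np.abs(out).max())
```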

Vision-based Human-Robot Motion Transfer in Tangible Meeting Space (실감만남 공간에서의 비전 센서 기반의 사람-로봇간 운동 정보 전달에 관한 연구)

  • Choi, Yu-Kyung; Ra, Syun-Kwon; Kim, Soo-Whan; Kim, Chang-Hwan; Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.2 no.2 / pp.143-151 / 2007
  • This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured in a remote space. Our method is functionally divided into two parts: capturing human motion and adapting it to the robot. In the capturing part, we propose a modified metaball potential function for real-time performance and high accuracy. In the adapting part, we suggest a geometric scaling method to resolve the structural differences between a human and a robot. With our method, we have implemented a tangible interface and demonstrated its speed and accuracy through tests.
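
A toy version of the geometric-scaling step described above, assuming the captured wrist target is simply rescaled about the shoulder by the ratio of robot to human arm lengths; the lengths and coordinates are made-up illustrations, not the paper's detailed method.

```python
# Retarget a captured human wrist position to a shorter robot arm by scaling
# the shoulder-to-wrist vector with the ratio of total link lengths.
import numpy as np

human_shoulder = np.array([0.0, 0.0, 1.4])
human_wrist    = np.array([0.35, 0.10, 1.1])   # captured from vision
human_arm_len  = 0.60                          # upper arm + forearm (m)
robot_arm_len  = 0.45

scale = robot_arm_len / human_arm_len
robot_wrist = human_shoulder + scale * (human_wrist - human_shoulder)
print(robot_wrist)                             # target passed to the robot's IK
```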

Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle (자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발)

  • Rho, Kwanghyun; Steux, Bruno
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.19-25 / 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of Ecole des Mines de Paris in France. A robust lane detection algorithm was developed to detect different types of lane markers in images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data-fusion applications, especially in cars, was used to implement the lane detection and lateral control. The lateral control was tested on urban roads in Paris and demonstrated to the public during the IEEE Intelligent Vehicle Symposium 2002, where over 100 people experienced the automatic lateral control. The demo vehicle ran stably at speeds of 130 km/h on straight roads and 50 km/h on roads with high curvature.
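
A minimal Stanley-style steering sketch of the kind of lateral control a detected lane could drive, computed from the lateral offset and heading error relative to the lane; the gains, saturation limits, and control form are assumptions, not the controller used in the paper.

```python
# Steering angle from lane-relative lateral offset and heading error.
import numpy as np

def lateral_control(lateral_offset, heading_error, speed, k_y=0.8, k_psi=1.2):
    # lateral_offset [m]: distance from the lane center at the look-ahead point
    # heading_error [rad]: angle between vehicle heading and lane tangent
    steer = k_psi * heading_error + np.arctan2(k_y * lateral_offset,
                                               max(speed, 1.0))
    return float(np.clip(steer, -0.5, 0.5))    # steering angle in rad

print(lateral_control(lateral_offset=0.4, heading_error=0.02, speed=36.0))
```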

Restoration of Realtime Three-Dimension Positions Using PSD Sensor (PSD센서를 이용한 실시간 3차원 위치의 복원)

  • Choi, Hun-Il; Jo, Yong-Jun; Ryu, Young-Kee
    • Proceedings of the KIEE Conference / 2003.11c / pp.507-510 / 2003
  • In this paper, an optical sensor system using PSDs (position sensitive detectors) is proposed to obtain the three-dimensional positions of moving markers attached to the human body. To find the coordinates of a moving marker with a stereo vision system, two different sight rays to the marker are required; these are usually acquired with two optical sensors synchronized in time. The PSD sensor is used to measure the position of incident light in real time. To obtain the three-dimensional position of the light source on a moving marker, a conventional camera calibration method is used. In this research, we realized a low-cost motion capture system. The proposed system shows high three-dimensional measurement accuracy and a fast sampling rate.
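
A minimal two-ray triangulation sketch of the stereo step described above: the marker's 3-D position is taken as the least-squares point closest to the two calibrated sight rays. The sensor positions and ray directions are example values, not calibration data from the paper.

```python
# Least-squares intersection of two sight rays x = p_i + t_i * d_i (d_i unit vectors).
import numpy as np

def triangulate(p1, d1, p2, d2):
    def proj(d):                      # projector onto the plane orthogonal to d
        return np.eye(3) - np.outer(d, d)
    A = proj(d1) + proj(d2)
    b = proj(d1) @ p1 + proj(d2) @ p2
    return np.linalg.solve(A, b)      # point minimizing distance to both rays

p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0])
p2, d2 = np.array([0.5, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0])
d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
print(triangulate(p1, d1, p2, d2))
```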

Improved Motion-Recognizing Remote Controller for Realistic Contents (실감형 컨텐츠를 위한 향상된 동작 인식 리모트 컨트롤러)

  • Park, Gun-Hyuk; Kim, Sang-Ki; Yim, Sung-Hoon; Han, Gab-Jong; Choi, Seung-Moon; Choi, Seung-Jin; Eoh, Hong-Jun; Cho, Sun-Young
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.396-401 / 2009
  • This paper describes improvements made to the hardware and software of a remote controller for realistic contents. The controller can provide vibrotactile feedback using both a voice-coil actuator and a vibration motor. A vision tracking system for the 3D position of the controller is optimized with respect to the marker size and camera parameters. We also present improvements in motion recognition achieved through effective motion segmentation and the fusion of vision and acceleration data. We apply the developed controller to realistic contents and validate its usability.
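
A toy illustration of fusing low-rate vision positions with high-rate acceleration data for controller tracking, using a simple complementary-style correction; the rates, gain, and fusion form are assumptions and may differ from the paper's method.

```python
# Dead-reckon on accelerometer data between vision frames, then pull toward vision.
import numpy as np

dt = 0.01                                  # assumed 100 Hz accelerometer update
alpha = 0.05                               # strength of the vision correction
pos, vel = np.zeros(3), np.zeros(3)

for step in range(30):
    accel = np.array([0.0, 0.0, 0.2])      # controller acceleration in world frame
    vel = vel + accel * dt                 # integrate acceleration
    pos = pos + vel * dt
    if step % 3 == 0:                      # pretend a vision fix arrives every 3rd step
        vision_pos = np.array([0.0, 0.0, 0.001 * step])
        pos = (1.0 - alpha) * pos + alpha * vision_pos
print(pos)
```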
