• Title/Summary/Keyword: Vision Model

Search Results: 1,319

An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme (로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구)

  • Min, Kwan-Ung; Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.19 no.1 / pp.15-25 / 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used for a robot vision control scheme. The vision control scheme involves two estimation models: a parameter estimation model and a joint angle estimation model. For this study, the robot's working region is divided into three work spaces: left, central, and right. Cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine camera arrangements are selected in each work space, and each arrangement uses three cameras. Six parameters are estimated for each camera using the developed parameter estimation model in order to show the suitability of the vision system model for the nine arrangements in each work space. Finally, the robot's joint angles are estimated using the joint angle estimation model according to the camera arrangement for point-position control. The effect of the camera arrangement used in the vision control scheme on the robot's point-position control is thus shown experimentally.
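The per-camera parameter-estimation step lends itself to a small illustration. The sketch below fits six view parameters by nonlinear least squares; the affine camera form, the synthetic cue points, and all variable names are assumptions for illustration, not the paper's actual vision-system model.

```python
# A minimal sketch of fitting six view parameters per camera by least squares.
# The affine projection form below is an assumed stand-in for the paper's model.
import numpy as np
from scipy.optimize import least_squares

def project(params, pts_3d):
    """Assumed 6-parameter affine camera: maps (X, Y, Z) to image (u, v)."""
    c1, c2, c3, c4, c5, c6 = params
    X, Y, Z = pts_3d.T
    u = c1 * X + c2 * Y + c3          # hypothetical model form
    v = c4 * X + c5 * Y + c6
    return np.column_stack([u, v])

def residuals(params, pts_3d, pts_2d):
    return (project(params, pts_3d) - pts_2d).ravel()

# pts_3d: known cue positions on the robot; pts_2d: their measured image points.
pts_3d = np.random.rand(20, 3)
true_params = np.array([120.0, -15.0, 320.0, 10.0, 130.0, 240.0])
pts_2d = project(true_params, pts_3d) + np.random.normal(0, 0.5, (20, 2))

fit = least_squares(residuals, x0=np.ones(6), args=(pts_3d, pts_2d))
print("estimated view parameters:", fit.x)
```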

Development of Vision System Model for Manipulator's Assemble task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.6 no.2 / pp.10-18 / 1997
  • This paper presents the development of real-time estimation and control details for a computer vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of the control points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes the known kinematics of a 4-axis Scorbot manipulator to accommodate unknown relative camera position and orientation. This model uses six uncertainty-of-view parameters, estimated by an iterative method. The method is tested experimentally in two ways: first, the validity of the estimation model is verified using a self-built test model; second, the practicality of the presented control method is verified by performing a 4-axis manipulator assembly task. The results show that the control scheme is precise and robust, which can open the door to a range of multi-axis robot applications such as deburring and welding.
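One way to picture a sequential estimation scheme is to update the six view parameters one image measurement at a time with recursive least squares, as sketched below. The linear-in-parameters camera form, the noise levels, and all names are illustrative assumptions; the paper's actual model and iteration differ in detail.

```python
# A minimal recursive-least-squares sketch of sequential view-parameter estimation.
import numpy as np

def rls_update(theta, P, H, z):
    """One RLS step: parameters theta, covariance P, regressor rows H, measurement z."""
    S = H @ P @ H.T + np.eye(len(z))        # innovation covariance (unit noise assumed)
    K = P @ H.T @ np.linalg.inv(S)          # gain
    theta = theta + K @ (z - H @ theta)
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P

theta = np.zeros(6)                         # six view parameters, initially unknown
P = np.eye(6) * 1e3                         # large initial uncertainty

true = np.array([120.0, -15.0, 320.0, 10.0, 130.0, 240.0])
rng = np.random.default_rng(0)
for _ in range(50):                         # one cue point per sequential step
    X, Y = rng.uniform(-1, 1, size=2)
    H = np.array([[X, Y, 1, 0, 0, 0],       # u-equation regressor
                  [0, 0, 0, X, Y, 1]])      # v-equation regressor
    z = H @ true + rng.normal(0, 0.5, size=2)
    theta, P = rls_update(theta, P, H, z)

print("estimated view parameters:", np.round(theta, 1))
```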


Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang; Lee, Young Jae; Kim, Chang Joo; Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences / v.14 no.4 / pp.369-378 / 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance over a general ground surface in GNSS-denied environments. The ground surface considered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. In its implementation, the presented system consists of a strapdown IMU and an aiding sensor block consisting of a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical flow vectors from the vision sensor and range information from the LiDAR to the ground are used to overcome the performance limit of a tactical-grade inertial navigation solution without a GNSS signal. In the filter realization, the INS error model is employed, with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, with a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented based on the extended Kalman filter framework, and its performance is demonstrated through a simulation study with an aircraft flight trajectory scenario.
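The slope-angle idea can be illustrated with simple planar geometry: two range returns at known beam angles about nadir locate two ground points, and the line through them gives an inclination. The +/- delta beam layout and sign conventions below are assumptions for illustration, not the paper's exact bisectional-beam formulation.

```python
# A minimal sketch of recovering a ground slope angle from two LiDAR ranges
# measured at beam angles +/- delta about the gimbal's nadir direction.
import numpy as np

def slope_from_two_beams(r_fwd, r_aft, delta_rad):
    """Ground slope angle from two range returns at beam angles +/- delta."""
    # Ground hit points in the vertical plane (x forward, z down).
    p_fwd = np.array([r_fwd * np.sin(delta_rad), r_fwd * np.cos(delta_rad)])
    p_aft = np.array([-r_aft * np.sin(delta_rad), r_aft * np.cos(delta_rad)])
    dx, dz = p_fwd - p_aft
    # Positive angle means the ground rises in the forward direction (z is down).
    return np.arctan2(-dz, dx)

print(np.degrees(slope_from_two_beams(r_fwd=10.2, r_aft=10.9,
                                      delta_rad=np.radians(15))))
```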

Object Recognition Using Planar Surface Segmentation and Stereo Vision

  • Kim, Do-Wan; Kim, Sung-Il; Won, Sang-Chul
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2004.08a / pp.1920-1925 / 2004
  • This paper describes a new method for 3D object recognition that uses surface-segment-based stereo vision. The position and orientation of objects are identified accurately enough for a robot to pick them up, even when the objects are multiple and partially occluded. Stereo vision is used as the 3D sensing to obtain 3D information, and a CAD model with post-processing is used for building models. Matching is initially performed using the model and object features to calculate a rough estimate of the object's position and orientation. Through a fine adjustment step, the accuracy of the position and orientation is then improved.
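As a small illustration of the stereo-sensing step, the sketch below recovers a 3D point from a rectified stereo correspondence via its disparity. The focal length, baseline, and principal point are placeholder values, and equal focal lengths in x and y are assumed.

```python
# A minimal sketch of stereo triangulation for a rectified camera pair.
import numpy as np

def triangulate(u_left, u_right, v, fx, baseline, cx, cy):
    """Depth and 3D point from a rectified stereo correspondence (fy = fx assumed)."""
    disparity = u_left - u_right              # pixels
    Z = fx * baseline / disparity             # depth along the optical axis
    X = (u_left - cx) * Z / fx
    Y = (v - cy) * Z / fx
    return np.array([X, Y, Z])

point = triangulate(u_left=412.0, u_right=396.0, v=250.0,
                    fx=700.0, baseline=0.12, cx=320.0, cy=240.0)
print(point)   # 3D coordinates in the left-camera frame (meters)
```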


Vision-based technique for bolt-loosening detection in wind turbine tower

  • Park, Jae-Hyung; Huynh, Thanh-Canh; Choi, Sang-Hoon; Kim, Jeong-Tae
    • Wind and Structures / v.21 no.6 / pp.709-726 / 2015
  • In this study, a novel vision-based bolt-loosening monitoring technique is proposed for bolted joints connecting the tubular steel segments of a wind turbine tower (WTT) structure. First, a bolt-loosening detection algorithm based on image processing techniques is developed. The algorithm consists of five steps: image acquisition, segmentation of each nut, line detection for each nut, nut angle estimation, and bolt-loosening detection. Second, experimental tests are conducted on a lab-scale bolted joint model under various bolt-loosening scenarios. The bolted joint model, which consists of a ring flange and 32 bolt-and-nut sets, simulates a real bolted joint connecting steel tower segments in a WTT. Finally, the feasibility of the proposed vision-based technique is evaluated by monitoring bolt loosening in the lab-scale bolted joint model.
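The line-detection and nut-angle-estimation steps can be sketched with standard OpenCV primitives, as below. The region of interest, thresholds, synthetic test image, and the 60-degree hex-nut symmetry handling are illustrative assumptions rather than the paper's exact algorithm.

```python
# A minimal OpenCV sketch of estimating a nut's rotation angle from edge lines.
import cv2
import numpy as np

def nut_angle(image_bgr, nut_roi):
    """Estimate a nut's rotation angle from the longest straight edge in its ROI."""
    x, y, w, h = nut_roi
    gray = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=int(0.4 * w), maxLineGap=5)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    # Loosening shows up as a change of this angle versus a baseline image.
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 60.0   # hex nut: 60-deg symmetry

# Synthetic test frame: a hexagonal "nut" outline drawn on a white background.
img = np.full((300, 300, 3), 255, np.uint8)
hexagon = np.array([[150 + int(60 * np.cos(a)), 150 + int(60 * np.sin(a))]
                    for a in np.radians([0, 60, 120, 180, 240, 300])], dtype=np.int32)
cv2.polylines(img, [hexagon], isClosed=True, color=(0, 0, 0), thickness=3)
print(nut_angle(img, nut_roi=(80, 80, 140, 140)))
```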

Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer (Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망)

  • Kwang-Yeob Lee; Ji-Won Lee; Tae-Ryong Park
    • Journal of IKEEE / v.27 no.3 / pp.313-318 / 2023
  • This paper proposes a fine-tuned neural network to improve the performance of video classification based on the Vision Transformer. Recently, the need for real-time video analysis based on deep learning has emerged. Due to the characteristics of the CNN models conventionally used for image classification, it is difficult to analyze the association between consecutive frames. We seek an optimal model by comparing and analyzing the Vision Transformer and Non-local neural network models, both built on the attention mechanism. In addition, we propose an optimal fine-tuned model by applying various fine-tuning methods as a form of transfer learning. In the experiments, the model is trained on the UCF101 dataset and its performance is then verified by applying the transfer learning method to the UTA-RLDD dataset.
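A minimal PyTorch sketch of the fine-tuning idea follows: load a pretrained Vision Transformer from torchvision, replace the classification head, and freeze the backbone (one of several fine-tuning variants). Treating a clip as independent frames with average pooling is a simplification for illustration; the class count and tensor shapes are placeholders, not the paper's pipeline.

```python
# A minimal sketch of fine-tuning a pretrained ViT for classification.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

num_classes = 101                      # e.g. UCF101; change for the target dataset
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone; train only the new head (one fine-tuning variant).
for p in model.parameters():
    p.requires_grad = False
model.heads = nn.Linear(model.hidden_dim, num_classes)

frames = torch.randn(8, 3, 224, 224)   # 8 frames of one clip (dummy data)
logits = model(frames)                 # per-frame class scores
clip_logits = logits.mean(dim=0)       # naive temporal pooling over frames
print(clip_logits.shape)               # torch.Size([101])
```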

A Study on Developing a High-Resolution Digital Elevation Model (DEM) of a Tunnel Face (터널 막장면 고해상도 DEM(Digital Elevation Model) 생성에 관한 연구)

  • Kim, Kwang-Yeom; Kim, Chang-Yong; Baek, Seung-Han; Hong, Sung-Wan; Lee, Seung-Do
    • Proceedings of the Korean Geotechnical Society Conference / 2006.03a / pp.931-938 / 2006
  • Using a high-resolution stereoscopic imaging system, a three-dimensional digital elevation model of the tunnel face is acquired. The images, oriented within a given tunnel coordinate system, are brought into a stereoscopic vision system that enables three-dimensional inspection and evaluation. The possibilities for prediction ahead of and outside the tunnel face are improved by the digital vision system with the 3D model. Interpolated image structures of the rock mass between subsequent stereo images make it possible to model the rock mass surrounding the opening within a short time on site. The models can be used as input to numerical simulations on site, for comparison of expected and encountered geological conditions, and for the interpretation of geotechnical monitoring results.
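As a small illustration of how triangulated face points could become a gridded elevation model, the sketch below bins points on a regular grid and averages their depths. The synthetic points and cell size are placeholders; the actual DEM generation and interpolation described above are more involved.

```python
# A minimal sketch of gridding scattered 3D points into a simple DEM.
import numpy as np

def grid_dem(points_xyz, cell=0.05):
    """Average point depth (z) on a regular x-y grid to form a simple DEM."""
    x, y, z = points_xyz.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    dem = np.full((ix.max() + 1, iy.max() + 1), np.nan)   # NaN where no points fall
    sums = np.zeros_like(dem)
    counts = np.zeros_like(dem)
    np.add.at(sums, (ix, iy), z)
    np.add.at(counts, (ix, iy), 1)
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]
    return dem

pts = np.random.rand(5000, 3) * [4.0, 4.0, 0.3]   # synthetic tunnel-face points (m)
print(grid_dem(pts).shape)
```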


A Hierarchical Motion Controller for Soccer Robots with Stand-alone Vision System (독립 비젼 시스템 기반의 축구로봇을 위한 계층적 행동 제어기)

  • Lee, Dong-Il; Kim, Hyung-Jong; Kim, Sang-Jun; Jang, Jae-Wan; Choi, Jung-Won; Lee, Suk-Gyu
    • Journal of the Korean Society for Precision Engineering / v.19 no.9 / pp.133-141 / 2002
  • In this paper, we propose a hierarchical motion controller with a stand-alone vision system to enhance the flexibility of a robot soccer system. In addition, we simplify the model of the robot's dynamic environment using a Petri net and a simple state diagram. Based on the proposed model, we design a robot soccer system with velocity and position controllers organized as a 4-level hierarchically structured controller. Experimental results with the vision system running stand-alone from the host system show improved controller performance through reduced processing time of the vision algorithm.
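The layered idea, a high-level behavior selector on top of a low-level position/velocity controller, can be sketched as below. The two layers, states, gains, and thresholds are illustrative assumptions and much simpler than the paper's 4-level Petri-net-based design.

```python
# A minimal two-layer controller sketch for a differential-drive soccer robot.
import math

def select_behavior(robot_xy, ball_xy, goal_xy):
    """Top layer: crude behavior selection from vision measurements."""
    dist_to_ball = math.dist(robot_xy, ball_xy)
    return ("SHOOT", goal_xy) if dist_to_ball < 0.1 else ("GO_TO_BALL", ball_xy)

def position_controller(robot_pose, target_xy, k_v=1.0, k_w=2.0):
    """Bottom layer: proportional position/heading control -> (v, w)."""
    x, y, theta = robot_pose
    dx, dy = target_xy[0] - x, target_xy[1] - y
    heading_error = math.atan2(dy, dx) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return k_v * math.hypot(dx, dy), k_w * heading_error

behavior, target = select_behavior((0.0, 0.0), (0.5, 0.2), (2.0, 0.0))
v, w = position_controller((0.0, 0.0, 0.0), target)
print(behavior, round(v, 3), round(w, 3))
```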

Real-Time Control of a SCARA Robot by Visual Servoing with the Stereo Vision

  • Han, S. H.; Lee, M. H.; Son, K.; Lee, M. C.; Park, J. W.; Lee, J. M.
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1998.10a / pp.238-243 / 1998
  • This paper presents a new approach to visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. The stereo vision makes it possible to calculate an exact image Jacobian not only around the desired location but also at other locations. The suggested technique can guide a robot manipulator to the desired location without a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. This paper describes a model of the stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results and compared with a conventional method for a SCARA robot.
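As a sketch of how an image Jacobian turns feature errors into motion commands, the snippet below stacks the standard point-feature interaction matrix (using depths such as a stereo rig would provide) and applies the classical image-based visual servoing law v = -lambda * L^+ * e. The feature values, depths, and gain are placeholders, and this is the textbook formulation rather than the paper's exact method.

```python
# A minimal image-based visual servoing sketch using the point interaction matrix.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_command(features, desired, depths, gain=0.5):
    """Map feature error to a 6-DOF camera twist: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

features = [(0.10, 0.05), (-0.08, 0.04), (0.02, -0.09)]   # current normalized points
desired = [(0.00, 0.00), (-0.10, 0.00), (0.00, -0.10)]    # desired normalized points
depths = [1.2, 1.1, 1.3]                                   # e.g. from stereo reconstruction
print(ibvs_command(features, desired, depths))
```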


Computer Vision Platform Design with MEAN Stack Basis (MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계)

  • Hong, Seonhack; Cho, Kyungsoon; Yun, Jinseob
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.1-9 / 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack using the Raspberry Pi 2, an open-source platform. We experimented with face recognition and with logging temperature and humidity sensor data over WiFi on the Raspberry Pi 2, and fabricated the platform's enclosure directly with a 3D-printed design. We used an OpenCV face recognition algorithm based on Haar-cascade feature extraction, and extended the wireless communication capability with Bluetooth to interface with Android mobile devices. The platform thus identifies faces scanned with the Pi camera while gathering temperature and humidity sensor data in an IoT environment, with its housing produced by 3D printing. We used MongoDB to improve the performance of the vision platform, because working with MongoDB is more akin to working with objects in a programming language than with a conventional database. In future work, we would enhance the platform's performance with cloud functionality.
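The Haar-cascade face-detection step can be sketched with OpenCV as below. The detection parameters are typical defaults and the blank input frame is a placeholder for a Pi camera capture; this is not the paper's full MEAN-stack pipeline.

```python
# A minimal OpenCV Haar-cascade face-detection sketch.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Placeholder frame; in the platform this would be a Pi camera capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(40, 40))
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # mark detections
print(f"{len(faces)} face(s) detected")
```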