• Title/Summary/Keyword: Vision-based Manipulation

Microassembly System for the assembly of photonic components (광 부품 조립을 위한 마이크로 조립 시스템)

  • 강현재;김상민;남궁영우;김병규
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.241-245
    • /
    • 2003
  • In this paper, a microassembly system based on hybrid manipulation schemes is proposed and applied to the assembly of a photonic component. To achieve both high precision and dexterity in microassembly, we propose a hybrid microassembly system with sensory feedback from vision and force. The system consists of distributed 6-DOF micromanipulation units, a stereo microscope, and a haptic interface for force feedback-based microassembly. A hybrid assembly method is proposed that combines vision-based microassembly with scaled teleoperated microassembly under force feedback. The feasibility of the proposed method is investigated via experimental studies on the assembly of micro opto-electrical components. Experimental results show that the hybrid microassembly system is feasible for assembling commercial photonic components with improved flexibility and efficiency.
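
The hybrid scheme (autonomous vision-based positioning handed over to scaled, force-reflecting teleoperation near contact) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation; the device interfaces, gains, and thresholds are hypothetical.

```python
# Hypothetical sketch of a hybrid microassembly control cycle:
# coarse vision-based positioning, then scaled teleoperation with
# force feedback once the part is near the mating position.

APPROACH_TOL_UM = 5.0    # hand over to teleoperation within 5 um (assumed)
MOTION_SCALE = 1e-3      # master-to-slave motion scaling (assumed)
FORCE_SCALE = 1e3        # slave-to-master force amplification (assumed)

def norm(v):
    return sum(x * x for x in v) ** 0.5

def scale(v, k):
    return [x * k for x in v]

def hybrid_assembly_step(vision, manipulator, haptic):
    """One control cycle of the hybrid scheme (all interfaces hypothetical)."""
    error_um = vision.target_offset()  # 3D offset to mating position, in um
    if norm(error_um) > APPROACH_TOL_UM:
        # Phase 1: autonomous visual servoing toward the mating position.
        manipulator.move_relative(scale(error_um, 0.5))  # proportional step
    else:
        # Phase 2: scaled teleoperation; the operator feels amplified
        # contact forces through the haptic interface.
        manipulator.move_relative(scale(haptic.master_motion(), MOTION_SCALE))
        haptic.render_force(scale(manipulator.contact_force(), FORCE_SCALE))
```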

A Vision Based Bio-Cell Recognition for Biomanipulation with Multiple Views

  • Jang, Min-Soo;Lee, Seok-Joo;Lee, Ho-Dong;Kim, Byung-Kyu;Park, Jong-Oh;Park, Gwi-Tae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.2435-2440
    • /
    • 2003
  • Manipulation of nano/micro-scale objects is a key technology in biology, since DNA, chromosomes, nuclei, cells, and embryos fall within this size range. For embryo cell manipulation, for instance, cell injection is performed manually, and an operator often spends over a year on a single cell manipulation project. Since the typical success rate of such operations is extremely low, automation of biological cell manipulation has been demanded: the operator spends most of the time locating the cell in the Petri dish and injecting bio-material into the cell from the best orientation. In this paper, we propose a new strategy and a vision system by which one can find, recognize, and track the nucleus, polar body, and zona pellucida of an embryo cell for automatic biomanipulation. A deformable template matching algorithm is used to recognize the nucleus and polar body of each cell. Results suggest that it outperforms conventional methods.
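
As a rough illustration of template matching for locating cell structures, the sketch below uses OpenCV's rigid normalized cross-correlation matcher, with a scale sweep as a crude stand-in for deformation; the paper's deformable template algorithm is more general (it also optimizes shape parameters).

```python
# Simplified stand-in for deformable template matching: plain
# normalized cross-correlation matching with OpenCV, plus a scale
# sweep as a crude approximation of template deformation.
import cv2

def find_structure(image_gray, template_gray, threshold=0.7):
    """Return the best-matching location of a cell structure, or None."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

def find_structure_multi(image_gray, template_gray, scales=(0.8, 1.0, 1.2)):
    """Search over a few template scales; return (location, score)."""
    best = (None, -1.0)
    for s in scales:
        t = cv2.resize(template_gray, None, fx=s, fy=s)
        result = cv2.matchTemplate(image_gray, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[1]:
            best = (max_loc, max_val)
    return best
```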

A Dexterous Teleoperation System for Micro Parts Handling (마이크로 조립시스템의 원격제어)

  • Kim, Deok-Ho;Kim, Kyung-Hwan;Kim, Keun-Young;Park, Jong-Oh
    • Proceedings of the KSME Conference
    • /
    • 2001.06b
    • /
    • pp.158-163
    • /
    • 2001
  • Operators have great difficulty manipulating micro/nano-sized objects without the assistance of human interfaces, due to scaling effects in the micro/nano world. This paper presents a micromanipulation system based on teleoperation techniques that enables operators to manipulate objects with ease by transferring both human motion and manipulation skill to a micromanipulator. An experimental setup consisting of a micromanipulator operated under a stereo microscope, with the help of an intelligent user interface, provides a tool for visualizing and manipulating micro-sized 3D objects in a controlled manner. The key features of the micromanipulation system and the control strategies for handling micro objects via teleoperation are presented. Experimental results demonstrate the feasibility of the system in precisely controlling the trapping and manipulation of micro objects.
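
The motion-scaling step at the heart of such teleoperation can be sketched as follows; the scale factor and tremor filter are illustrative assumptions, not values from the paper.

```python
# Minimal motion-scaling sketch for teleoperated micromanipulation.

MOTION_SCALE = 1.0 / 1000.0  # 1 mm of hand motion -> 1 um at the tool (assumed)
ALPHA = 0.2                  # low-pass factor to suppress hand tremor (assumed)

class ScaledTeleoperation:
    def __init__(self):
        self._filtered = (0.0, 0.0, 0.0)

    def slave_command(self, master_delta):
        """Map a master motion increment (meters) to a slave increment."""
        # Exponential smoothing attenuates physiological tremor.
        self._filtered = tuple(
            ALPHA * m + (1.0 - ALPHA) * f
            for m, f in zip(master_delta, self._filtered)
        )
        return tuple(MOTION_SCALE * f for f in self._filtered)

teleop = ScaledTeleoperation()
print(teleop.slave_command((0.001, 0.0, 0.0)))  # 1 mm hand step -> sub-um command
```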

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.553-556
    • /
    • 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method that detects and recognizes hand regions in images using geometrical structures. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.
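
As a point of reference for the "statistical" part, a common baseline is a skin-color model; the sketch below uses fixed YCrCb chrominance bounds (a standard heuristic, not the paper's method, which adds geometrical structures and multiple cameras).

```python
# Illustrative statistical hand-region detector: a simple skin-color
# model in YCrCb space, followed by largest-blob extraction.
import cv2
import numpy as np

# Common skin chrominance bounds (heuristic, not from the paper).
SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)
SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)

def detect_hand_region(frame_bgr):
    """Return the bounding box of the largest skin-colored blob, or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
    # Morphological opening removes small speckle responses.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```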

Vision-Based Robot Manipulator for Grasping Objects (물체 잡기를 위한 비전 기반의 로봇 메뉴플레이터)

  • Baek, Young-Min;Ahn, Ho-Seok;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.331-333
    • /
    • 2007
  • The manipulator is one of the important components of a service robot. There has been much research on robot manipulators that imitate human functions by recognizing and grasping objects. In this paper, we present a robot arm based on an object recognition vision system. We implemented closed-loop control that uses feedback from visual information, and we added a sonar sensor to improve accuracy. A web camera is placed on top of the hand to recognize objects. We also discuss some vision-based manipulation issues and the features of our system.
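
A minimal sketch of the closed-loop idea follows: an image-based servo loop that centers the detected object in the wrist-mounted camera and uses the sonar range to gate the grasp. All device interfaces and gains are assumptions, not the authors' code.

```python
# Hypothetical image-based visual servo with a sonar-gated grasp.

IMAGE_CENTER = (320, 240)  # assumed 640x480 camera
GAIN = 0.002               # pixel error -> motion command (assumed)
GRASP_RANGE_M = 0.05       # sonar distance at which to close gripper (assumed)

def servo_step(object_px, sonar_range_m, arm):
    """One visual-servoing iteration; returns True once grasping."""
    ex = object_px[0] - IMAGE_CENTER[0]
    ey = object_px[1] - IMAGE_CENTER[1]
    if abs(ex) > 5 or abs(ey) > 5:
        # Move laterally until the object is centered in the image.
        arm.move_relative(dx=-GAIN * ex, dy=-GAIN * ey, dz=0.0)
        return False
    if sonar_range_m > GRASP_RANGE_M:
        # Centered but still far: approach along the camera axis.
        arm.move_relative(dx=0.0, dy=0.0, dz=-0.01)
        return False
    arm.close_gripper()
    return True
```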

Facial Manipulation Detection with Transformer-based Discriminative Features Learning Vision (트랜스포머 기반 판별 특징 학습 비전을 통한 얼굴 조작 감지)

  • Van-Nhan Tran;Minsu Kim;Philjoo Choi;Suk-Hwan Lee;Hoanh-Su Le;Ki-Ryong Kwon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.540-542
    • /
    • 2023
  • Due to the serious issues posed by facial manipulation technologies, many researchers are becoming increasingly interested in the detection of face forgeries. The majority of existing face forgery detection methods leverage the powerful data adaptation ability of neural networks to derive distinguishing features. These deep learning-based methods frequently treat fake-face detection as a binary classification problem and employ a softmax loss to supervise CNN training. However, the features learned under a softmax loss are insufficiently discriminative. To overcome these limitations, we introduce in this study a novel discriminative feature learning scheme based on the Vision Transformer architecture. Additionally, a separation-center loss is designed to compress the intra-class variation of genuine faces while enhancing inter-class differences in the embedding space.
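
The separation-center idea (compress genuine-face embeddings, push others away) can be sketched as a center-loss variant. The following PyTorch code is a hedged illustration consistent with the abstract, not the paper's exact formulation; the margin and loss form are assumptions.

```python
# Sketch of a "compress genuine, separate fake" center-style loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparationCenterLoss(nn.Module):
    def __init__(self, feat_dim, margin=1.0):
        super().__init__()
        self.center = nn.Parameter(torch.zeros(feat_dim))  # genuine-face center
        self.margin = margin

    def forward(self, features, labels):
        """features: (N, D) embeddings; labels: (N,) with 1 = genuine, 0 = fake."""
        dist = torch.norm(features - self.center, dim=1)
        genuine = labels == 1
        zero = features.sum() * 0  # graph-connected zero fallback
        # Pull genuine embeddings toward the center...
        pull = dist[genuine].pow(2).mean() if genuine.any() else zero
        # ...and push fake embeddings at least `margin` away from it.
        push = (F.relu(self.margin - dist[~genuine]).pow(2).mean()
                if (~genuine).any() else zero)
        return pull + push

# Typically combined with a classification loss, e.g.
# loss = ce_loss + lambda_sc * sep_center_loss(embeddings, labels)
```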

A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space locations of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done with an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as manipulator uncertainty. The vision control method is robust and reliable, overcoming difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method to control the manipulator's position is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking it and providing information for its control.
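
The core of camera-space manipulation is fitting, online, a model from joint coordinates to the camera-space location of a visual cue, with no camera calibration. The sketch below substitutes a plain linear least-squares model for the paper's kinematics-based estimation model; the feature vector is an assumption.

```python
# Sketch of online camera-space estimation via linear least squares.
import numpy as np

class CameraSpaceEstimator:
    """Refit u = phi(q) @ A from accumulated (joints, pixel) samples."""

    def __init__(self):
        self.samples_q, self.samples_u = [], []
        self.A = None

    @staticmethod
    def phi(q):
        # Simple affine feature vector of the joint coordinates (assumed form).
        return np.concatenate([q, [1.0]])

    def add_sample(self, q, u_px):
        self.samples_q.append(self.phi(np.asarray(q, float)))
        self.samples_u.append(np.asarray(u_px, float))
        Q = np.vstack(self.samples_q)   # (N, d)
        U = np.vstack(self.samples_u)   # (N, 2)
        self.A, *_ = np.linalg.lstsq(Q, U, rcond=None)

    def predict(self, q):
        return self.phi(np.asarray(q, float)) @ self.A

est = CameraSpaceEstimator()
est.add_sample([0.0, 0.0], [320, 240])
est.add_sample([0.1, 0.0], [350, 238])
est.add_sample([0.0, 0.1], [322, 270])
print(est.predict([0.05, 0.05]))  # estimated cue location in pixels
```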

Force Arrow: An Efficient Pseudo-Weight Perception Method

  • Lee, Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.7
    • /
    • pp.49-56
    • /
    • 2018
  • Virtual object weight perception is an important topic, as it heightens the believability of object manipulation in immersive virtual environments. Although weight perception can be achieved using haptic interfaces, their technical complexity makes them difficult to apply in immersive virtual environments. In this study, we present a visual pseudo-haptic feedback system that simulates and depicts the weights of virtual objects, producing an effect of weight perception. The proposed method recognizes grasping and manipulating hand motions using computer vision-based tracking, and visualizes a Force Arrow indicating the current lifting force and its difference from the standard lifting force. With the proposed Force Arrow method, a user can more accurately perceive the unidirectional weight of a virtual object and control the force used to lift it. In this paper, we investigate the potential of the proposed method for discriminating between different weights of virtual objects.
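
The underlying computation can be sketched simply: compare the inferred lifting force against the standard force for the object's virtual weight, and render the difference as an arrow. The constants and force model below are illustrative assumptions, not the paper's design.

```python
# Illustrative pseudo-haptic Force Arrow computation.

GRAVITY = 9.81
PIXELS_PER_NEWTON = 20.0  # arrow length scaling (assumed)

def force_arrow(hand_accel_z, virtual_mass_kg):
    """Return (arrow_length_px, direction) for on-screen rendering."""
    standard_force = virtual_mass_kg * GRAVITY  # force to hold the object still
    lifting_force = virtual_mass_kg * (GRAVITY + hand_accel_z)
    diff = lifting_force - standard_force       # > 0: lifting harder than needed
    direction = "up" if diff >= 0 else "down"
    return abs(diff) * PIXELS_PER_NEWTON, direction

print(force_arrow(hand_accel_z=0.5, virtual_mass_kg=2.0))  # (20.0, 'up')
```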

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.41-43
    • /
    • 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in the robotics industry and academia. However, the framework has long focused on robot navigation and the manipulation of objects in the environment, leaving out other important fields such as speech recognition and vision abilities. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much more accurate understanding of the environment. The data from the cameras are then incorporated into the ROS communication structure for any potential use. In this particular case, the system uses OpenCV libraries to manipulate the camera data and provide face-detection capability to the robot while it navigates an indoor environment. The whole system has been implemented and tested on a TurtleBot3 and a Raspberry Pi 4.
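
A minimal sketch of such a node, assuming a standard RGB topic name; cv_bridge and OpenCV's bundled Haar cascade are standard components, but the exact setup here is not taken from the paper.

```python
# Minimal ROS node running OpenCV face detection on a camera topic.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def on_image(msg):
    # Convert the ROS image message to an OpenCV BGR frame.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    rospy.loginfo("detected %d face(s)", len(faces))

if __name__ == "__main__":
    rospy.init_node("face_detector")
    rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)  # topic assumed
    rospy.spin()
```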

Development and Validation of a Vision-Based Needling Training System for Acupuncture on a Phantom Model

  • Trong Hieu Luu;Hoang-Long Cao;Duy Duc Pham;Le Trung Chanh Tran;Tom Verstraten
    • Journal of Acupuncture Research
    • /
    • v.40 no.1
    • /
    • pp.44-52
    • /
    • 2023
  • Background: Previous studies have investigated technology-aided needling training systems for acupuncture on phantom models using various measurement techniques. In this study, we developed and validated a vision-based needling training system (noncontact measurement) and compared its training effectiveness with that of the traditional training method. Methods: Needle displacements during manipulation were analyzed using OpenCV to derive three parameters, i.e., needle insertion speed, needle insertion angle (needle tip direction), and needle insertion length. The system was validated in a laboratory setting and in a needling training course. The performances of the novices (students) before and after training were compared with those of the experts, and the technology-aided training method was compared with the traditional training method. Results: Before the training, a significant difference in needle insertion speed was found between experts and novices. After the training, the novices approached the speed of the experts. Both training methods improved the insertion speed of the novices after 10 training sessions; however, the technology-aided training group already showed improvement after five sessions. Students and teachers showed positive attitudes toward the system. Conclusion: The results suggest that the technology-aided method using computer vision has training effectiveness similar to that of the traditional method and can potentially speed up needling training.
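
Given a tracked needle tip and tail per frame (the OpenCV tracking itself is omitted here), the three reported parameters reduce to simple geometry. A sketch in pixel units; a mm-per-pixel calibration, not shown, would convert to physical units.

```python
# Deriving needle insertion speed, angle, and length from tracked points.
import numpy as np

def needling_parameters(tips, tails, fps):
    """tips/tails: (N, 2) pixel positions over N frames; fps: frame rate."""
    tips, tails = np.asarray(tips, float), np.asarray(tails, float)
    # Insertion speed: mean tip displacement per unit time (pixels/s).
    step = np.linalg.norm(np.diff(tips, axis=0), axis=1)
    speed = step.mean() * fps
    # Insertion angle: direction of the needle axis (tail -> tip).
    axis = tips[-1] - tails[-1]
    angle_deg = np.degrees(np.arctan2(axis[1], axis[0]))
    # Insertion length: net tip travel from first to last frame.
    length = np.linalg.norm(tips[-1] - tips[0])
    return speed, angle_deg, length

speed, angle, length = needling_parameters(
    tips=[(100, 50), (100, 60), (100, 72)],
    tails=[(100, 10), (100, 20), (100, 32)],
    fps=30)
print(speed, angle, length)  # pixels/s, degrees, pixels
```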