• Title/Summary/Keyword: 3D autonomous system

Search Results: 153 (processing time: 0.025 seconds)

Multisensor System Integrating Optical Tactile and F/T Sensors for Determination of Type and Position of 3D Contact Surface (3차원 접촉면의 인식 및 위치의 결정의 위한 광촉각센서와 역각센서의 다중센서시스템)

  • 한헌수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.2
    • /
    • pp.10-19
    • /
    • 1996
  • This paper presents a finger-shaped multisensor system which can measure the type and position of a target surface by contact. The multisensor system consists of a sphere-shaped optical tactile sensor located at the fingertip and a force/torque sensor located at the joint of the finger. The optical tactile sensor determines the type and position of the target surface using the shape and position of the CCD image of the touching area generated by contact between the sensor and the target surface. The force/torque sensor also determines the position and surface normal vector by applying the distribution of forces and torques at the contact point to the equations of the finger shape. The measurements of the position and surface normal vector at a contact point obtained by the two individual sensors are fused using a statistical method. The integrated sensor system shows a 0.8 mm error in position measurement and a 1.31° error in normal vector measurement. The developed sensor system has many applications, such as autonomous compliance control, automatic grasping, and recognition.

  • PDF
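The abstract above says the two sensors' position and normal estimates are "fused using a statistical method" but does not spell the method out; a common such method is inverse-variance (maximum-likelihood) weighting, sketched below with made-up example numbers, not values from the paper.

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance (maximum-likelihood) fusion of two scalar
    measurements of the same quantity. Each measurement is weighted by
    the reciprocal of its variance; the fused variance is smaller than
    either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return x, var

# Hypothetical example: tactile sensor reads 10.2 mm (variance 0.64),
# F/T sensor reads 9.8 mm (variance 0.36); fusion favors the less
# noisy F/T reading while tightening the combined uncertainty.
x, var = fuse(10.2, 0.64, 9.8, 0.36)
```

The same weighting applies per component to the 3D position and normal vectors; the fused estimate always lies between the two inputs, closer to the more certain one.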

A Development of Simulation System for 3D Path Planning of UUV (무인잠수정의 3차원 경로계획을 위한 시뮬레이션 시스템 개발)

  • Shin, Seoung-Chul;Seon, Hwi-Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.701-704
    • /
    • 2010
  • In studying autonomous navigation techniques for a UUV (Unmanned Underwater Vehicle), one of the fundamental tasks is to plan a 3D path that completes the mission using real-time sonar information on the terrain and obstacles. A simulation system is necessary to verify algorithms when researching and developing 3D path planning for a UUV, because 3D path planning for a UUV must consider guidance control, vehicle dynamics, the ocean environment, and search-sonar models on the basis of obstacle-avoidance techniques. The simulation system developed in this paper visualizes the UUV's movement as it avoids obstacles and arrives at the goal position via waypoints, using C++ and OpenGL. In addition, it enables the user to set up various underwater environments and obstacles through a user interface. It also provides a generalization that can verify UUV path-planning algorithms developed in any environment.

  • PDF
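The abstract above describes verifying a 3D obstacle-avoidance path planner in simulation; the paper's planner is not specified here, but a minimal stand-in that such a simulator could exercise is A* over a 3D occupancy grid:

```python
import heapq

def plan_3d(blocked, start, goal):
    """A* path planning over a 3D occupancy grid. `blocked` is a set of
    occupied (x, y, z) cells; moves are the six axis-aligned neighbors
    with unit cost. Returns the cell path from start to goal, or None.
    Illustrative sketch, not the planner verified in the paper."""
    def h(a):  # Manhattan distance: admissible heuristic for unit moves
        return sum(abs(a[i] - goal[i]) for i in range(3))
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y, z = cur
        for nxt in ((x+1,y,z), (x-1,y,z), (x,y+1,z),
                    (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            if nxt in blocked or g + 1 >= g_cost.get(nxt, float("inf")):
                continue
            g_cost[nxt] = g + 1
            heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None
```

With an obstacle at (1, 0, 0), a path from (0, 0, 0) to (2, 0, 0) detours through a neighboring row, which is exactly the kind of avoidance behavior the simulator visualizes.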

Obstacle Detection and Safe Landing Site Selection for Delivery Drones at Delivery Destinations without Prior Information (사전 정보가 없는 배송지에서 장애물 탐지 및 배송 드론의 안전 착륙 지점 선정 기법)

  • Min Chol Seo;Sang Ik Han
    • Journal of Auto-vehicle Safety Association
    • /
    • v.16 no.2
    • /
    • pp.20-26
    • /
    • 2024
  • Delivery using drones has been attracting attention because it can dramatically reduce the time from order to completion of delivery compared to the current delivery system, and pilot projects have been conducted for safe drone delivery. However, the current drone delivery system limits the operational efficiency offered by fully autonomous delivery drones in that drones mainly deliver goods to pre-set landing sites or delivery bases, and the final delivery is still made by humans. In this paper, to overcome these limitations, we propose an obstacle detection and landing-site selection algorithm based on a vision sensor that enables safe drone landing at the delivery location of the product orderer, and we experimentally demonstrate the possibility of station-to-door delivery. The proposed algorithm builds a 3D point-cloud map based on simultaneous localization and mapping (SLAM) technology and presents a grid segmentation technique, allowing drones to reliably find a landing site even in places without prior information. We aim to verify the performance of the proposed algorithm through streaming data received from the drone.
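The abstract above mentions a grid segmentation of the SLAM point cloud for landing-site selection without giving details; one plausible form of such a step is to bin the cloud into XY grid cells and keep cells whose height variation is small enough to land on. The sketch below is an illustrative stand-in with made-up thresholds, not the authors' exact algorithm.

```python
import math
from collections import defaultdict

def grid_landing_candidates(points, cell=1.0, max_std=0.05):
    """Segment a point cloud (iterable of (x, y, z)) into an XY grid and
    return the cells whose height standard deviation is below `max_std`,
    i.e. nearly flat patches that could serve as landing candidates.
    Maps cell index -> estimated ground height."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append(z)
    flat = {}
    for key, zs in cells.items():
        mean = sum(zs) / len(zs)
        std = math.sqrt(sum((z - mean) ** 2 for z in zs) / len(zs))
        if std <= max_std:
            flat[key] = mean
    return flat

# Hypothetical cloud: a flat patch near z = 0 and a bumpy patch next to it.
pts = [(0.1, 0.1, 0.00), (0.5, 0.4, 0.01), (0.9, 0.8, 0.02),
       (1.2, 0.3, 0.0), (1.5, 0.5, 0.8), (1.8, 0.7, 0.4)]
flat = grid_landing_candidates(pts)
```

Only the flat cell survives the threshold; a real system would additionally check the cell's size against the drone footprint and its clearance from detected obstacles.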

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.2
    • /
    • pp.124-130
    • /
    • 1998
  • High-speed 3D vision systems are essential for autonomous robot or vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented in a VME 150/40 Imaging Technology vision system. It is a modular system composed of a display card, an acquisition card, a four-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communications between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses. The VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs are used for edge detection, one for the right image and the other for the left, while the last processor computes the matching process and the 3D calculation. With 512×512-pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved by using specially suited multiprocessor cards.

  • PDF
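The 3D-map step in the abstract above triangulates matched edge points; for rectified stereo images this reduces to the standard depth-from-disparity relation Z = f·B/d. A minimal sketch, with parameter names and values that are illustrative rather than taken from the paper:

```python
def edge_to_3d(xl, xr, y, f, baseline, cx, cy):
    """Triangulate a matched edge point from rectified stereo images.
    (xl, y) and (xr, y) are pixel coordinates of the same edge in the
    left and right images; f is the focal length in pixels, `baseline`
    the camera separation in meters, (cx, cy) the principal point.
    Returns (X, Y, Z) in the left-camera frame."""
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("matched point must have positive disparity")
    Z = f * baseline / d             # depth from similar triangles
    X = (xl - cx) * Z / f            # back-project pixel to metric X, Y
    Y = (y - cy) * Z / f
    return X, Y, Z
```

For example, with f = 500 px, a 0.1 m baseline, a 512×512 principal point at (256, 256), and a 10-pixel disparity, the point lies at a depth of 5 m. Running this per matched edge pixel yields the dense edge-based 3D map the system outputs.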

Design and Implementation of Virtual Aquarium

  • Bak, Seon-Hui;Lee, Heeman
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.12
    • /
    • pp.43-49
    • /
    • 2016
  • This paper presents the design and implementation of a virtual aquarium that generates 3D models of fishes colored by viewers, with the aim of creating interaction between viewers and the aquarium. The virtual aquarium system is composed of multiple texture extraction modules, a single interface module, and a single display module. The texture extraction module recognizes the QR code on the canvas to obtain information from the predefined mapping table and then extracts the texture data for the corresponding 3D model. The scanned image is segmented and warp-transformed onto the texture image using the mapping information. The extracted texture is transferred to the interface module to be saved on the server computer, and the interface module sends the fish code and texture information to the display module. The display module generates a fish in the virtual aquarium using the predefined 3D model with the transmitted texture. The fishes in the virtual aquarium have three different swimming methods, which are discussed in this paper: self-swimming, autonomous swimming, and leader-following swimming. Future work will be the implementation of a virtual aquarium based on storytelling to further increase interaction with the viewer.

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon;Lim, Jongwoo;Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.5
    • /
    • pp.483-489
    • /
    • 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) in large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input. It successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
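Part of what makes the CPU-only real-time claim in the abstract above plausible is that matching binary descriptors reduces to Hamming distance, i.e. an XOR plus a popcount. A minimal brute-force matcher sketch (illustrative, not the paper's matcher; descriptors are packed into Python integers here):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed as ints
    (e.g. a 256-bit descriptor as a Python integer)."""
    return bin(a ^ b).count("1")

def match(query, train, max_dist=50):
    """Brute-force nearest-neighbor matching of binary descriptors.
    Returns (query_idx, train_idx) pairs whose Hamming distance is at
    most `max_dist`; the threshold rejects weak matches."""
    pairs = []
    for i, q in enumerate(query):
        j, d = min(((j, hamming(q, t)) for j, t in enumerate(train)),
                   key=lambda jd: jd[1])
        if d <= max_dist:
            pairs.append((i, j))
    return pairs
```

Production SLAM systems replace the brute-force scan with vocabulary trees or hashing, but the per-pair cost stays a handful of bit operations, in contrast to the floating-point distances needed for real-valued descriptors.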

Autonomous Surveillance-tracking System for Workers Monitoring (작업자 모니터링을 위한 자동 감시추적 시스템)

  • Ko, Jung-Hwan;Lee, Jung-Suk;An, Young-Hwan
    • 전자공학회논문지 IE
    • /
    • v.47 no.2
    • /
    • pp.38-46
    • /
    • 2010
  • In this paper, an autonomous surveillance and tracking system for monitoring workers based on a stereo vision scheme is proposed. By analyzing the characteristics of a cross-axis camera system through experiments, an optimized stereo vision system is constructed, and using this system an intelligent worker surveillance and tracking system is implemented, in which a target worker moving through the environment can be detected and tracked, and the worker's stereo location coordinates and moving trajectory in the world space can be extracted. Experiments on moving-target surveillance and tracking show that the tracked target-center location maintains very low error ratios of 1.82% and 1.11% on average in the horizontal and vertical directions, respectively. Moreover, the error ratio between the calculated and measured values of the 3D location coordinates of the target person is found to be a very low 2.5% on average for the test scenario. Accordingly, this paper suggests the practical feasibility of an intelligent stereo surveillance system for real-time tracking of a target worker moving through the environment and for robust detection of the target's 3D location coordinates and moving trajectory in the real world.

The Phase Space Analysis of 3D Vector Fields (3차원 벡터 필드의 위상 공간 분석)

  • Jung, Il-Hong;Kim, Yong Soo
    • Journal of Digital Contents Society
    • /
    • v.16 no.6
    • /
    • pp.909-916
    • /
    • 2015
  • This paper presents a method to display 3D vector fields by analyzing phase space. The method is based on the connections between ordinary differential equations and the topology of vector fields. The phase-space analysis is a geometric interpretation of an autonomous system of equations in phase space: every solution of the system of equations corresponds not to a curve in space, but to the motion of a point along that curve. This analysis is the basis of this paper. The new method requires decomposing each hexahedral cell into five or six tetrahedral cells for 3D vector fields. The critical points can then be easily found by solving a simple linear system for each tetrahedron. The tangent curves can be integrated by finding the intersection points of an integral curve, traced out by the general solution within each tetrahedron, with the planes containing the faces of the tetrahedron.
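The per-tetrahedron linear solve mentioned in the abstract above comes from the fact that a field linearly interpolated from the four vertex values vanishes where the barycentric combination of vertex vectors is zero, a 3×3 linear system. A sketch under that assumption (pure Python, Cramer's rule; not the paper's exact code):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def critical_point(p, v, eps=1e-12):
    """Critical point of the vector field linearly interpolated from
    values v[0..3] at tetrahedron vertices p[0..3]. Solves for the
    barycentric weights where the interpolated field vanishes and
    returns the point if it lies inside the tetrahedron, else None."""
    # System: v0 + sum_j w_j (v_{j+1} - v0) = 0, unknowns w_1..w_3.
    cols = [[v[k][r] - v[0][r] for k in (1, 2, 3)] for r in range(3)]
    rhs = [-v[0][r] for r in range(3)]
    d = det3(cols)
    if abs(d) < eps:
        return None  # degenerate: no isolated zero in this cell
    w = []
    for j in range(3):  # Cramer's rule: replace column j with rhs
        m = [row[:] for row in cols]
        for r in range(3):
            m[r][j] = rhs[r]
        w.append(det3(m) / d)
    w0 = 1.0 - sum(w)
    if w0 < -eps or any(wj < -eps for wj in w):
        return None  # the zero lies outside this tetrahedron
    return tuple(p[0][r] + sum(w[k] * (p[k + 1][r] - p[0][r])
                               for k in range(3)) for r in range(3))
```

For the field v(x) = x − c sampled at the vertices of the unit tetrahedron, the routine recovers c when c lies inside the cell and returns None otherwise, which is exactly the accept/reject test run per tetrahedron of the decomposed hexahedra.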

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing, which is one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the middle of the 1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous processing of surface reconstruction, has not been achieved yet. Object recognition requires a robust shape description of objects. However, most shape descriptors are designed for 2D image data. Therefore, such descriptors have to be extended to deal with 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point-cloud segmentation. The geometric characteristics of various roof types are well described, which will eventually serve as a basis for object modeling. The segmentation accuracy of the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
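The classical 2D chain code that the abstract above extends encodes a boundary as one of 8 directions between successive pixels; in 3D the natural analogue is one of the 26 neighbor directions between successive grid points. The sketch below illustrates that extension in spirit only; the paper's hierarchical scheme is not reproduced here.

```python
def chain_code_3d(points):
    """Encode a polyline over a 3D grid as a chain code: each step
    between successive points becomes the sign triple of the step
    vector, i.e. one of the 26 neighbor directions (8 in 2D)."""
    def sign(d):
        return (d > 0) - (d < 0)
    code = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        code.append((sign(x1 - x0), sign(y1 - y0), sign(z1 - z0)))
    return code

# Hypothetical roof ridge line: two level steps, then a descending step.
ridge = [(0, 0, 5), (1, 0, 5), (2, 0, 5), (3, 1, 4)]
codes = chain_code_3d(ridge)
```

Runs of identical codes correspond to planar or linear structure (e.g. a flat ridge), and code changes mark breaklines, which is the kind of geometric signature the paper exploits for describing roof types.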