• Title/Summary/Keyword: 3D robot vision

Search Results: 138

Robotic Surgery in Cancer Care: Opportunities and Challenges

  • Mohammadzadeh, Niloofar;Safdari, Reza
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.3
    • /
    • pp.1081-1083
    • /
    • 2014
  • Malignancy-associated mortality, decreased productivity, and the spiritual, social, and physical burden on cancer patients and their families impose heavy costs on communities. Therefore, cancer prevention, early detection, rapid diagnosis, and timely treatment are very important. Use of modern methods based on information technology in cancer care can improve patient survival and increase patient and health care provider satisfaction. Robot technology is used in different areas of health care, and applications in surgery have emerged that affect the cancer treatment domain. Computerized and robotic devices can offer enhanced dexterity through tremor abolition, motion scaling, and high-quality 3D vision for surgeons, as well as decreased blood loss, significantly reduced narcotic use, and shorter hospital stays for patients. However, there are many challenges, such as lack of surgical community support, large device size, high costs, and the absence of tactile and haptic feedback. A comprehensive view that identifies all factors, including technical, legal, and ethical issues, preventing the adoption of robotic surgery is thus necessary. Evidence must also be presented to surgeons to achieve appropriate support from physicians. The aim of this review article is to survey the applications, opportunities, and barriers of this advanced technology for patients and surgeons as an approach to improving cancer care.

Development of Automatic Recognition and Spray Control System for Reducing the Amount of Marine Coating paint (선박용 피도물 도료 사용량 절감을 위한 인식 및 스프레이 자동제어시스템 개발)

  • Jung, Young-Deuk
    • Journal of the Korea Safety Management & Science
    • /
    • v.21 no.3
    • /
    • pp.23-27
    • /
    • 2019
  • The first aim of this study is to improve productivity by making the coating thickness uniform and reducing quality-inspection time. The second aim is to cut down on raw coating materials by preventing spray from being wasted into the air during the painting process, using real-time monitoring that recognizes the coating area and fits the spray to the target components. To achieve these two aims, a simplified automatic control system for recognizing marine workpieces and controlling the spray was developed. The following effects are expected from the system: First, quality will improve because the film thickness is made uniform. Second, waste of coating paint will be reduced by configuring the coating speed, the spray-gun robot transfer time, and the corresponding database entries according to the size of the vessel. Third, since painting is a so-called 3D (difficult, dirty, dangerous) industry, the system will help relieve labor-supply difficulties and save labor costs. In the future, further research will be needed to apply the system to various products, with a database design that assigns the variable values added for each type of piece by comparing the differences between various types of workpieces and linear ones.
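
Both the recognition result and the spray settings are tied together through a parameter database keyed by workpiece size. The snippet below is a minimal sketch, not the authors' implementation: the size classes, field names (`gun_speed_mm_s`, `transfer_time_s`), and threshold values are hypothetical, and in practice the recognized dimensions would come from the vision system rather than being hard-coded.

```python
# Minimal sketch of selecting spray parameters from a size-keyed DB.
# All classes, field names, and values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class SprayParams:
    gun_speed_mm_s: float   # spray-gun travel speed
    transfer_time_s: float  # robot transfer time between passes
    passes: int             # number of coating passes

# Hypothetical parameter DB keyed by recognized workpiece size class.
PARAM_DB = {
    "small":  SprayParams(gun_speed_mm_s=300.0, transfer_time_s=1.0, passes=2),
    "medium": SprayParams(gun_speed_mm_s=250.0, transfer_time_s=1.5, passes=3),
    "large":  SprayParams(gun_speed_mm_s=200.0, transfer_time_s=2.0, passes=4),
}

def classify_size(width_mm: float, height_mm: float) -> str:
    """Map a recognized bounding box (from the vision system) to a size class."""
    area = width_mm * height_mm
    if area < 0.5e6:
        return "small"
    if area < 2.0e6:
        return "medium"
    return "large"

if __name__ == "__main__":
    # In the real system the dimensions would come from the recognition step.
    params = PARAM_DB[classify_size(width_mm=1200, height_mm=800)]
    print(params)
```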

A Relative Depth Estimation Algorithm Using Focus Measure (초점정보를 이용한 패턴간의 상대적 깊이 추정알고리즘 개발)

  • Jeong, Ji-Seok;Lee, Dae-Jong;Shin, Yong-Nyuo;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.527-532
    • /
    • 2013
  • Depth estimation is an essential factor in robot vision, 3D scene modeling, and motion control. Depth-from-focus estimation is based on focus values calculated over a series of images taken by a single camera at different distances between the lens and the object. In this paper, we propose a relative depth estimation method using a focus measure. The proposed method computes a focus value for each image obtained at a different lens position and then estimates depth by considering the relative distance between two patterns. We performed various experiments on effective focus measures for depth estimation using various patterns and verified their usefulness.
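
As a concrete illustration of the depth-from-focus idea, the sketch below computes a simple focus measure (variance of the Laplacian) for each image in a focal stack and reports, for two image regions ("patterns"), which lens position maximizes focus; the region whose focus peaks at the nearer lens position is the closer one. This is an assumed minimal example, not the paper's algorithm; the focus measure, region coordinates, and OpenCV usage are illustrative.

```python
# Minimal depth-from-focus sketch: pick, for each region, the lens position
# with the highest focus measure (variance of Laplacian).  Illustrative only.
import numpy as np
import cv2

def focus_measure(gray: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper (better focused)."""
    return float(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F).var())

def best_focus_index(stack, region):
    """Return the index of the stack image in which `region` is sharpest.

    stack  : list of grayscale images taken at increasing lens positions
    region : (x, y, w, h) window covering one pattern
    """
    x, y, w, h = region
    scores = [focus_measure(img[y:y + h, x:x + w]) for img in stack]
    return int(np.argmax(scores))

if __name__ == "__main__":
    # Synthetic stand-in for a focal stack (random noise images).
    rng = np.random.default_rng(0)
    stack = [rng.integers(0, 256, (240, 320), dtype=np.uint8) for _ in range(10)]
    idx_a = best_focus_index(stack, region=(40, 40, 60, 60))    # pattern A
    idx_b = best_focus_index(stack, region=(200, 120, 60, 60))  # pattern B
    # Relative depth: the pattern focused at the nearer lens position is closer.
    print("pattern A best index:", idx_a, "pattern B best index:", idx_b)
```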

Design and Implementation of Real-time Digital Twin in Heterogeneous Robots using OPC UA (OPC UA를 활용한 이기종 로봇의 실시간 디지털 트윈 설계 및 구현)

  • Jeehyeong Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.4
    • /
    • pp.189-196
    • /
    • 2023
  • As the manufacturing paradigm shifts, various collaborative robots are creating new markets. Demand for collaborative robots is increasing across all industries because, compared to existing industrial robots, they are easier to operate, improve productivity, and can replace workers performing simple tasks. However, accidents caused by collaborative robots occur frequently at industrial sites, threatening the safety of workers. To build human-centered industrial sites that use robots, worker safety must be guaranteed, and a collaborative robot guard system that provides reliable, uninterrupted communication is needed. Accidents occurring within the working radius of cobots must be prevented redundantly, and the risk of safety accidents must be reduced through sensors and computer vision. We build a system based on OPC UA, an international protocol for communicating with various types of industrial equipment, and propose a collaborative robot guard system that combines ultrasonic sensors with image analysis using a CNN (Convolutional Neural Network). The proposed system evaluates whether the robot can be controlled when a worker is in an unsafe situation.
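
To make the guard-system idea concrete, the sketch below polls a distance value over OPC UA with the python-opcua client and combines it with a stubbed CNN decision to decide whether the robot should be paused. The endpoint URL, node IDs, threshold, and the `person_detected` stub are assumptions for illustration; the paper's actual OPC UA information model and CNN are not reproduced here.

```python
# Minimal sketch of a cobot guard loop over OPC UA (python-opcua client).
# Endpoint, node IDs, threshold, and the CNN stub are hypothetical.
import time
import numpy as np
from opcua import Client

SAFE_DISTANCE_M = 0.8  # assumed ultrasonic safety threshold

def person_detected(frame: np.ndarray) -> bool:
    """Stub for the CNN-based image check.

    In the real system a trained convolutional network would classify the
    camera frame; here we simply return False so the sketch runs standalone.
    """
    return False

def guard_loop(endpoint: str = "opc.tcp://localhost:4840") -> None:
    client = Client(endpoint)
    client.connect()
    try:
        # Hypothetical node IDs exposed by the robot controller.
        distance_node = client.get_node("ns=2;s=Ultrasonic.Distance")
        stop_node = client.get_node("ns=2;s=Robot.SafetyStop")
        while True:
            distance = float(distance_node.get_value())
            frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder frame
            unsafe = distance < SAFE_DISTANCE_M or person_detected(frame)
            stop_node.set_value(bool(unsafe))  # pause the robot if unsafe
            time.sleep(0.1)
    finally:
        client.disconnect()

if __name__ == "__main__":
    guard_loop()
```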

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.836-850
    • /
    • 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, so many researchers in pattern recognition and computer vision have recently been working on efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the first step of a general posture recognition system is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents feature extraction methods based on the 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted during the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton and recognize the human posture using it, showing that the proposed method is robust to human variations.
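
The PCA step can be illustrated with a small numerical sketch: skeleton feature vectors are stacked into a matrix, the principal axes are found from the centered data, and each vector is projected onto the leading components. The snippet is a generic PCA example with random stand-in data, assuming that the eight projection maps contribute a fixed-length star-skeleton descriptor; it is not the authors' exact feature pipeline.

```python
# Generic PCA sketch for skeleton descriptors (stand-in random data).
import numpy as np

def pca_project(features: np.ndarray, n_components: int) -> np.ndarray:
    """Project row-vector features onto their top principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data gives the principal axes in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Assume 8 projection maps x 5 star-skeleton distances = 40-D descriptor,
    # collected for 100 posture samples (values here are random placeholders).
    descriptors = rng.normal(size=(100, 40))
    reduced = pca_project(descriptors, n_components=10)
    print(reduced.shape)  # (100, 10)
```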

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.555-563
    • /
    • 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multi-stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy this requirement, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system using three pairs of camera positions obtained from GPS and V-SLAM, respectively. The proposed method is composed of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by three camera positions, parallel; 2) transform the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third of the transformed positions onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion transforms directly from the local camera coordinate system to the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from the object to the cameras is proposed. This paper demonstrates the proposed method through a simulation and an experiment.
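
The four-step rotation estimate can be sketched with plain quaternion algebra: one quaternion aligns the normal of the V-SLAM plane with the normal of the GPS plane, a second quaternion rotates about that shared normal to line up one of the remaining points, and the product of the two is the final rotation. The code below is a simplified numpy illustration under those assumptions, not the paper's implementation; it uses the standard shortest-arc quaternion between two vectors and synthetic positions that already share the same origin.

```python
# Simplified sketch of the two-quaternion rotation estimate (numpy only).
import numpy as np

def quat_between(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Shortest-arc unit quaternion (w, x, y, z) rotating vector a onto b."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    w = 1.0 + float(a @ b)
    xyz = np.cross(a, b)
    q = np.array([w, *xyz])
    return q / np.linalg.norm(q)

def quat_mul(q1: np.ndarray, q2: np.ndarray) -> np.ndarray:
    """Hamilton product q1 * q2 (apply q2 first, then q1)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q."""
    w, x, y, z = q
    r = np.array([
        [1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
        [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
        [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)],
    ])
    return r @ v

if __name__ == "__main__":
    # Three camera positions from V-SLAM and the corresponding GPS positions
    # (synthetic values sharing the same origin for simplicity).
    slam = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]], dtype=float)
    gps = np.array([[0, 0, 0], [0, 1, 0], [-1, 1, 0]], dtype=float)

    # Step 1: quaternion making the two plane normals parallel.
    n_slam = np.cross(slam[1] - slam[0], slam[2] - slam[0])
    n_gps = np.cross(gps[1] - gps[0], gps[2] - gps[0])
    q1 = quat_between(n_slam, n_gps)

    # Step 2: transform the V-SLAM positions with q1.
    moved = np.array([rotate(q1, p) for p in slam])

    # Step 3: additional quaternion mapping the second transformed point
    # onto the second GPS point (an in-plane rotation about the normal).
    q2 = quat_between(moved[1] - moved[0], gps[1] - gps[0])

    # Step 4: final rotation is the product of the two quaternions.
    q_final = quat_mul(q2, q1)
    print(np.round([rotate(q_final, p) for p in slam], 3))  # matches gps
```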

Boundary Depth Estimation Using Hough Transform and Focus Measure (허프 변환과 초점정보를 이용한 경계면 깊이 추정)

  • Kwon, Dae-Sun;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.1
    • /
    • pp.78-84
    • /
    • 2015
  • Depth estimation is often required for robot vision, 3D modeling, and motion control. Previous methods are based on focus measures calculated for a series of images taken by a single camera at different distances between the lens and the object. These methods, however, have the disadvantage of taking a long time to calculate the focus measure, since a mask operation is performed for every pixel in the image. In this paper, we estimate depth using the focus measure of only the boundary pixels located between objects in order to minimize the depth estimation time. To detect the boundary of an object consisting of straight lines and circles, we use the Hough transform and then estimate depth using the focus measure at those boundaries. We performed various experiments on PCB images and obtained more effective depth estimation results than previous methods.
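
To illustrate restricting the focus computation to boundary pixels, the sketch below detects straight boundary segments with OpenCV's probabilistic Hough transform and evaluates a Laplacian-based focus measure only in small windows sampled along those segments. The parameter values and window size are assumptions for illustration; the paper's exact masks and circle handling are not reproduced.

```python
# Sketch: focus measure evaluated only along Hough-detected boundary lines.
# Parameter values are illustrative, not taken from the paper.
import numpy as np
import cv2

def focus_measure(patch: np.ndarray) -> float:
    """Variance of the Laplacian over a small window."""
    return float(cv2.Laplacian(patch.astype(np.float64), cv2.CV_64F).var())

def boundary_focus(gray: np.ndarray, win: int = 7) -> float:
    """Average focus measure over windows centered on detected line pixels."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return 0.0
    h, w = gray.shape
    scores = []
    for x1, y1, x2, y2 in lines[:, 0]:
        # Sample a few points along each segment instead of every pixel.
        for t in np.linspace(0.0, 1.0, 5):
            cx, cy = int(x1 + t * (x2 - x1)), int(y1 + t * (y2 - y1))
            x0, y0 = max(cx - win, 0), max(cy - win, 0)
            patch = gray[y0:min(cy + win, h), x0:min(cx + win, w)]
            if patch.size:
                scores.append(focus_measure(patch))
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    # Synthetic test image with one bright rectangle as a stand-in for a PCB.
    img = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(img, (80, 60), (240, 180), 255, thickness=2)
    print("boundary focus score:", boundary_focus(img))
```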

Design of Multi-Sensor-Based Open Architecture Integrated Navigation System for Localization of UGV

  • Choi, Ji-Hoon;Oh, Sang Heon;Kim, Hyo Seok;Lee, Yong Woo
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.1 no.1
    • /
    • pp.35-43
    • /
    • 2012
  • The UGV is a special-purpose field robot developed for mine detection, surveillance, and transportation. To accomplish the UGV's missions successfully, accurate and reliable navigation data must be provided. This paper presents the design and implementation of a multi-sensor-based, open-architecture integrated navigation system for localization of the UGV. The presented architecture hierarchically organizes the integrated system into four layers, and data communication between layers is based on distributed object-oriented middleware. The navigation manager determines the navigation mode from the QoS information of each navigation sensor, and the integrated filter performs navigation-mode-based data fusion in the filtering process. All navigation variables, including the filter parameters and the QoS of the navigation data, can also be modified in the GUI, so the user can operate the integrated navigation system more flexibly. A conventional GPS/INS integrated system does not guarantee long-term localization reliability when a GPS solution is unavailable due to signal blockage or intentional jamming in outdoor environments. The presented integration algorithm, however, based on an adaptive federated filter structure with an FDI (fault detection and isolation) algorithm, can effectively integrate the outputs of multiple sensors such as 3D LADAR, vision, odometer, magnetic compass, and zero-velocity updates to enhance localization accuracy when GPS is unavailable. A field test was carried out with the UGV, and the results show that the presented integrated navigation system provides more robust and accurate localization performance than the conventional GPS/INS integrated system in outdoor environments.
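
The core of a federated filter's master fusion step can be written in a few lines: each local filter (GPS/INS, LADAR, vision, odometer, ...) supplies a state estimate and covariance, and the master combines them by information-weighted averaging, so sensors flagged as faulty by FDI or with poor QoS can simply be left out of the sum. The snippet below is a generic sketch of that fusion rule with made-up numbers, not the paper's filter design.

```python
# Generic federated-filter fusion sketch: information-weighted combination
# of local filter estimates.  States and covariances below are made up.
import numpy as np

def fuse(estimates, covariances):
    """Fuse local estimates x_i with covariances P_i.

    P_f = (sum_i P_i^-1)^-1,  x_f = P_f * sum_i (P_i^-1 x_i)
    """
    info = sum(np.linalg.inv(P) for P in covariances)
    info_state = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    P_f = np.linalg.inv(info)
    return P_f @ info_state, P_f

if __name__ == "__main__":
    # Two local filters estimating a 2-D position (e.g. GPS/INS and LADAR/vision).
    x_gpsins = np.array([10.2, 5.1]);  P_gpsins = np.diag([4.0, 4.0])
    x_ladar  = np.array([10.0, 5.4]);  P_ladar  = np.diag([1.0, 1.0])
    x_fused, P_fused = fuse([x_gpsins, x_ladar], [P_gpsins, P_ladar])
    print("fused position:", np.round(x_fused, 3))
    print("fused covariance diag:", np.round(np.diag(P_fused), 3))
```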