• Title/Summary/Keyword: Image Navigation


Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min;Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.1
    • /
    • pp.53-62
    • /
    • 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (Unmanned Aerial System) fitted with a low-cost compact navigation device and camera has attracted great attention for its fast and accurate acquisition of geospatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry because it can produce DEMs (Digital Elevation Models) and orthophotos rapidly, owing to the large volume of high-resolution images collected by a low-cost camera and processed by software combining computer vision techniques. With these advantages, UAV photogrammetry has been applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents the results of an accuracy test that used 4 cm GSD images from a fixed-wing UAS to demarcate parcel boundaries in an agricultural area. The boundary points extracted from the UAS orthoimage deviated by less than 8 cm from those of terrestrial cadastral surveying, which means the UAV images satisfy the tolerance limit of distance error in cadastral surveying at a scale of 1:500. The area deviation is also negligibly small, about 0.2% (3.3 m²), against the true area of 1,969 m² determined by cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
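
The reported area deviation can be checked with simple arithmetic; this sketch uses only the figures quoted in the abstract (1,969 m² true area, 3.3 m² deviation):

```python
# Check the parcel-area deviation reported for the UAV orthoimage result.
true_area = 1969.0   # m^2, from terrestrial cadastral surveying
deviation = 3.3      # m^2, difference reported for the UAV-derived area

percent_deviation = deviation / true_area * 100
print(f"Area deviation: {percent_deviation:.2f}%")  # ~0.17%, i.e. about 0.2%
```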

Development and Usability Testing of a User-Centered 3D Virtual Liver Surgery Planning System

  • Yang, Xiaopeng;Yu, Hee Chul;Choi, Younggeun;Yang, Jae Do;Cho, Baik Hwan;You, Heecheon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.37-52
    • /
    • 2017
  • Objective: The present study developed a user-centered 3D virtual liver surgery planning (VLSP) system called Dr. Liver to provide preoperative information for safe and rational surgery. Background: Preoperative 3D VLSP is needed for patient safety in liver surgery. Existing systems either do not provide functions specialized for liver surgery planning or do not allow cross-checking the accuracy of analysis results. Method: Use scenarios of Dr. Liver were developed through literature review, benchmarking, and interviews with surgeons. User interfaces of Dr. Liver with various user-friendly features (e.g., a context-sensitive hotkey menu and a 3D view navigation box) were designed. Novel image processing algorithms (e.g., a hybrid semi-automatic algorithm for liver extraction and a customized region-growing algorithm for vessel extraction) were developed for accurate and efficient liver surgery planning. Usability problems of a preliminary version of Dr. Liver were identified by surgeons and system developers, and design changes were made to resolve them. Results: Usability testing showed that the revised version of Dr. Liver achieved a high level of satisfaction (6.1 ± 0.8 out of 7) and acceptable time efficiency (26.7 ± 0.9 min) in liver surgery planning. Conclusion: Involving usability testing in the system development process from the beginning helps identify potential usability problems early, shortening development time and cost. Application: The development and evaluation process of Dr. Liver in this study can serve as a reference for designing a user-centered system.

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.30-32
    • /
    • 2004
  • In 3D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. The former makes it difficult to set up reference points, while the latter has low accuracy. In this paper we present a camera calibration algorithm that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance of the object, which also makes it possible to estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera, rotating the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error caused by the scale factor or different line widths, and experimentally found an average scale factor that yields the least distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one step further, from static environments to real-world applications such as autonomous land vehicles.
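
The abstract does not spell out the perspective-ratio computation, but the distance estimate it describes rests on the standard pinhole relation between a line's real width and its imaged width. The following sketch shows that underlying relation only; the function name and all numeric values are illustrative, not taken from the paper:

```python
# Pinhole-camera relation behind distance estimation from a known line width:
#   pixel_width = focal_px * real_width / distance
# so the distance follows as focal_px * real_width / pixel_width.
def estimate_distance(focal_px, real_width_m, pixel_width):
    """Estimate object distance from the imaged width of a line of known size."""
    return focal_px * real_width_m / pixel_width

# Illustrative values: a 0.05 m wide grid line imaged 25 px wide
# by a camera whose focal length corresponds to 800 px.
d = estimate_distance(focal_px=800.0, real_width_m=0.05, pixel_width=25.0)
print(f"Estimated distance: {d:.2f} m")  # 1.60 m
```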


The Integration of Segmentation Based Environment Models from Multiple Images (다중 영상으로부터 생성된 분할 기반 환경 모델들의 통합)

  • 류승택;윤경현
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.7
    • /
    • pp.1286-1301
    • /
    • 2003
  • This paper introduces a segmentation-based environment modeling method and an integration method that use multiple environment maps to construct a real-time image-based panoramic navigation system. The segmentation-based method is easy to implement on an environment map and can extract depth values by segmenting the map. However, an environment model constructed from a single environment map suffers from a blurring effect caused by the fixed resolution, and from a stretching effect in the 3D model when information missing from the map is needed due to occlusion. To resolve these problems, we suggest integrating environment models built from multiple environment maps. This method can express the parallax effect and extends the environment model to cover a wide range of the environment. The segmentation-based environment modeling method using multiple environment maps can build a detailed model with optimal resolution.


Automatic identification of ARPA radar tracking vessels by CCTV camera system (CCTV 카메라 시스템에 의한 ARPA 레이더 추적선박의 자동식별)

  • Lee, Dae-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.45 no.3
    • /
    • pp.177-187
    • /
    • 2009
  • This paper describes an automatic video surveillance system (AVSS) with long range and 360° coverage that is automatically rotated in an elevation-over-azimuth mode in response to the TTM (tracked target message) signals of vessels tracked by ARPA (automatic radar plotting aids) radar. This AVSS is a video security and tracking system, supported by ARPA radar, a CCTV (closed-circuit television) camera system, and other sensors, that automatically identifies and tracks vessels and detects potentially dangerous situations such as collisions at sea and berthing/deberthing accidents in harbor. It can be used to monitor illegal fishing vessels in inshore and offshore fishing grounds and to further improve the security and safety of domestic fishing vessels in the EEZ (exclusive economic zone). The movement of a target vessel chosen by the ARPA radar operator can be automatically tracked by a CCTV camera system interfaced to the ECDIS (electronic chart display and information system), with special functions such as graphic presentation of the CCTV image, camera position, camera azimuth, and angle of view on the ENC; automatic and manual control of the pan and tilt angles; and the capability to continuously record and replay all information on a selected target. The test results showed that the AVSS developed experimentally in this study can serve as an extra navigation aid for the bridge operator in confusing traffic situations, improve the detection efficiency of small targets in sea clutter, greatly enhance an operator's ability to visually identify vessels tracked by ARPA radar, and provide a recorded history for reference or evidentiary purposes in the EEZ.
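
The abstract does not give the pointing computation, but steering a pan/tilt camera toward a radar-tracked target reduces to simple geometry once the target's position relative to the camera is known. This is a minimal sketch of that standard geometry; the function name, coordinate convention, and values are illustrative, not the paper's interface:

```python
import math

# Point a pan/tilt CCTV camera at a target reported by ARPA radar (TTM message).
# Positions are local east/north offsets in metres from the camera; the camera
# sits camera_height_m metres above the water line. Names are illustrative.
def pan_tilt_to_target(east_m, north_m, camera_height_m):
    azimuth = math.degrees(math.atan2(east_m, north_m)) % 360.0  # clockwise from north
    horizontal_range = math.hypot(east_m, north_m)
    tilt = -math.degrees(math.atan2(camera_height_m, horizontal_range))  # depression
    return azimuth, tilt

# A target 1 km east and 1 km north, camera mounted 20 m above the water.
az, tilt = pan_tilt_to_target(east_m=1000.0, north_m=1000.0, camera_height_m=20.0)
print(f"azimuth {az:.1f} deg, tilt {tilt:.2f} deg")  # azimuth 45.0 deg
```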

Performance Comparison of Wave Information Retrieval Algorithms Based on 3D Image Analysis Using VTS Sensor (VTS 센서를 이용한 3D영상 분석에 기초한 파랑 정보 추출 알고리즘 성능 비교)

  • Ryu, Joong-seon;Lim, Dong-hee;Kim, Jin-soo;Lee, Byung-Gil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.3
    • /
    • pp.519-526
    • /
    • 2016
  • As marine accidents happen frequently, a marine traffic monitoring system is required to improve the safety and efficiency of navigation in VTS (Vessel Traffic Service). To this end, X-band marine radar has recently been used to extract sea-surface information, and it is necessary to retrieve wave information correctly to support the safe and efficient movement of vessel traffic within the VTS area. In this paper, three different current estimation algorithms, the classical least-squares (LS) fitting, a modified iterative least-squares fitting routine, and a normalized scalar product over variable current velocities, are compared with buoy data; the iterative least-squares method is then modified to estimate wave information by improving the initial current velocity. Simulations with radar signals show that the proposed method retrieves wave information more effectively than the conventional methods.
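
The classical least-squares step these algorithms build on fits the Doppler-shifted linear-wave dispersion relation to spectral points extracted from the radar image sequence. The sketch below shows that LS fit on synthetic data; the depth, noise level, and variable names are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Classical least-squares current estimation: fit the Doppler-shifted dispersion
# relation  omega = sqrt(g*k*tanh(k*d)) + kx*Ux + ky*Uy  to (k, omega) samples
# picked from the 3D spectrum of a radar image sequence. Synthetic data below.
g, depth = 9.81, 50.0
rng = np.random.default_rng(0)
true_U = np.array([0.8, -0.3])  # m/s, surface current to recover

k = rng.uniform(0.02, 0.2, 100)              # wavenumber magnitudes (rad/m)
theta = rng.uniform(0, 2 * np.pi, 100)       # wave propagation directions
kx, ky = k * np.cos(theta), k * np.sin(theta)
omega0 = np.sqrt(g * k * np.tanh(k * depth)) # intrinsic (no-current) frequency
omega = omega0 + kx * true_U[0] + ky * true_U[1] + rng.normal(0, 1e-3, 100)

# Linear least squares for (Ux, Uy): omega - omega0 = [kx ky] @ U
A = np.column_stack([kx, ky])
U_hat, *_ = np.linalg.lstsq(A, omega - omega0, rcond=None)
print(U_hat)  # close to [0.8, -0.3]
```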

Obstacle Recognition by 3D Feature Extraction for Mobile Robot Navigation in an Indoor Environment (복도환경에서의 이동로봇 주행을 위한 3차원 특징추출을 통한 장애물 인식)

  • Jin, Tae-Seok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.9
    • /
    • pp.1987-1992
    • /
    • 2010
  • This paper deals with a method that uses three-dimensional feature information to classify the environment ahead while travelling, based on images captured by a CCD camera mounted on a mobile robot. The regions detected from the three-dimensional feature information are divided into obstacles, corners, and doorways in a corridor. In designing the travelling path of a mobile robot, these three situations provide important information for obstacle avoidance and optimal path computation. This paper therefore proposes a method for deciding the travelling direction of a mobile robot from preprocessed input images using the suggested algorithm, and verifies through neural network analysis the validity of the image information detected as obstacles.

Research of Remote Inspection Method for River Bridge using Sonar and visual system (수중초음파와 광학영상의 하이브리드 시스템을 이용한 교각 수중부 원격점검 기법 연구)

  • Jung, Ju-Yeong;Yoon, Hyuk-Jin;Cho, Hyun-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.5
    • /
    • pp.330-335
    • /
    • 2017
  • This study applied SONAR (Sound Navigation And Ranging) to the inspection and evaluation of underwater structures. An actual river bridge was chosen for inspection and evaluation. SONAR and an optical camera were operated together to analyze underwater images of the bridge. SONAR images were obtained by various methods to remove environmental variables from the field experiment, and it was confirmed that the reliability of detecting damaged areas on piers decreases when SONAR is used alone. The SONAR equipment and the optical camera can be used simultaneously to overcome the limitations of SONAR in inspecting underwater structures. These results can serve as basic data for developing similar technologies for underwater structure inspection.

Technique of Sea-fog Removal base on GPU (GPU 기반의 해무제거 기술)

  • Choi, Woonsik;Ha, Jun;Youn, Woosang;Kwak, Jaemin;Choi, Hyunjun
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.576-578
    • /
    • 2015
  • This paper proposes a sea-fog removal algorithm that helps coastal vessels secure a clear view and navigate safely. Interest in marine accidents and vessel safety has increased since the recent Sewol ferry disaster. Statistics on coastal shipping show that collisions between ships occurring when sea fog prevents a clear view account for a high percentage of marine accidents. A number of studies exist on removing sea fog from images, but such algorithms involve a large amount of computation. In this paper, we improve the computational speed of sea-fog removal with a GPU-based technique suited to real-time video; using the GPU, we succeeded in accelerating the processing 250 times.
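
The abstract does not name the dehazing algorithm it accelerates, so as an assumption the sketch below uses the widely used dark channel prior, a common basis for sea-fog and haze removal. This is a minimal CPU version in NumPy; on a GPU, the per-patch minimum and per-pixel recovery steps map naturally to parallel kernels:

```python
import numpy as np

# Sea-fog (haze) removal sketch using the dark channel prior -- an assumption,
# since the paper does not name its exact algorithm. All names are illustrative.
def dark_channel(img, patch=7):
    """Per-pixel minimum over the RGB channels and a patch neighbourhood."""
    h, w, _ = img.shape
    channel_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(channel_min, pad, mode="edge")
    out = np.empty_like(channel_min)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, omega=0.95, t_min=0.1):
    dc = dark_channel(img)
    A = img.reshape(-1, 3)[dc.argmax()]           # atmospheric light estimate
    transmission = 1.0 - omega * dark_channel(img / A.clip(min=1e-6))
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)   # scene radiance recovery

# A small synthetic "washed-out" frame standing in for a foggy video frame.
foggy = np.clip(np.random.rand(32, 32, 3) * 0.3 + 0.6, 0.0, 1.0)
clear = dehaze(foggy)
print(clear.shape)  # (32, 32, 3)
```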


Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous because the sensors complement and cooperate with each other to obtain better information on the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured-light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization with monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the camera images, which serve as natural landmark points. With the laser structured-light sensor, it utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
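
At its core, Bayesian fusion of two Gaussian sensor estimates weights each measurement by its reliability, i.e. by the inverse of its variance. The sketch below shows that principle for a single coordinate; the paper's predefined reliability functions are richer than a fixed variance, and all names and numbers here are illustrative:

```python
# Minimal Gaussian Bayesian fusion of two position estimates, weighting each
# sensor by its (predefined) reliability expressed as a measurement variance.
def fuse(z_vision, var_vision, z_laser, var_laser):
    """Inverse-variance-weighted fusion of two estimates of the same quantity."""
    w_v = 1.0 / var_vision
    w_l = 1.0 / var_laser
    fused = (w_v * z_vision + w_l * z_laser) / (w_v + w_l)
    fused_var = 1.0 / (w_v + w_l)   # fused estimate is more certain than either
    return fused, fused_var

# Vision says x = 2.10 m (variance 0.04); laser says x = 2.00 m (variance 0.01).
x, var = fuse(2.10, 0.04, 2.00, 0.01)
print(f"x = {x:.3f} m, var = {var:.4f}")  # pulled toward the more reliable laser
```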