• Title/Summary/Keyword: Image Navigation

704 search results

Implementation and Performance Analysis of High-availability System for Mission Computer (임무컴퓨터를 위한 고가용 시스템의 구현 및 성능분석)

  • Jeong, Jae-Yeop;Park, Seong-Jong;Lim, Jae-Seok;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.8 no.8 / pp.47-56 / 2008
  • The MC (Mission Computer) performs important functions in an avionics system, such as tactical data processing, image processing, and navigation system management. In general, a fault at a SPOF (Single Point Of Failure) in a unitary system can lead to failure of the whole system, which can cause the mission to fail and can even threaten the life of the pilot. In this paper, we therefore design an HA (High-Availability) system to deal with such failures, and we use HA software such as Heartbeat, Fake, DRBD, and Bonding to manage it. We also analyze the performance of the HA system using FDT (Fault Detection Time) for fast fault detection and MTTR (Mean Time To Repair) for mission continuity.
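
A rough illustration of the two metrics analyzed in this entry; this is a minimal Python sketch under assumed timing values, not the authors' implementation or the Heartbeat software itself.

```python
def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Classic availability estimate A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


def worst_case_fdt(keepalive_s: float, deadtime_s: float) -> float:
    """Worst-case fault detection time for a keepalive-style monitor:
    the peer is declared dead only after 'deadtime' passes without a
    keepalive, and the last keepalive may have just been missed."""
    return deadtime_s + keepalive_s


if __name__ == "__main__":
    # Illustrative numbers only, not figures from the paper.
    print(f"availability = {steady_state_availability(1000.0, 0.05):.6f}")
    print(f"worst-case FDT = {worst_case_fdt(1.0, 5.0):.1f} s")
```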

The Vessels Traffic Measurement and Real-time Track Assessment using Computer Vision (컴퓨터 비젼을 이용한 선박 교통량 측정 및 항적 평가)

  • Joo, Ki-Se;Jeong, Jung-Sik;Kim, Chol-Seong;Jeong, Jae-Yong
    • Journal of the Korean Society of Marine Environment & Safety / v.17 no.2 / pp.131-136 / 2011
  • Track calculation and traffic measurement of sailing ships using computer vision are useful for preventing maritime accidents, since they allow the possibility of an accident to be predicted in advance. In this paper, sailing ships are recognized using image erosion, a differential operator, and min/max values, and the results can be verified directly because the computed coordinates are displayed on an electronic navigation chart. The developed algorithm is based on area information, which gives it an advantage over conventional radar systems that rely on point information.
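
The detection steps named above (image erosion and a differential operator) suggest a classical pipeline; the OpenCV sketch below is a hedged approximation of such a pipeline (erosion, Sobel gradients, Otsu thresholding, contour filtering), not the authors' algorithm, and the `min_area` value is an assumption.

```python
import cv2
import numpy as np


def detect_vessels(frame_bgr: np.ndarray, min_area: float = 200.0):
    """Rough vessel-blob detector: erosion suppresses wave clutter, a gradient
    (differential) operator emphasizes hull edges, and contour extraction
    yields candidate ship regions as bounding boxes in image coordinates."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eroded = cv2.erode(gray, np.ones((3, 3), np.uint8), iterations=1)
    gx = cv2.Sobel(eroded, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(eroded, cv2.CV_32F, 0, 1)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, mask = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Mapping the returned image coordinates onto chart coordinates (for display on the electronic navigation chart) would additionally require the camera's pose and is not shown here.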

Virtual City System Based on 3D-Web GIS for U-City Construction (U-City 구현을 위한 3D-Web GIS 기반의 가상도시 시스템)

  • Jo, Byung-Wan;Lee, Yun-Sung;Yoon, Kwang-Won;Park, Jung-Hun
    • Journal of the Computational Structural Engineering Institute of Korea / v.25 no.5 / pp.389-395 / 2012
  • The U-City has been promoted nationwide alongside recent advances in IT. This paper studies the concept of a 3D virtual city as a way to realize the current Ubiquitous City (U-City) efficiently and to manage all of the RFID/USN monitoring data in the real U-City. The 3D virtual city is a reproduction of the real-world U-City that embodies ubiquitous technology using digital maps, satellite imagery, and VRML (Virtual Reality Modeling Language). The U&V-City is a four-dimensional future city in which real-time wired/wireless communication networks and 3D web GIS are connected, and in which massive databases and intelligent services are realized by employing EAI (External Authoring Interface), which provides an interface between HTML/Java and the virtual scene for efficiently processing massive information and services, together with GPS/LBS/navigation for global positioning and RTLS (Real-Time Location System).

Design of Deep Learning-Based Automatic Drone Landing Technique Using Google Maps API (구글 맵 API를 이용한 딥러닝 기반의 드론 자동 착륙 기법 설계)

  • Lee, Ji-Eun;Mun, Hyung-Jin
    • Journal of Industrial Convergence / v.18 no.1 / pp.79-85 / 2020
  • Recently, the RPAS (Remotely Piloted Aircraft System), operated by remote control and autonomous navigation, has attracted increasing interest and use in various industries and public organizations, including delivery drones, firefighting drones, ambulance drones, and agricultural drones. The stability of unmanned drones capable of controlling themselves is also the biggest challenge to be solved as the drone industry develops: drones should be able to fly along the path set by the autonomous flight control system and perform an accurate automatic landing at the destination. This study proposes a technique that checks arrival using images of the landing point and controls the landing onto the correct point, compensating for errors in the location data from the drone's sensors and GPS. Reference imagery of the destination is received from the Google Maps API and learned; a drone equipped with a NAVIO2, a Raspberry Pi, and a camera then takes images of the landing point and sends them to a server, and the drone's position is adjusted according to a threshold so that it can land automatically at the landing point.
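
The paper's arrival check is learned from Google Maps imagery; as a simplified stand-in, the sketch below uses classical ORB feature matching rather than a deep-learning model to decide whether the current camera frame matches a stored reference image of the landing point. The file path, match-distance threshold, and `min_matches` count are assumptions.

```python
import cv2


def over_landing_point(reference_path: str, frame_bgr, min_matches: int = 25) -> bool:
    """Return True when the live frame matches the reference landing-point image."""
    orb = cv2.ORB_create(nfeatures=1000)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, ref_desc = orb.detectAndCompute(reference, None)
    _, frame_desc = orb.detectAndCompute(gray, None)
    if ref_desc is None or frame_desc is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(ref_desc, frame_desc)
    good = [m for m in matches if m.distance < 40]  # distance cutoff is an assumption
    return len(good) >= min_matches
```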

Efficient Traffic Lights Detection and Signal Recognition in Moving Image (동영상에서 교통 신호등 위치 검출 및 신호인식 기법)

  • Oh, Seong;Kim, Jin-soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.717-719 / 2015
  • Research and development of unmanned vehicles is being carried out actively both at home and abroad, with the aim of providing various services so that the weaknesses of systems such as conventional 2D-based navigation can be compensated and driving can become safer. This paper proposes a method that enables more efficient real-time video processing by implementing location detection and signal recognition for traffic lights in video. To overcome the limitation of conventional methods, which are sensitive to brightness changes and therefore have difficulty analyzing the signal, the proposed method implements a program that estimates the depth in front of the vehicle through video processing, detects the traffic light and analyzes its signal, and estimates the color components of the traffic light ahead and the distance between the traffic light and the vehicle. A minimal color-thresholding sketch of the signal-recognition step follows this entry.

  • PDF
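
As referenced in the entry above, a minimal HSV color-thresholding sketch for the signal-recognition step; the hue/saturation ranges and the 2% pixel-count cutoff are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np


def classify_signal(roi_bgr: np.ndarray) -> str:
    """Classify a cropped traffic-light region as 'red', 'green', or 'unknown'
    by counting pixels that fall inside per-color hue ranges."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))
    counts = {"red": int(np.count_nonzero(red)), "green": int(np.count_nonzero(green))}
    best = max(counts, key=counts.get)
    area = roi_bgr.shape[0] * roi_bgr.shape[1]
    return best if counts[best] > 0.02 * area else "unknown"
```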

Landmark Recognition Method based on Geometric Invariant Vectors (기하학적 불변벡터기반 랜드마크 인식방법)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.173-182 / 2005
  • In this paper, we propose a landmark recognition method for localization during navigation that is insensitive to the camera viewpoint. The features used in previous research vary with the camera viewpoint, so extracting visual landmarks for positioning from such a wealth of information is not an easy task. The proposed method has three stages: feature extraction, learning and recognition, and matching. In the feature extraction stage, we set interest areas of the image and extract corner points within them; we then obtain features that are more accurate and more resistant to noise through statistical analysis of the smaller eigenvalue. In the learning and recognition stage, we form robust feature models by testing whether a feature model consisting of five corner points is invariant to viewpoint. In the matching stage, we reduce the time complexity and find correspondences accurately using a matching method based on a similarity evaluation function and the Graham search method. In the experiments, we compare and analyze the proposed method against existing methods on various indoor images to demonstrate its superiority. A small corner-extraction sketch based on the smaller-eigenvalue criterion follows this entry.

  • PDF
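
As referenced in the entry above, a small sketch of the feature-extraction stage: corner points inside an interest region selected by the smaller-eigenvalue (Shi-Tomasi) criterion via OpenCV's goodFeaturesToTrack. The ROI format and quality parameters are assumptions, and the invariant-vector modeling and Graham-search matching stages are not reproduced.

```python
import cv2
import numpy as np


def extract_corner_features(gray: np.ndarray, roi, max_corners: int = 50) -> np.ndarray:
    """Extract corners inside roi = (x, y, w, h) using the minimum-eigenvalue
    (Shi-Tomasi) criterion, returned in full-image coordinates."""
    x, y, w, h = roi
    patch = gray[y:y + h, x:x + w]
    corners = cv2.goodFeaturesToTrack(patch, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
```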

Construction of Three Dimensional Virtual City Information Using the Web 3D (Web 3D를 이용한 3차원 가상도시공간정보 구축)

  • 유환희;조정운;이학균
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.20 no.2 / pp.119-126 / 2002
  • Recently, as Web 3D and virtual reality technologies have advanced, studies on providing three-dimensional information on the web have progressed actively. In particular, various applications for providing urban information in 3D space have been developed using EAI (External Authoring Interface), which serves as an interface between VRML (Virtual Reality Modeling Language), the standard language for embodying virtual reality, and Java applets in HTML. In this study, by constructing 3D virtual city information from a digital map, an IKONOS satellite image, and VRML, we could provide users with building locations and various kinds of urban living information. In addition, by applying 3D techniques such as texturing, panoramas, and navigation, users were able to perform various route searches and scenery analyses. Finally, to serve urban living information in real time, we designed the system to search information faster through a database interface and to update data using ASP (Active Server Pages) on the web.

A Study on Development of Visual Navigational Aids to improve Maritime Situation Awareness (해상상황인식 개선을 위한 시각적 항해보조장비 개발에 관한 연구)

  • Kim, Eun-Kyung;Im, Nam-Kyun;Han, Song-Hee;Jeong, Jung-Sik
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.3 / pp.379-385 / 2012
  • This paper describes the development of a visual navigational aid that supports a watch officer's situation awareness and analyzes the results of its performance tests. The equipment consists of a composite video sensor that transfers the video signal, a laser range-finder module that measures distance, a pan/tilt unit, and a central control device; the pan/tilt unit carries the high-performance video sensor and the laser range-finder. For a real-ship test, we installed the developed equipment on a ship, observed risk factors, and analyzed the resulting images, from which maritime situation awareness could be evaluated. The results show that, compared with binoculars, the developed equipment provides clearer detection and better resolution of the situation.

Design of a GCS System Supporting Vision Control of Quadrotor Drones (쿼드로터드론의 영상기반 자율비행연구를 위한 지상제어시스템 설계)

  • Ahn, Heejune;Hoang, C. Anh;Do, T. Tuan
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.10 / pp.1247-1255 / 2016
  • The safety and autonomous flight functions of micro UAVs, or drones, are crucial to their commercial application. The need to build one's own stable drone is still a non-trivial obstacle for researchers who want to focus on intelligence functions such as vision and navigation algorithms. This paper presents a GCS (Ground Control System) built from a commercial drone, off-the-shelf hardware platforms, and open-source software. The system follows a modular architecture and is currently composed of communication, UI, and image-processing modules. In particular, a lane-keeping algorithm was designed and verified through tests at a sports stadium. The lane-keeping algorithm estimates the drone's position and heading in the lane using the Hough transform for line detection, a RANSAC vanishing-point algorithm for selecting the desired lines, and a tracking algorithm for line stability. The drone's flight is controlled by 'forward', 'stop', 'clockwise-rotate', and 'counterclockwise-rotate' commands. The implemented system can fly along straight and mildly curved lanes at 2-3 m/s.
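
As a simplified stand-in for the Hough-transform plus RANSAC vanishing-point pipeline described in this entry, the sketch below estimates a heading offset from near-vertical Hough line segments; the Canny and Hough parameters are assumptions, and no vanishing-point or tracking step is included.

```python
import cv2
import numpy as np


def estimate_heading_offset(frame_bgr: np.ndarray):
    """Return an approximate heading offset in radians (None if no lane lines):
    Canny edges, probabilistic Hough transform, then the mean angle of the
    roughly lane-aligned segments measured from the image's vertical axis."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        if dy < 0:                   # normalize segment direction
            dx, dy = -dx, -dy
        angle = np.arctan2(dx, dy)   # 0 rad = vertical in the image
        if abs(angle) < np.pi / 4:   # keep roughly lane-aligned segments
            angles.append(angle)
    return float(np.mean(angles)) if angles else None
```

A controller would then map the sign and magnitude of this offset onto the 'clockwise-rotate' / 'counterclockwise-rotate' and 'forward' commands listed above.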

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2003.07d / pp.2372-2374 / 2003
  • In 3D vision measurement, camera calibration is necessary to calculate the parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance, although radial lens distortion is not modeled. An advantage of this algorithm is that it can estimate the distance to the object, which also makes the proposed calibration method able to estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with the frame mounted on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image. This advances camera calibration one step further, from static environments toward real-world applications such as autonomous land vehicles. A basic perspective-ratio distance sketch follows this entry.

  • PDF
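
As referenced in the entry above, the snippet below shows only the basic pinhole perspective ratio relating a known line width to its imaged width; the focal length, line width, and pixel measurement are assumed example values, and the paper's estimation of pose, scale factor, and orientation is not reproduced.

```python
def distance_from_line_width(focal_px: float, real_width_m: float, pixel_width: float) -> float:
    """Pinhole-model range estimate: an object of known physical width W that
    appears w pixels wide lies at roughly Z = f * W / w."""
    return focal_px * real_width_m / pixel_width


# Assumed example: a 0.05 m wide line imaged 25 px wide by a camera with a
# 900 px focal length lies at about 1.8 m.
print(distance_from_line_width(900.0, 0.05, 25.0))
```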