• Title/Summary/Keyword: fusion of telecommunication and navigation

Search Results: 4

A Study on Vehicular Positioning Technologies for Smart/Green Cars (스마트/그린형 자동차의 위치정보시스템에 관한 연구)

  • Ro, Kap-Seong;Oh, Jun-Seok;Dong, Liang
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.9 no.3 / pp.92-101 / 2010
  • Energy efficiency and safe mobility are the two key constituents of the future automobile. The technologies that enable these features now depend heavily on information and communication technology rather than traditional auto-mechanical technology. This paper presents an exploratory project, the 'Smart&Green Vehicle Project' at Western Michigan University, which aims to improve the geographical location accuracy of vehicles and to study various applications enabled by such location data. Global Positioning System (GPS), Inertial Navigation System (INS), Vehicular Ad-hoc Network (VANET) technology, and data fusion among these technologies are investigated. Testing and evaluation are performed on systems that gather vehicular positioning data during GPS signal loss. Vehicles in urban settings cannot acquire accurate positioning data from GPS alone; therefore, technologies that can assist GPS in urban settings must be explored. The goal of this project is to improve the accuracy of positioning data during a loss of GPS signal. Controlled experiments are performed to gather data that aid in assessing the feasibility of these technologies for use in vehicular platforms.

  • PDF
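
The core idea in the abstract — dead-reckoning on INS velocity and correcting the estimate whenever a GPS fix is available — can be sketched with a one-dimensional Kalman-style filter. All numbers (noise variances, sample data) are illustrative assumptions, not values from the paper:

```python
def fuse_gps_ins(ins_vel, gps_pos, dt=1.0, q=0.5, r=4.0):
    """Dead-reckon with INS velocity; correct with GPS when a fix exists.

    ins_vel : velocity samples from the INS (m/s)
    gps_pos : GPS position fixes, with None during signal loss (m)
    q, r    : assumed process / measurement noise variances
    """
    x, p = 0.0, 1.0            # position estimate and its variance
    track = []
    for v, z in zip(ins_vel, gps_pos):
        x += v * dt            # predict: integrate INS velocity
        p += q                 # uncertainty grows while coasting on INS
        if z is not None:      # update: blend in the GPS fix when available
            k = p / (p + r)    # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        track.append(x)
    return track

# Constant 1 m/s INS velocity; GPS fixes only at steps 3 and 5.
track = fuse_gps_ins([1.0] * 5, [None, None, 2.5, None, 5.2])
```

During GPS outages the estimate coasts on integrated INS velocity; each fix then pulls it back by an amount proportional to the accumulated uncertainty.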

A Hybrid of Smartphone Camera and Basestation Wide-area Indoor Positioning Method

  • Jiao, Jichao;Deng, Zhongliang;Xu, Lianming;Li, Fei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.2 / pp.723-743 / 2016
  • Indoor positioning is considered an enabler for a variety of applications, and demand for indoor positioning services has accelerated because people spend most of their time in indoor environments. Meanwhile, a smartphone with an integrated camera is an efficient platform for navigation and positioning. However, high-accuracy indoor positioning on a smartphone faces two constraints: (1) the limited computational and memory resources of the smartphone; (2) users moving through large buildings. To address these issues, this paper uses TC-OFDM to compute coarse positioning information, including horizontal and altitude components, to assist smartphone camera-based positioning. Moreover, a unified representation model of image features under a variety of scenarios, named FAST-SURF, is established for computing the fine location. Finally, an optimized marginalized particle filter is proposed for fusing the positioning information from TC-OFDM and images. The experimental results show a wide-area location detection accuracy of 0.823 m (1σ) horizontally and 0.5 m vertically. Compared to WiFi-based and iBeacon-based positioning methods, our method is powerful while being easy to deploy and optimize.
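
The final fusion step the abstract describes — a particle filter combining a coarse base-station fix with a fine image-based fix — can be illustrated with a toy one-dimensional filter. The noise levels and the simplification to plain importance weighting (rather than the paper's marginalized particle filter over TC-OFDM and FAST-SURF measurements) are assumptions for illustration:

```python
import math
import random

def particle_filter_fuse(coarse, fine, n=500, coarse_sd=3.0, fine_sd=0.5):
    """Seed particles from the coarse (base-station-like) fix, weight them
    by the fine (image-based) fix, and return the weighted-mean position."""
    random.seed(0)
    # Sample particles around the coarse fix.
    xs = [random.gauss(coarse, coarse_sd) for _ in range(n)]
    # Weight each particle by its likelihood under the fine measurement.
    ws = [math.exp(-0.5 * ((x - fine) / fine_sd) ** 2) for x in xs]
    total = sum(ws)
    return sum(x * w for x, w in zip(xs, ws)) / total

# Coarse fix at 10.0 m, fine image-based fix at 9.2 m.
est = particle_filter_fuse(coarse=10.0, fine=9.2)
```

Because the fine measurement is far more precise than the coarse one, the fused estimate lands close to the image-based fix while the coarse fix bounds the search region.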

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil;Hwang, Seung-Jun;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.19 no.6 / pp.484-491 / 2015
  • The volume of aerial scene imagery has greatly increased owing to the growth of digital optical imaging technology and the development of UAVs. Aerial images have been used for extraction of ground properties, classification, change detection, image fusion, and mapping. In particular, deep learning algorithms have shown a new paradigm in image analysis and utilization, overcoming limitations in the field of pattern recognition. This paper presents the possibility of applying deep learning (ConvNet) to a wider range of fields through the segmentation and classification of aerial scenes. We build a 4-class image database of 3,000 images in total, consisting of Road, Building, Yard, and Forest. Because each class has a distinct pattern, the resulting feature vector maps come out differently. Our system consists of feature extraction, classification, and training. Feature extraction is built up of two layers based on ConvNet, and classification is then performed using a multilayer perceptron and logistic regression.
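
The described pipeline — two ConvNet feature-extraction layers followed by a logistic-regression classifier over the four classes — can be sketched in NumPy. The random kernels and weights stand in for trained parameters, and the layer sizes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(img, kernel):
    """Valid 2-D convolution followed by ReLU: one 'layer' of the sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def extract_features(img):
    """Two stacked convolution layers; random kernels stand in for
    learned filters."""
    k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    return conv2d_relu(conv2d_relu(img, k1), k2).ravel()

def softmax_classify(feat, weights):
    """Multinomial logistic regression over the 4 scene classes."""
    logits = weights @ feat
    p = np.exp(logits - logits.max())   # shift for numerical stability
    return p / p.sum()

img = rng.random((8, 8))                # stand-in for an aerial image patch
feat = extract_features(img)            # 8x8 -> 6x6 -> 4x4 -> 16 features
probs = softmax_classify(feat, rng.standard_normal((4, feat.size)))
```

The softmax output is a probability distribution over the four classes (Road, Building, Yard, Forest); the arg-max would give the predicted label.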

Automatic Building Extraction Using LIDAR and Aerial Image (LIDAR 데이터와 수치항공사진을 이용한 건물 자동추출)

  • Jeong, Jae-Wook;Jang, Hwi-Jeong;Kim, Yu-Seok;Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science / v.13 no.3 s.33 / pp.59-67 / 2005
  • Building information is a primary source in many applications such as mapping, telecommunication, car navigation, and virtual city modeling. While aerial CCD images, captured by a passive sensor (digital camera), provide horizontal positioning with high accuracy, they are difficult to process automatically because of inherent properties such as perspective projection and occlusion. On the other hand, a LIDAR system offers 3D information about each surface rapidly and accurately in the form of irregularly distributed point clouds; however, contrary to optical images, it is much more difficult to obtain semantic information, such as building boundaries and object segmentation, from LIDAR data. Photogrammetry and LIDAR thus each have major advantages and drawbacks for reconstructing the earth's surface. The purpose of this investigation is to automatically obtain spatial information on 3D buildings by fusing LIDAR data with aerial CCD images. The experimental results show that most of the complex buildings are efficiently extracted by the proposed method and indicate that fusing LIDAR data with aerial CCD images improves the feasibility of automatic building detection and extraction.

  • PDF
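
The fusion idea in the abstract — LIDAR supplying accurate surface heights while the aerial image refines boundaries — can be hinted at with a minimal height-based segmentation sketch. The threshold, the sample point cloud, and the bounding-box "footprint" are illustrative simplifications, not the paper's method:

```python
import numpy as np

def extract_building_points(points, ground_height=0.0, min_height=3.0):
    """Flag LIDAR returns more than `min_height` above ground as building
    candidates — a simple stand-in for the segmentation step."""
    z = points[:, 2]
    return points[z - ground_height >= min_height]

def building_footprint(candidates):
    """Axis-aligned bounding box of candidate points as a crude footprint;
    a real pipeline would refine the boundary with aerial image edges."""
    (xmin, ymin), (xmax, ymax) = candidates[:, :2].min(0), candidates[:, :2].max(0)
    return xmin, ymin, xmax, ymax

cloud = np.array([[0, 0, 0.1], [1, 1, 0.2],    # ground returns
                  [4, 4, 8.0], [6, 5, 8.2]])   # roof returns
roof = extract_building_points(cloud)
box = building_footprint(roof)
```

Height thresholding separates roof returns from ground returns in the point cloud; the image-based refinement that the paper fuses in would then sharpen the coarse footprint.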