• Title/Summary/Keyword: Google Street View


Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min;Kim, Hyeon-Sik;Bong, Dae-Hyeon;Choi, Jong-Yun;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1403-1410 / 2020
  • Recently, as interest in non-face-to-face experiences and services has increased, the demand for web video content that can be easily consumed on mobile devices such as smartphones and tablets is rising rapidly. To cope with these requirements, this paper proposes a technique to efficiently produce video content that provides the experience of visiting famous places (i.e., a stage tour) featured in animations or movies. To this end, an image dataset was established by collecting images of stage areas using the Google Maps and Google Street View APIs. Afterwards, a deep learning-based style transfer method is presented that applies the unique style of animation videos to the collected street view images and generates video content from the style-transferred images. Finally, we show through various experiments that the proposed method can produce more interesting stage-tour video content.
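The collection step above relies on the Street View Static API. A minimal sketch of building the request URLs — the parameter names (`size`, `location`, `heading`, `fov`, `key`) follow the public Street View Static API, but the sampling scheme here is illustrative, not the paper's exact collection procedure:

```python
from urllib.parse import urlencode

# Public endpoint of the Google Street View Static API.
STREET_VIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def street_view_url(lat, lon, heading, api_key, size="640x640", fov=90):
    """Build a request URL for one street-view image at (lat, lon)."""
    params = {
        "size": size,               # image resolution, "WIDTHxHEIGHT"
        "location": f"{lat},{lon}",
        "heading": heading,         # camera direction in degrees (0 = north)
        "fov": fov,                 # horizontal field of view in degrees
        "key": api_key,
    }
    return f"{STREET_VIEW_ENDPOINT}?{urlencode(params)}"

def collect_stage_urls(points, api_key, headings=(0, 90, 180, 270)):
    """For each stage-area coordinate, request several headings to
    cover the surrounding scenery (heading set is an assumption)."""
    return [street_view_url(lat, lon, h, api_key)
            for lat, lon in points for h in headings]
```

Each returned URL can then be fetched and the downloaded frames passed to the style-transfer network.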

Virtual Walking Tour System (가상 도보 여행 시스템)

  • Kim, Han-Seob;Lee, Jieun
    • Journal of Digital Contents Society / v.19 no.4 / pp.605-613 / 2018
  • In this paper, we propose a system for walking around the world with virtual reality technology. Although virtual reality users are interested in virtual travel content, conventional virtual travel content offers only a limited space to experience and lacks interactivity. To solve the problems of limited realism and limited space, which are disadvantages of existing content, this system creates a virtual space using Google Street View imagery. Users can have a realistic experience with real street images and travel the vast area of the world covered by Google Street View. In addition, a virtual reality headset and a treadmill are used so that the user can actually walk in the virtual space, which maximizes interactivity and immersion. We expect this system to contribute to the leisure activities of virtual reality users by allowing natural walking trips from famous tourist spots to even mountain roads and alleys.

Updating Smartphone's Exterior Orientation Parameters by Image-based Localization Method Using Geo-tagged Image Datasets and 3D Point Cloud as References

  • Wang, Ying Hsuan;Hong, Seunghwan;Bae, Junsu;Choi, Yoonjo;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.5 / pp.331-341 / 2019
  • With the popularity of sensor-rich environments, smartphones have become one of the major platforms for obtaining and sharing information. Since it is difficult to use GNSS (Global Navigation Satellite System) in areas with many buildings, smartphone localization in such cases is a challenging task. To resolve this problem, a four-step image-based localization method is proposed. To improve the localization accuracy of the smartphone datasets, an MMS (Mobile Mapping System) and Google Street View were used as references. In our approach, first, candidate matching images are searched for using the smartphone's query image and its GNSS observation. Second, SURF (Speeded-Up Robust Features) matching between the smartphone image and the reference dataset is performed, and incorrect matching points are eliminated. Third, a geometric transformation is computed from the matching points using a 2D affine transformation. Finally, the smartphone's location and attitude are estimated with the PnP (Perspective-n-Point) algorithm. The smartphone's location error is reduced from the original GNSS value of 10.204 m to a mean of 3.575 m. The attitude estimation error is below 25 degrees for 92.4% of the adjusted images, with an average of 5.1973 degrees.
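The third step, fitting a 2D affine transformation to the surviving matching points, can be sketched as a plain least-squares problem. This is the generic textbook formulation, not the paper's exact implementation:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.

    Solves the normal equations (A^T A) x = A^T b separately for the
    x- and y-rows of the affine matrix, with design rows [x, y, 1].
    """
    ata = [[0.0] * 3 for _ in range(3)]
    atbx, atby = [0.0] * 3, [0.0] * 3
    for (x, y), (u, v) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atbx[i] += row[i] * u
            atby[i] += row[i] * v
    a, b, c = solve3(ata, atbx)
    d, e, f = solve3(ata, atby)
    return (a, b, c, d, e, f)

def apply_affine(params, pt):
    a, b, c, d, e, f = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)
```

With the affine parameters estimated, the reference dataset's geo-tags can be transferred to the smartphone image before the PnP step.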

City-Scale Modeling for Street Navigation

  • Huang, Fay;Klette, Reinhard
    • Journal of information and communication convergence engineering / v.10 no.4 / pp.411-419 / 2012
  • This paper proposes a semi-automatic image-based approach for 3-dimensional (3D) modeling of buildings along streets. Image-based urban 3D modeling techniques are typically based on the use of aerial and ground-level images. The aerial image of the relevant area is extracted from publicly available sources in Google Maps by stitching together different patches of the map. Panoramic images are common for ground-level recording because they have advantages for 3D modeling. A panoramic video recorder is used in the proposed approach for recording sequences of ground-level spherical panoramic images. The proposed approach has two advantages. First, detected camera trajectories are more accurate and stable (compared to methods using multi-view planar images only) due to the use of spherical panoramic images. Second, we extract the texture of a building facade from a single panoramic image. Thus, there is no need to deal with the color blending problems that typically occur when using overlapping textures.
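Extracting a facade texture from a single spherical panorama requires mapping 3D viewing directions to panorama pixels. A minimal sketch, assuming an equirectangular parameterization with +z forward, +x right, +y up (the paper does not state its exact convention):

```python
import math

def dir_to_equirect(dx, dy, dz, width, height):
    """Map a unit-length viewing direction to (column, row) pixel
    coordinates in an equirectangular panorama of size width x height.

    Column 0 corresponds to longitude -pi; row 0 to the zenith.
    """
    lon = math.atan2(dx, dz)                   # -pi..pi, 0 = straight ahead
    lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2..pi/2
    col = (lon + math.pi) / (2 * math.pi) * width
    row = (math.pi / 2 - lat) / math.pi * height
    return col, row
```

Sampling these pixel coordinates for every direction on a facade plane yields a perspective-corrected texture patch from the panorama.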

A method for converting street view images to a video (스트리트 뷰 이미지를 동영상으로 변환하는 방법)

  • Woo, Byul;Park, Seong-Min;Lee, Do-Young;Jo, Seung-Hyun;Song, Yang-Eui;Lee, Yong Kyu
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.83-86 / 2015
  • Existing street view images are inconvenient: users must search for them individually, click through the image files one by one, and the images are provided only in the form of web pages. To improve on this, this paper proposes a method that provides street view images as a video so that users can easily find their way. In the proposed application, when the user enters a starting point and a destination, the shortest route is retrieved using the Google Maps APIs. Then, the images corresponding to the coordinates along the shortest route are downloaded into an internal Android DB using Google Maps APIs URLs. Finally, the images stored in the DB are converted into a video and provided to the user.
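Turning a route into a sequence of street-view frames requires resampling the route polyline at roughly even spacing, so that each sample becomes one video frame. A minimal sketch of that resampling step — the spacing value and the local equirectangular distance approximation are illustrative assumptions, not the paper's exact method:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, for street-scale distances

def step_points(route, spacing_m):
    """Resample a (lat, lon) polyline at roughly fixed spacing in meters.

    Uses a local equirectangular approximation for distance, which is
    adequate over street-scale route segments.
    """
    def dist(p, q):
        lat1, lon1 = map(math.radians, p)
        lat2, lon2 = map(math.radians, q)
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return math.hypot(x, y) * EARTH_RADIUS_M

    samples = [route[0]]
    carry = 0.0  # distance already walked past the last emitted sample
    for p, q in zip(route, route[1:]):
        seg = dist(p, q)
        d = spacing_m - carry
        while d <= seg:
            t = d / seg  # linear interpolation within the segment
            samples.append((p[0] + (q[0] - p[0]) * t,
                            p[1] + (q[1] - p[1]) * t))
            d += spacing_m
        carry = (carry + seg) % spacing_m
    return samples
```

Each sampled coordinate can then be used to fetch one street view image, and the resulting image sequence encoded as the output video.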

Development of geoData Acquisition System for Panoramic Image Contents Service based on Location (위치기반 파노라마 영상 콘텐츠 서비스를 위한 geoData 취득 및 처리시스템 개발)

  • Cho, Hyeon-Koo;Lee, Hyung
    • The Journal of the Korea Contents Association / v.11 no.1 / pp.438-447 / 2011
  • geoContents have become closely related to everyday life since Google introduced Google Earth and Street View and Daum introduced Road View. Consequently, demand for location-based content (geoContents), which combines geometric spatial information with location-based image information, is rising sharply. Mobile mapping systems used for map updates and road facility management have had difficulty meeting this demand in terms of the cost and time required to obtain such content. This paper addresses a geoData acquisition and processing system for producing panoramic images. The system consists of three components: three GPS receivers for acquiring location information including position, attitude, orientation, and time; six cameras for image information; and a device to synchronize both data streams. The geoData acquired by the proposed system, together with the method for authoring geoContents (panoramic images annotated with position, altitude, and orientation), can serve as an effective means of establishing various location-based content and providing the related services.
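One basic building block for deriving platform orientation from multiple GPS receivers is the bearing between two simultaneous fixes. A textbook sketch of that computation — this is the standard great-circle bearing formula, not the paper's exact processing chain:

```python
import math

def bearing_deg(p, q):
    """Initial great-circle bearing from GPS fix p to fix q, in degrees
    clockwise from north. With two or three antennas rigidly mounted on
    the acquisition vehicle, pairwise bearings like this constrain the
    platform's orientation."""
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0
```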

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.31-38 / 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system from an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collect about 155,000 images under the Manhattan world assumption using the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and thus estimates Manhattan coordinate systems for new images that it has not seen. Experimental results show that our method estimates Manhattan coordinate systems with a median error of $3.157^{\circ}$ for Google Street View images of non-trained scenes used as the test set. In addition, compared to an existing calibration method [3], the proposed method shows lower median errors on the test set.
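The median angular error reported above can be computed from the geodesic distance between rotation matrices. A minimal sketch using the standard metric (the paper may define its error slightly differently):

```python
import math

def rotation_angle_deg(R1, R2):
    """Geodesic angle (in degrees) between two 3x3 rotation matrices,
    i.e. the single rotation needed to align one estimated Manhattan
    frame with the other: acos((trace(R1 R2^T) - 1) / 2)."""
    # trace(R1 @ R2.T) equals the elementwise dot product of R1 and R2.
    tr = sum(R1[i][j] * R2[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp for numeric safety
    return math.degrees(math.acos(c))

def median_error_deg(estimates, truths):
    """Median angular error over a test set of frame estimates."""
    errs = sorted(rotation_angle_deg(e, t)
                  for e, t in zip(estimates, truths))
    n = len(errs)
    return errs[n // 2] if n % 2 else 0.5 * (errs[n // 2 - 1] + errs[n // 2])
```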

The Removal of Spatial Inconsistency between SLI and 2D Map for Conflation (SLI(Street-level Imagery)와 2D 지도간의 합성을 위한 위치 편차 제거)

  • Ga, Chill-O;Lee, Jeung-Ho;Yang, Sung-Chul;Yu, Ki-Yun
    • Journal of Korean Society for Geospatial Information Science / v.20 no.2 / pp.63-71 / 2012
  • Recently, web portals have been offering georeferenced SLI (Street-Level Imagery) services such as Google Street View. SLI has a distinctive strength over aerial images or vector maps because it provides the same view that we see of the real world from the street. Based on this characteristic, the applicability of SLI can be increased substantially through conflation with other spatial datasets. However, spatial inconsistency between different datasets is the main cause of degraded quality when conflating them. Therefore, this research aims to remove the spatial inconsistency so that an SLI can be conflated with a widely used 2D vector map. The removal of the spatial inconsistency is conducted through three sub-processes: (1) road intersection matching between the SLI trace and the road layer of the vector map to detect CPPs (Control Point Pairs), (2) filtering of inaccurate CPPs by analyzing the trend of the CPPs, and (3) local alignment using the accurate CPPs. In addition, we propose an evaluation method suitable for conflation results that include an SLI, and verify the effect of removing the spatial inconsistency.
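The CPP filtering step (2) can be sketched as rejecting pairs whose displacement deviates from the overall trend. Here the "trend" is simplified to the componentwise median displacement vector, which is an assumption for illustration rather than the paper's exact criterion:

```python
def filter_cpps(cpps, tol):
    """Keep control point pairs whose displacement vector lies within
    tol (same units as the coordinates) of the median displacement.

    cpps is a list of ((x, y), (x', y')) pairs: a point in one dataset
    and its matched point in the other.
    """
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    dxs = [q[0] - p[0] for p, q in cpps]
    dys = [q[1] - p[1] for p, q in cpps]
    mx, my = median(dxs), median(dys)  # the "trend" displacement
    return [pair for pair, dx, dy in zip(cpps, dxs, dys)
            if ((dx - mx) ** 2 + (dy - my) ** 2) ** 0.5 <= tol]
```

The surviving CPPs can then drive the local alignment in step (3), e.g. a piecewise warp of the SLI trace onto the vector map's road layer.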