• Title/Summary/Keyword: Matterport

8 results

A Study on Proposing a Virtual Tour Production Framework Using Matterport and Unity 3D (Matterport와 Unity 3D를 사용한 가상투어 제작 프레임워크에 관한 연구)

  • Min-Shik Kang
    • Journal of Practical Engineering Education / v.16 no.5_spc / pp.701-708 / 2024
  • The challenges of exploring distant locations have become increasingly apparent, especially since COVID-19 made visiting even nearby museums and attractions more difficult. Various factors contribute to this, including time constraints, financial limitations, and language barriers. Recent technologies can overcome these challenges by transforming how users experience and interact with spaces: it is now possible to create virtual representations of famous attractions, allowing users to take virtual tours as if they were physically present. This paper presents a comprehensive framework for creating virtual tours using Matterport, a leading 3D scanning technology, and Unity 3D, a powerful game engine. The paper outlines methodologies for capturing 3D data with Matterport, importing it into Unity, enhancing user interaction, and optimizing the overall experience. By integrating these tools, the framework aims to facilitate immersive virtual experiences. This approach would enable affordable virtual tickets to famous attractions around the world, creating valuable educational opportunities. The framework can also be applied to facilitate navigation in complex environments such as airports and hospitals.
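
The capture-then-import stage of such a pipeline is easy to get wrong when export files are incomplete. Below is a minimal Python sketch of a pre-import sanity check, assuming a Matterport MatterPak-style export (an .obj mesh with an .mtl material file and texture images); the directory name and helper are hypothetical illustrations, not part of the paper's framework.

```python
from pathlib import Path

# Hypothetical sketch: verify a Matterport mesh export is complete
# before copying it into a Unity project's Assets/ folder.
REQUIRED_SUFFIXES = {".obj", ".mtl"}
TEXTURE_SUFFIXES = {".jpg", ".png"}

def check_export(export_dir: str) -> bool:
    files = list(Path(export_dir).iterdir())
    suffixes = {f.suffix.lower() for f in files}
    missing = REQUIRED_SUFFIXES - suffixes
    if missing:
        print(f"Missing required files: {missing}")
        return False
    textures = [f for f in files if f.suffix.lower() in TEXTURE_SUFFIXES]
    print(f"Found {len(textures)} texture file(s); ready for Unity import.")
    return True

if __name__ == "__main__":
    check_export("matterpak_export")  # hypothetical export directory name
```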

Developing Virtual Tour Content for the Inside and Outside of a Building using Drones and Matterport

  • Tchomdji, Luther Oberlin Kwekam; Park, Soo-jin; Kim, Rihwan
    • International Journal of Contents / v.18 no.3 / pp.74-84 / 2022
  • The global impact of the COVID-19 pandemic on education resulted in the near-complete closure of schools, early childhood education and care (ECEC) facilities, universities, and colleges. To help the educational system maintain social distancing during the pandemic, this paper describes the creation of a simple 3D virtual tour delivered through a website programmed in JavaScript. The web tour gives students and staff remote access to the university's infrastructure during this difficult period. Two devices were used to build the tour: a drone and a Matterport camera. Through the website, users can view a 3D model of the university building captured by drone as well as a real-time tour of its interior captured with Matterport. Because users can explore the 3D model of the university from every angle at a distance, the tour reduces the risk of COVID-19 infection at the university and provides students who cannot be present on-site with detailed information about the campus.
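
The web-delivery side of such a tour often reduces to embedding the Matterport viewer in a page via its standard Showcase embed URL (https://my.matterport.com/show/?m=&lt;model-id&gt;). A minimal Python sketch that generates such a static page follows; the model ID is a placeholder and the page layout is an assumption, not the authors' actual site.

```python
from pathlib import Path

# Hypothetical sketch: write a static HTML page embedding a Matterport
# model through the standard Showcase embed URL.
def write_tour_page(model_id: str, title: str, out: str = "tour.html") -> None:
    html = f"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>{title}</title></head>
<body>
  <h1>{title}</h1>
  <iframe src="https://my.matterport.com/show/?m={model_id}"
          width="960" height="540" allow="fullscreen; vr"></iframe>
</body>
</html>"""
    Path(out).write_text(html, encoding="utf-8")

write_tour_page("MODEL_ID_HERE", "Campus Virtual Tour")  # placeholder model ID
```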

Hybrid Learning for Vision-and-Language Navigation Agents (시각-언어 이동 에이전트를 위한 복합 학습)

  • Oh, Suntaek; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.9 / pp.281-290 / 2020
  • The Vision-and-Language Navigation (VLN) task is a complex intelligence problem that requires both visual and language comprehension skills. In this paper, we propose a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, combining imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore mitigate both the bias toward demonstration data inherent in imitation learning and the relatively low data efficiency of reinforcement learning. In addition, the proposed model uses a novel path-based reward function designed to address the shortcomings of existing goal-based reward functions. We demonstrate the high performance of the proposed model through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset.
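
The motivation for a path-based reward is that a goal-based reward can pay an agent that strays far from the instructed route but still ends near the goal. A minimal sketch of one possible path-based shaping reward (the reduction in distance to the nearest point on the reference path) is given below; it illustrates the general idea, not the authors' exact formulation.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dist_to_path(pos, ref_path):
    """Distance from pos to the nearest viewpoint on the reference path."""
    return min(dist(pos, p) for p in ref_path)

def path_based_reward(prev_pos, new_pos, ref_path):
    """Reward the reduction in distance to the reference path, shaping the
    agent to follow the demonstrated route rather than merely to approach
    the final goal position."""
    return dist_to_path(prev_pos, ref_path) - dist_to_path(new_pos, ref_path)

# Example: a step that moves back toward the instructed route earns a
# positive reward even if it does not shorten the distance to the goal.
ref = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(path_based_reward((1.0, 2.0), (1.0, 1.0), ref))  # 1.0
```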

LVLN : A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation (LVLN: 시각-언어 이동을 위한 랜드마크 기반의 심층 신경망 모델)

  • Hwang, Jisu; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.8 no.9 / pp.379-390 / 2019
  • In this paper, we propose a novel deep neural network model for Vision-and-Language Navigation (VLN) named LVLN (Landmark-based VLN). In addition to visual features extracted from input images and linguistic features extracted from natural language instructions, the model makes use of information about places and landmark objects detected in the images. It applies a context-based attention mechanism to associate each entity mentioned in the instruction with the corresponding region of interest (ROI) in the image and the corresponding detected place or landmark object. Moreover, to improve the success rate of arriving at the target goal, the model adopts a progress monitor module that checks for substantial progress toward the goal. Through experiments with the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, we demonstrate the high performance of the proposed model.
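
The context-based attention that aligns instruction entities with image ROIs and detected landmarks can be pictured as scaled dot-product attention from the instruction state over the detected entity features. A minimal PyTorch sketch under that assumption follows; the dimensions and module names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Attend from the instruction state over detected place/landmark features."""
    def __init__(self, hidden_dim: int, feat_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, feat_dim)

    def forward(self, instr_state, landmark_feats):
        # instr_state: (batch, hidden_dim) summary of the instruction
        # landmark_feats: (batch, num_entities, feat_dim) ROI/landmark features
        q = self.query(instr_state).unsqueeze(2)          # (batch, feat_dim, 1)
        scores = torch.bmm(landmark_feats, q).squeeze(2)  # (batch, num_entities)
        weights = torch.softmax(scores / landmark_feats.size(-1) ** 0.5, dim=1)
        context = torch.bmm(weights.unsqueeze(1), landmark_feats).squeeze(1)
        return context, weights  # attended landmark context and alignment weights

attn = ContextAttention(hidden_dim=512, feat_dim=256)
ctx, w = attn(torch.randn(2, 512), torch.randn(2, 8, 256))
print(ctx.shape, w.shape)  # torch.Size([2, 256]) torch.Size([2, 8])
```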

Real-Time Visual Grounding for Natural Language Instructions with Deep Neural Network (심층 신경망을 이용한 자연어 지시의 실시간 시각적 접지)

  • Hwang, Jisu; Kim, Incheol
    • Annual Conference of KIPS / 2019.05a / pp.487-490 / 2019
  • Vision-and-Language Navigation (VLN) is an artificial intelligence problem in which an agent must reach a destination on its own in a 3D indoor environment by understanding real-time input images and natural language instructions. It is a complex intelligence problem that demands not only image and natural language understanding but also situational reasoning and action planning. This paper proposes a new deep neural network model for the VLN task. In addition to visual features extracted from input images with a convolutional neural network and linguistic features extracted from natural language instructions with a recurrent neural network, the proposed model separately detects the places and landmark objects mentioned in the instructions from the images and uses them as additional features for action selection. Through experiments using the Matterport3D simulator, which provides a variety of 3D indoor environments, and the Room-to-Room (R2R) benchmark dataset, we confirmed the high performance and effectiveness of the proposed model.
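
The action-selection step described here concatenates three feature streams (CNN visual, RNN linguistic, and detected place/landmark features) before scoring actions. A minimal PyTorch sketch of that fusion follows; the feature sizes and action count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ActionSelector(nn.Module):
    """Fuse visual, linguistic, and detected place/landmark features
    into logits over the agent's discrete actions (illustrative sizes)."""
    def __init__(self, vis_dim=512, lang_dim=256, lm_dim=64, num_actions=6):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + lang_dim + lm_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, vis_feat, lang_feat, landmark_feat):
        # vis_feat: CNN features of the current view
        # lang_feat: RNN encoding of the instruction
        # landmark_feat: detector output for mentioned places/landmarks
        return self.fuse(torch.cat([vis_feat, lang_feat, landmark_feat], dim=-1))

model = ActionSelector()
logits = model(torch.randn(1, 512), torch.randn(1, 256), torch.randn(1, 64))
action = logits.argmax(dim=-1)  # greedy action choice
```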

Lookahead Place Memory for Vision-Language Navigation Tasks (시각-언어 이동 작업을 위한 장소 미리보기 메모리)

  • Oh, Suntaek; Kim, Incheol
    • Annual Conference of KIPS / 2020.11a / pp.992-995 / 2020
  • The vision-and-language navigation task requires an agent to follow given instructions and move to a goal position within a specific indoor space. Given the nature of the task, recognizing the places that appear as landmarks in the natural language instructions greatly helps in performing it. This paper proposes a lookahead place memory for storing information about the major places that make up the environment. The agent performs the task while taking into account the place information stored in the memory. Through experiments in the Matterport3D simulation environment, we show that the proposed approach achieves the highest performance on the R2R benchmark dataset.
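
A place memory of this kind amounts to a keyed store of place embeddings for previewed viewpoints that is consulted at decision time. A minimal Python sketch follows; the keys and retrieval rule are hypothetical, and the actual model would store learned embeddings rather than these random placeholders.

```python
import torch

class PlaceMemory:
    """Store a place embedding per previewed viewpoint and retrieve the
    entry most similar to a query (e.g., the place named in the instruction)."""
    def __init__(self):
        self.keys: list[str] = []               # viewpoint identifiers
        self.embeddings: list[torch.Tensor] = []

    def write(self, viewpoint_id: str, place_embedding: torch.Tensor) -> None:
        self.keys.append(viewpoint_id)
        self.embeddings.append(place_embedding)

    def read(self, query: torch.Tensor) -> str:
        # Cosine similarity between the query and each stored embedding.
        sims = torch.stack([torch.cosine_similarity(query, e, dim=0)
                            for e in self.embeddings])
        return self.keys[int(sims.argmax())]    # best-matching viewpoint

memory = PlaceMemory()
memory.write("vp_01", torch.randn(128))
memory.write("vp_02", torch.randn(128))
print(memory.read(torch.randn(128)))
```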

Combining Imitation Learning and Reinforcement Learning for Visual-Language Navigation Agents (시각-언어 이동 에이전트를 위한 모방 학습과 강화 학습의 결합)

  • Oh, Suntaek; Kim, Incheol
    • Annual Conference of KIPS / 2020.05a / pp.559-562 / 2020
  • The vision-and-language navigation problem is a complex intelligence problem that requires both visual and language understanding. This paper proposes a new learning model for vision-and-language navigation agents. The model adopts hybrid learning, combining imitation learning based on demonstration data with reinforcement learning based on action rewards. It can therefore complementarily resolve the bias toward demonstration data inherent in imitation learning and the relatively low data efficiency of reinforcement learning. The model also includes loss normalization to address the learning imbalance that can arise between the two forms of learning. Furthermore, it identifies a problem with the goal-based reward functions used in previous studies and employs a new optimal-path-based reward function designed to solve it. Through various experiments using the Matterport3D simulation environment and the R2R benchmark dataset, we demonstrate the high performance of the proposed model.
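
The loss normalization mentioned here balances the imitation and reinforcement terms, whose raw magnitudes can differ widely. A minimal sketch of one such scheme (scaling each loss by a running estimate of its magnitude) follows; the scheme is an illustration of the idea, not the paper's exact formula.

```python
import torch

class BalancedLoss:
    """Combine IL and RL losses after normalizing each by a running
    estimate of its magnitude, so neither term dominates training."""
    def __init__(self, momentum: float = 0.99, eps: float = 1e-8):
        self.momentum, self.eps = momentum, eps
        self.scale_il, self.scale_rl = 1.0, 1.0

    def __call__(self, loss_il: torch.Tensor, loss_rl: torch.Tensor) -> torch.Tensor:
        # Update running magnitudes without tracking gradients.
        with torch.no_grad():
            self.scale_il = (self.momentum * self.scale_il
                             + (1 - self.momentum) * loss_il.abs().item())
            self.scale_rl = (self.momentum * self.scale_rl
                             + (1 - self.momentum) * loss_rl.abs().item())
        return (loss_il / (self.scale_il + self.eps)
                + loss_rl / (self.scale_rl + self.eps))

combine = BalancedLoss()
total = combine(torch.tensor(2.3, requires_grad=True),
                torch.tensor(0.04, requires_grad=True))
total.backward()  # gradients flow through both normalized terms
```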

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi; In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task are limited in that they cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. It effectively fuses these three heterogeneous feature maps into a global multimodal context map using a point-wise convolutional neural network module. Finally, the model adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, guiding the model to learn the navigation policy efficiently. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
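
The point-wise (1x1) convolutional fusion of the three feature maps into a single multimodal context map can be sketched directly in PyTorch. In the minimal sketch below, the channel counts and spatial size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Fuse appearance, semantic, and goal feature maps into a single
    multimodal context map with point-wise (1x1) convolutions."""
    def __init__(self, c_app=32, c_sem=16, c_goal=8, c_out=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_app + c_sem + c_goal, c_out, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(c_out, c_out, kernel_size=1),
        )

    def forward(self, app_map, sem_map, goal_map):
        # Each map: (batch, channels, H, W) on the same spatial grid;
        # 1x1 convolutions mix channels at every cell independently.
        return self.fuse(torch.cat([app_map, sem_map, goal_map], dim=1))

fusion = MultimodalFusion()
ctx = fusion(torch.randn(1, 32, 64, 64),
             torch.randn(1, 16, 64, 64),
             torch.randn(1, 8, 64, 64))
print(ctx.shape)  # torch.Size([1, 64, 64, 64])
```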