• Title/Summary/Keyword: AR (Augmented Reality)

Search Result 708

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.30-31
    • /
    • 2009
  • Early detection of schedule delays in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human errors and reduce the quality of manually collected data, and since only an approximate visual inspection is usually performed, the collected data are subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between the time progress is reported and the time progress is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay decision making. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research explores the application of unsorted daily progress photograph logs - available on any construction site - as well as IFC-based 4D models for progress monitoring. Our approach computes, from the images themselves, the photographers' locations and orientations, along with a sparse 3D geometric representation of the as-built scene, using daily progress photographs, and superimposes the reconstructed scene over the as-planned 4D model. Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image processing technique can be implemented and progress data extracted automatically. The result of the progress comparison between as-planned and as-built performance can subsequently be visualized in the D4AR (4D Augmented Reality) environment using a traffic-light metaphor. In such an environment, project participants would be able to: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies through superimposition of 4D as-planned models over progress photographs, make control decisions, and effectively communicate them with project participants. We present our preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancements of this new technology in construction, across automatic data collection, processing, and communication.

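As a rough illustration of the traffic-light progress metaphor mentioned in the D4AR abstract above, the following Python sketch maps a per-element discrepancy between planned and observed completion to a color. The ElementProgress fields, the tolerance thresholds, and the element identifier are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: traffic-light visualization of progress discrepancies.
# Element names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ElementProgress:
    element_id: str        # identifier of a model element (assumed)
    planned_ratio: float   # expected completion 0.0-1.0 per the 4D schedule
    observed_ratio: float  # completion estimated from registered site photos

def traffic_light(e: ElementProgress, tolerance: float = 0.05) -> str:
    """Map the as-planned vs. as-built discrepancy to a traffic-light color."""
    gap = e.planned_ratio - e.observed_ratio
    if gap <= tolerance:
        return "green"    # on schedule or ahead
    if gap <= 2 * tolerance:
        return "yellow"   # minor delay
    return "red"          # significant delay

if __name__ == "__main__":
    sample = ElementProgress("wall-03", planned_ratio=0.8, observed_ratio=0.55)
    print(traffic_light(sample))  # -> "red"
```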

AI-Based Object Recognition Research for Augmented Reality Character Implementation (증강현실 캐릭터 구현을 위한 AI기반 객체인식 연구)

  • Seok-Hwan Lee;Jung-Keum Lee;Hyun Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1321-1330
    • /
    • 2023
  • This study addresses the problem of 3D pose estimation for multiple human subjects from a single image, in the context of developing characters that can be used in augmented reality. In the existing top-down approach, all objects in the image are first detected and then each is reconstructed independently; the problem is that inconsistent results may occur due to overlap or depth-order mismatches between the reconstructed objects. The goal of this study is to solve these problems and develop a single network that provides a consistent 3D reconstruction of all humans in a scene. A key choice is the integration of a human body model based on the SMPL parametric system into the top-down framework. On this basis, two losses are introduced: a distance-field-based collision loss and a loss that considers depth order. The first loss prevents overlap between reconstructed people, and the second adjusts the depth ordering of people so that occlusion reasoning and the annotated instance segmentation are rendered consistently. This method allows depth information to be provided to the network without explicit 3D annotation of the images. Experimental results show that the proposed method performs better than existing methods on standard 3D pose benchmarks, and that the proposed losses enable more consistent reconstruction from natural images.
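
The abstract above mentions a loss that enforces a consistent depth ordering between overlapping people. The sketch below shows one plausible hinge-style formulation in PyTorch; the function name, tensor shapes, and margin are assumptions for illustration, not the paper's exact loss.

```python
# Hedged sketch of a depth-ordering penalty in the spirit of the abstract above.
import torch

def depth_order_loss(depth_front: torch.Tensor,
                     depth_back: torch.Tensor,
                     overlap_mask: torch.Tensor,
                     margin: float = 0.1) -> torch.Tensor:
    """Penalize pixels where the person annotated as occluded is rendered
    closer to the camera than the person annotated as visible.

    depth_front: per-pixel depth of the person segmented as visible (H, W)
    depth_back:  per-pixel depth of the person it occludes (H, W)
    overlap_mask: boolean mask of pixels where both projections overlap (H, W)
    """
    # Hinge: loss is zero when depth_front + margin <= depth_back (correct order).
    violation = torch.relu(depth_front + margin - depth_back)
    masked = violation * overlap_mask.float()
    return masked.sum() / overlap_mask.float().sum().clamp(min=1.0)

if __name__ == "__main__":
    h, w = 4, 4
    front = torch.full((h, w), 2.0)   # visible person wrongly placed at 2 m
    back = torch.full((h, w), 1.5)    # occluded person placed at 1.5 m
    mask = torch.ones(h, w, dtype=torch.bool)
    print(depth_order_loss(front, back, mask))  # positive -> ordering violated
```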

Research on Bridge Maintenance Methods Using BIM Model and Augmented Reality (BIM 모델과 증강현실을 활용한 교량 유지관리방안 연구)

  • Choi, Woonggyu;Pa Pa Win Aung;Sanyukta Arvikar;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.1-9
    • /
    • 2024
  • The number of bridges, a major class of civil structures, has increased from 584 to 38,405 since the 1970s. As the stock of bridges grows, the number in service for more than 30 years is expected to reach 21,737 (71%) by 2030, and maintaining these facilities with limited human resources risks fatal accidents. Accordingly, bridge safety inspection and maintenance measures are becoming more important, and decision-making support is needed for supervisors who manage multiple bridges. Currently, bridge safety inspection and maintenance consist of recording damage, condition, location, and specifications on an exterior survey map by hand or by taking photographs with a camera. However, notation errors for damage or defects, supervisor mistakes, and typos can reduce the reliability of the overall safety inspection and diagnosis. To improve this, this study visualizes damage data recorded in a BIM model in an AR environment and proposes a bridge maintenance approach that supports supervisors' maintenance decision-making with a small workforce.
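
As a purely illustrative reading of how inspection damage data might be linked to BIM elements for AR display, the sketch below defines hypothetical record types keyed by an IFC GlobalId; the field names and condition grades are assumptions, not the study's data model.

```python
# Hedged sketch: attaching damage records to BIM elements for AR overlay.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DamageRecord:
    ifc_guid: str          # GlobalId of the BIM element the damage belongs to
    damage_type: str       # e.g. "crack", "spalling", "corrosion"
    severity: str          # e.g. an "A"-"E" condition grade (assumed scale)
    location_note: str     # where on the element the damage was observed
    photo_path: str        # site photograph documenting the damage

@dataclass
class BridgeElement:
    ifc_guid: str
    name: str
    damages: List[DamageRecord] = field(default_factory=list)

    def worst_grade(self) -> str:
        """Return the worst condition grade recorded for this element."""
        return max((d.severity for d in self.damages), default="A")

# An AR client could look up a BridgeElement by the GUID of the model part the
# user is pointing at and overlay worst_grade() plus the damage list.
```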

A Real-time Particle Filtering Framework for Robust Camera Tracking in An AR Environment (증강현실 환경에서의 강건한 카메라 추적을 위한 실시간 입자 필터링 기법)

  • Lee, Seok-Han
    • Journal of Digital Contents Society
    • /
    • v.11 no.4
    • /
    • pp.597-606
    • /
    • 2010
  • This paper describes a real-time camera tracking framework specifically designed to track a monocular camera in an AR workspace. Typically, the Kalman filter is employed for camera tracking. In general, however, the tracking performance of conventional methods is seriously affected by unpredictable situations such as ambiguity in feature detection, occlusion of features, and rapid camera shake. In this paper, a recursive Bayesian sampling framework, also known as the particle filter, is adopted for camera pose estimation. In our system, the camera state is estimated on the basis of a Gaussian distribution without employing an additional uncertainty model or sample weight computation. In addition, the camera state is computed directly from new sample particles, which are distributed according to the true posterior of the system state. To verify the proposed system, we conduct several experiments on unstable situations in desktop AR environments.
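
For context on the recursive Bayesian sampling framework named in the abstract, the sketch below implements a standard sampling-importance-resampling particle filter for a single scalar camera state (e.g. one rotation angle). The paper describes a variant that avoids explicit weight computation, so this is background rather than the proposed method; all parameters are illustrative assumptions.

```python
# Hedged sketch: generic particle filter (predict, weight, resample) in 1D.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles: np.ndarray,
                         measurement: float,
                         motion_noise: float = 0.05,
                         meas_noise: float = 0.1) -> np.ndarray:
    """One predict-update-resample cycle over N scalar particles."""
    # Predict: diffuse particles with the motion model (random walk assumed).
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0.0, 1.0, size=500)   # initial belief about the angle
for z in [0.2, 0.25, 0.3]:                   # toy feature-based measurements
    particles = particle_filter_step(particles, z)
print(particles.mean())                      # pose estimate near 0.3
```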

AR based Field Training System Algorithm for Small Units (증강현실 기반의 소부대 야외 전술훈련체계 알고리즘)

  • Park, Sangjun;Kim, Jee Won;Kim, Kyoung Min;Kim, Hoedong
    • Convergence Security Journal
    • /
    • v.18 no.4
    • /
    • pp.81-87
    • /
    • 2018
  • Military training is carried out to win combat through quick and accurate responses to changing engagement situations on the battlefield. In practice, however, realistic combat training is difficult to conduct. Although the ROK Army pursues practical training at the KCTC (Korea Army Advanced Combat Training Center) by supplying equipment such as MILES, a single platoon can use KCTC facilities or MILES equipment only about 10 days a year. To address this problem, many studies proposing AR or VR technology are under way; nevertheless, they do not fully cover training conducted in the real field. In this regard, this paper proposes an AR-based algorithm to apply to small units during field training exercises.


QR code invoice system with AR (AR을 이용한 QR code 송장 시스템)

  • Kim, Sohee;Yang, Yujin;Jeon, Soohyun;Kim, Dongho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.331-334
    • /
    • 2021
  • In the existing parcel delivery system, personal information such as the address and telephone number can easily be seen even by people other than the recipient. As untact ordering and delivery services have surged due to COVID-19, the parcel delivery business has grown into a huge market, and concerns are also growing that exposed personal information may be exploited for crime. In addition, when several parcels arrive, the recipient cannot easily check for misdelivery without opening the boxes and cannot easily tell which parcel contains the desired item. In this project, a QR code supporting multi-level authentication is used so that information such as the sender's and recipient's addresses, the product type, and the product name is selectively accessible depending on the viewer, for example the delivery driver or the recipient. Even when scanning the same QR code, the recipient can see all of the information, the delivery driver can see only part of it, and an unauthorized person can see nothing. Since the information cannot be read with the naked eye as in the existing delivery system, the exposure of personal information can be overcome. The invoice information can be viewed not only as text but also in AR (augmented reality) form, reproducing the type and shape of the ordered contents, so the recipient can check for misdelivery or sort a large number of parcels more efficiently while they remain packed. By using the principle of SQRC (Security/Secure QR Code), which provides different information from the same QR code, the system goes beyond protecting information and can broaden the scope of immersive media by additionally providing multimedia services such as video and images.

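A minimal sketch of role-based access to invoice fields, in the spirit of the SQRC-style system described above. The role names, field names, and the plain-dictionary payload are illustrative assumptions; an actual SQRC stores the restricted portion encrypted inside the code itself.

```python
# Hedged sketch: different roles see different subsets of the same invoice.
from typing import Dict

INVOICE = {
    "tracking_no": "1234-5678",
    "item_name": "wireless keyboard",
    "item_model_url": "https://example.com/models/keyboard.glb",  # AR model (assumed)
    "recipient_name": "Hong Gildong",
    "recipient_address": "Seoul (sample address)",
    "recipient_phone": "010-0000-0000",
}

# Which fields each role is allowed to see after scanning the same code.
ROLE_FIELDS = {
    "recipient": set(INVOICE.keys()),                           # everything
    "courier": {"tracking_no", "recipient_name", "recipient_address"},
    "unauthorized": set(),                                      # nothing
}

def view_invoice(payload: Dict[str, str], role: str) -> Dict[str, str]:
    """Return only the fields the given role may read."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in payload.items() if k in allowed}

print(view_invoice(INVOICE, "courier"))
print(view_invoice(INVOICE, "unauthorized"))  # -> {}
```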

Prefetching Techniques of Efficient Continuous Spatial Queries on Mobile AR (모바일 AR에서 효율적인 연속 공간 질의를 위한 프리패칭 기법)

  • Yang, Pyoung Woo;Jung, Yong Hee;Han, Jeong Hye;Lee, Yon Sik;Nam, Kwang Woo
    • Spatial Information Research
    • /
    • v.21 no.4
    • /
    • pp.83-89
    • /
    • 2013
  • Recently, various contents have been produced using techniques that require high-performance computing. Many such services combine AR (Augmented Reality) with mobile information services, in which a moving user searches for various information based on their current location. A mobile information service must acquire new information whenever the user moves to a new location, which requires a large amount of communication as the user searches for information while moving. To make up for this drawback, this paper proposes a prefetching technique based on speed and viewing angle. Existing prefetching techniques retrieve data for the user's next location by considering the user's moving speed and direction; however, the data displayed on screen in AR are limited by the viewing angle of the mobile device, so these techniques retrieve much more data than is actually needed. We propose a more efficient way of retrieving data for AR that uses the viewing angle of the mobile device. The proposed method reduces the retrieval of unnecessary locations by using the user's speed, direction, and viewing angle, and is therefore more efficient than existing retrieval approaches because fewer data need to be fetched.
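
A minimal sketch of the viewing-angle-aware prefetching idea described above: only points of interest inside a wedge ahead of the user (heading plus or minus half the device field of view, with a radius scaled by speed) are selected for prefetching. The parameter names and the radius formula are assumptions for illustration.

```python
# Hedged sketch: select POIs inside the sector the user is about to see.
import math
from typing import List, Tuple

def prefetch_candidates(pois: List[Tuple[float, float]],
                        user_xy: Tuple[float, float],
                        heading_deg: float,
                        speed_mps: float,
                        fov_deg: float = 60.0,
                        seconds_ahead: float = 10.0) -> List[Tuple[float, float]]:
    """Return POIs inside the wedge ahead of the user."""
    radius = speed_mps * seconds_ahead          # how far ahead to prefetch
    half_fov = fov_deg / 2.0
    ux, uy = user_xy
    selected = []
    for px, py in pois:
        dx, dy = px - ux, py - uy
        if math.hypot(dx, dy) > radius:
            continue                            # too far ahead to matter yet
        bearing = math.degrees(math.atan2(dy, dx))
        diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half_fov:
            selected.append((px, py))           # inside the viewing wedge
    return selected

pois = [(10.0, 0.0), (0.0, 10.0), (50.0, 5.0)]
print(prefetch_candidates(pois, (0.0, 0.0), heading_deg=0.0, speed_mps=1.5))
# -> [(10.0, 0.0)]  only the POI roughly ahead and within ~15 m
```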

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.686-689
    • /
    • 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth cutting algorithms. Second, a Convolutional Neural Network (CNN) classifier is used to distinguish the right hand from the left hand; it consists of three convolutional layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted depth images of the left and right hands. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. Test results show that the CNN classifier can distinguish the right hand from the left hand with 96.9% accuracy, and the CNN regressor can estimate 3D hand joint information with an average error of 8.48 mm. The proposed hand pose estimation system can be used in various application fields, including virtual reality (VR), augmented reality (AR), and mixed reality (MR) applications.
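
A hedged sketch of the left/right hand CNN classifier outlined in the abstract: three convolutional layers followed by two fully connected layers over a cropped depth image. The input resolution, channel counts, and layer details are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch: CNN that classifies a cropped depth image as left or right hand.
import torch
import torch.nn as nn

class HandSideClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 2),            # logits: left hand vs. right hand
        )

    def forward(self, depth_crop: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(depth_crop))

model = HandSideClassifier()
dummy = torch.randn(1, 1, 96, 96)    # one cropped depth image (assumed size)
print(model(dummy).shape)            # -> torch.Size([1, 2])
```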

Depth Image based Egocentric 3D Hand Pose Recognition for VR Using Mobile Deep Residual Network (모바일 Deep Residual Network을 이용한 뎁스 영상 기반 1 인칭 시점 VR 손동작 인식)

  • Park, Hye Min;Park, Na Hyeon;Oh, Ji Heon;Lee, Cheol Woo;Choi, Hyoung Woo;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.1137-1140
    • /
    • 2019
  • Human-computer interface technology is essential for virtual reality (VR), augmented reality (AR), and mixed reality (MR) applications. In particular, hand gesture recognition enables intuitive interaction and can serve as a convenient controller in many fields. In this study, we build a hand gesture database generation system for depth-image-based egocentric (first-person view) hand gesture recognition and capture the egocentric-view database needed to train the recognizer. We then implement a depth-image-based egocentric hand pose recognition (HPR) deep learning model using a Deep Residual Network for mobile head-mounted device (HMD) VR. Finally, the trained Residual Network regressor is ported to an Android mobile device, a real-time hand gesture recognition system is run on mobile VR, and real-time 3D hand gesture recognition on mobile VR is verified through interaction with virtual objects.
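
As an illustration of the kind of building block a mobile Deep Residual Network regressor might stack, the sketch below shows a lightweight residual block with a depthwise-separable convolution; channel sizes and structure are assumptions, not the paper's architecture.

```python
# Hedged sketch: a lightweight residual block suitable for mobile inference.
import torch
import torch.nn as nn

class MobileResidualBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            # depthwise conv keeps computation low on mobile hardware
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            # pointwise conv mixes channels
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))   # identity shortcut

block = MobileResidualBlock()
print(block(torch.randn(1, 32, 64, 64)).shape)  # -> torch.Size([1, 32, 64, 64])
```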

A Case Study on the Effectiveness of tDCS to Reduce Cyber-Sickness in Subjects with Dizziness

  • Chang Ju Kim;Yoon Tae Hwang;Yu Min Ko;Seong Ho Yun;Sang Seok Yeo
    • The Journal of Korean Physical Therapy
    • /
    • v.36 no.1
    • /
    • pp.39-44
    • /
    • 2024
  • Purpose: Cybersickness is a type of motion sickness induced by virtual reality (VR) or augmented reality (AR) environments that presents symptoms including nausea, dizziness, and headaches. This study aimed to investigate how cathodal transcranial direct current stimulation (tDCS) alleviates motion sickness symptoms and modulates brain activity in individuals experiencing cybersickness after exposure to a VR environment. Methods: This study was performed on two groups of healthy adults with cybersickness symptoms. Subjects were randomly assigned to receive either cathodal tDCS intervention or sham tDCS intervention. Brain activity during VR stimulation was measured by 38-channel functional near-infrared spectroscopy (fNIRS). tDCS was administered to the right temporoparietal junction (TPJ) for 20 minutes at an intensity of 2 mA, and the severity of cybersickness was assessed pre- and post-intervention using the simulator sickness questionnaire (SSQ). Results: Following the experiment, cybersickness symptoms in subjects who received cathodal tDCS intervention were reduced based on SSQ scores, whereas those who received sham tDCS showed no significant change. fNIRS analysis revealed that tDCS significantly diminished cortical activity in subjects with high activity in the temporal and parietal lobes, whereas high cortical activity was maintained in these regions after intervention in subjects who received sham tDCS. Conclusion: These findings suggest that cathodal tDCS applied to the right TPJ region in young adults experiencing cybersickness effectively reduces motion sickness induced by VR environments.