• Title/Summary/Keyword: 3D Location

User classification and location tracking algorithm using deep learning (딥러닝을 이용한 사용자 구분 및 위치추적 알고리즘)

  • Park, Jung-tak;Lee, Sol;Park, Byung-Seo;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.78-79
    • /
    • 2022
  • In this paper, we propose a technique for classifying multiple users and tracking each user's location through body-proportion analysis of normalized skeletons obtained using RGB-D cameras. To this end, each user's 3D skeleton is extracted from the 3D point cloud and its body-proportion information is stored. The stored body-proportion information is then compared against the body-proportion data computed from every frame, allowing each user to be identified and tracked throughout the entire image sequence.
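
A minimal sketch of the body-proportion matching idea, assuming a skeleton is given as a dict of named 3D joint positions; the joint names, bone pairs, and matching threshold below are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

# Bone pairs whose lengths form the proportion vector (assumed names).
BONES = [("neck", "pelvis"), ("l_shoulder", "l_elbow"),
         ("l_elbow", "l_wrist"), ("l_hip", "l_knee"), ("l_knee", "l_ankle")]

def proportion_vector(skeleton):
    """Bone lengths normalized by the torso length, so the vector is
    invariant to the user's distance from the RGB-D camera."""
    lengths = np.array([np.linalg.norm(np.asarray(skeleton[a]) -
                                       np.asarray(skeleton[b]))
                        for a, b in BONES])
    return lengths / lengths[0]          # torso (neck-pelvis) as reference

def classify(skeleton, stored_profiles, threshold=0.15):
    """Return the stored user id whose proportion vector is closest,
    or None if no stored profile is within the threshold."""
    v = proportion_vector(skeleton)
    best_id, best_d = None, np.inf
    for user_id, profile in stored_profiles.items():
        d = np.linalg.norm(v - profile)
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id if best_d < threshold else None
```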

ILOCAT: an Interactive GUI Toolkit to Acquire 3D Positions for Indoor Location Based Services (ILOCAT: 실내 위치 기반 서비스를 위한 3차원 위치 획득 인터랙티브 GUI Toolkit)

  • Kim, Seokhwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.7
    • /
    • pp.866-872
    • /
    • 2020
  • Indoor location-based services provide a service based on the distance between an object and a person. Recently, such services have often been implemented using inexpensive depth sensors such as Kinect. A depth sensor can measure the position of a person, but the position of an object must be acquired manually with a tool. Acquiring the 3D position of an object requires 3D interaction, which is difficult for a general user. A graphical user interface (GUI) is comparatively easy for a general user, but acquiring a 3D position through a GUI is hard. This study proposes the Interactive LOcation Context Authoring Toolkit (ILOCAT), which enables a general user to easily acquire the 3D position of an object in real space using a GUI. This paper describes the interaction design and implementation of ILOCAT.
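
As a rough illustration of the underlying geometry, the sketch below back-projects a clicked pixel plus its depth value into a camera-space 3D point using a pinhole model; the intrinsic parameters are placeholders, not ILOCAT's actual calibration.

```python
import numpy as np

def click_to_3d(u, v, depth_map, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project pixel (u, v) to a camera-space 3D point in meters,
    given a depth map aligned to the color image (assumed available)."""
    z = depth_map[v, u]                  # depth at the clicked pixel (meters)
    x = (u - cx) * z / fx                # pinhole model: x = (u - cx) z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```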

Decoupled Location Parameter Estimation of 3-D Near-Field Sources in a Uniform Circular Array using the Rank Reduction Algorithm

  • Jung, Tae-Jin;Kwon, Bum-Soo;Lee, Kyun-Kyung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • An algorithm is presented for estimating the 3-D location (i.e., azimuth angle, elevation angle, and range) of multiple sources with a uniform circular array (UCA) consisting of an even number of sensors. Recently, the rank reduction (RARE) algorithm was developed for partly calibrated sensor arrays consisting of several identically oriented and calibrated linear subarrays. Assuming that a UCA consists of M sensors, it can be divided into M/2 identical linear subarrays, each composed of two facing sensors. Based on this subarray structure, the steering vectors are decomposed into two parts: one depending only on the range-independent 2-D direction-of-arrival (DOA) parameters, and one depending on the range-relevant 3-D location parameters. Using this property, the range-independent 2-D DOAs can be estimated with the RARE algorithm. Once the 2-D DOAs are available, the range of each source can be estimated by defining a 1-D MUSIC spectrum. Despite its low computational complexity, the proposed algorithm provides estimation performance almost comparable to that of the 3-D MUSIC benchmark estimator.
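
A minimal sketch of the final 1-D MUSIC range search, assuming the 2-D DOA has already been fixed by the RARE step; the exact near-field steering model, array geometry, and search grid here are generic assumptions, not the paper's derivation.

```python
import numpy as np

def steering_vector(sensor_pos, az, el, rng, wavelength):
    """Exact near-field steering vector for a source at (az, el, range),
    given sensor positions as an (M, 3) array."""
    src = rng * np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
    dists = np.linalg.norm(sensor_pos - src, axis=1)
    return np.exp(-2j * np.pi * (dists - rng) / wavelength)

def music_range_spectrum(snapshots, sensor_pos, az, el, wavelength,
                         ranges, n_sources=1):
    """1-D MUSIC pseudo-spectrum over candidate ranges for a known DOA.
    snapshots: (M, N) complex array of N array snapshots."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)                              # ascending order
    En = vecs[:, :-n_sources]                                # noise subspace
    spec = []
    for r in ranges:
        a = steering_vector(sensor_pos, az, el, r, wavelength)
        spec.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)                # the peak gives the range estimate
```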

Semantic Segmentation of Urban Scenes Using Location Prior Information (사전위치정보를 이용한 도심 영상의 의미론적 분할)

  • Wang, Jeonghyeon;Kim, Jinwhan
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.249-257
    • /
    • 2017
  • This paper proposes a method to segment urban scenes semantically based on location prior information. Since major scene elements in urban environments, such as roads, buildings, and vehicles, tend to appear at specific locations, using the location prior information of these elements can improve segmentation performance. The location priors are defined in special 2D coordinates, referred to as road-normal coordinates, which are perpendicular to the orientation of the road. With the help of depth information for each element, all candidate pixels in the image are projected into these coordinates and the learned prior information is applied to them. The proposed location prior can be modeled by defining the unary potential of a conditional random field (CRF) as the sum of two sub-potentials: an appearance feature-based potential and a location potential. The proposed method was validated on the publicly available KITTI dataset, which provides urban images and corresponding 3D depth measurements.
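
A minimal sketch of the combined unary potential described above, assuming per-pixel class probabilities are available from an appearance classifier and from the learned location prior in road-normal coordinates; the array shapes and the mixing weight are illustrative assumptions.

```python
import numpy as np

def unary_potential(p_appearance, p_location, w=0.5, eps=1e-8):
    """Per-pixel, per-class CRF unary as a sum of two sub-potentials:
    psi(x) = -log p_app(x) - w * log p_loc(x).
    Both inputs are (H, W, n_classes) probability arrays."""
    return -np.log(p_appearance + eps) - w * np.log(p_location + eps)

# labeling = np.argmin(unary_potential(p_app, p_loc), axis=-1) would give the
# per-pixel MAP labeling before any pairwise CRF smoothing is applied.
```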

GPS-based 3D View Augmented Reality System for Smart Mobile Devices

  • Vo, Phuc;Choi, Chang Yeol
    • International Journal of Contents
    • /
    • v.9 no.1
    • /
    • pp.18-25
    • /
    • 2013
  • Recently, augmented reality has proved immensely useful in day-to-day applications when tied to location-based technology. In this paper, we present a new method for displaying augmented reality content on mobile devices. We add 3D models to the camera view and use location-based services and motion sensors to calculate the transformation of the models. Instead of remaining at a fixed position in the camera view while the user moves around a 3D model, the model rotates on the display in the direction opposite to the one the user is walking in. We also design the client as a ubiquitous client to reduce constraints on disk space and memory capacity on mobile devices. Implementation results show the method's effectiveness for creating GPS-based 3D-view augmented reality content for smart mobile devices.
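
A minimal sketch of the counter-rotation idea: the model's yaw is set opposite to the user's walking bearing, estimated from two consecutive GPS fixes. The great-circle bearing formula is standard, but its use here as the paper's exact transformation is an assumption.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Walking bearing in degrees from north between two GPS fixes."""
    d_lon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    y = math.sin(d_lon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2) -
         math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    return math.degrees(math.atan2(y, x)) % 360.0

def model_yaw(prev_fix, cur_fix):
    """Rotate the 3D model opposite to the user's direction of travel."""
    return -bearing_deg(*prev_fix, *cur_fix)

# Example: model_yaw((37.5665, 126.9780), (37.5666, 126.9781))
```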

Four Anchor Sensor Nodes Based Localization Algorithm over Three-Dimensional Space

  • Seo, Hwajeong;Kim, Howon
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.4
    • /
    • pp.349-358
    • /
    • 2012
  • Over a wireless sensor network (WSN), accurate localization of sensor nodes is an important factor in enhancing the association between location information and sensory data. Many studies have addressed localization algorithms for three-dimensional (3D) space. Recently, the complexity-reduced 3D trilateration localization approach (COLA), which simplifies the 3D computation to 2D trilateration, was proposed. The method provides reasonable positional accuracy, but at a high computational cost. For practical applications on resource-constrained devices, it is necessary to strike a balance between accuracy and computational cost. In this paper, we present a novel 3D localization method based on the received signal strength indicator (RSSI) values of four anchor nodes, which are deployed in the initial setup process. This method provides accurate location estimates with a reduced computational cost and a smaller number of anchor nodes.
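
A minimal sketch of the four-anchor idea: RSSI values are mapped to distances with a log-distance path-loss model, and the 3D position is then solved by linearized multilateration. The path-loss constants are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: tx_power is the expected RSSI (dBm)
    at 1 m and n is the path-loss exponent (both assumed here)."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def locate_3d(anchors, rssi_values):
    """Linearized multilateration: subtracting the first sphere equation
    ||x - a_i||^2 = d_i^2 from the rest leaves a 3x3 linear system."""
    a = np.asarray(anchors, dtype=float)          # shape (4, 3)
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    A = 2.0 * (a[1:] - a[0])
    b = (d[0]**2 - d[1:]**2) + np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares 3D estimate

# Example: anchors at the corners of a 10 m x 10 m x 3 m room.
# locate_3d([(0,0,0), (10,0,0), (0,10,0), (0,0,3)], [-60, -65, -63, -55])
```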

Extraction of location of 3-D object from CIIR method based on blur effect of reconstructed POI

  • Park, Seok-Chan;Kim, Seung-Cheol;Kim, Eun-Soo
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1363-1366
    • /
    • 2009
  • A new recognition method is used to find the location of a three-dimensional target object in integral imaging. To find the location of the target image, a number of reconstructed reference images are needed. The method obtains accurate information about the target image through correlation between the reconstructed target images and the reference images.
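
As a rough stand-in for the correlation step, the sketch below locates a target by sliding a reference patch over a reconstructed plane image with normalized cross-correlation; this generic matcher is an assumption, not the paper's CIIR-specific procedure.

```python
import numpy as np

def ncc_peak(image, template):
    """Return the (row, col) where the reference template correlates
    most strongly with the reconstructed image (brute-force NCC)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r+th, c:c+tw]
            score = np.sum(t * (w - w.mean()) / (w.std() + 1e-8))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```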

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusions, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D locations of the joints, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of the estimation and its feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
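
A minimal sketch of fusing per-camera 2D joint detections into one 3D joint via confidence-weighted linear triangulation (DLT), a least-squares stand-in for the paper's maximum-likelihood fusion; the projection matrices and confidence weights are assumed inputs.

```python
import numpy as np

def triangulate_joint(proj_mats, points_2d, confidences):
    """Each camera i with 3x4 projection matrix P contributes two DLT rows,
    x*P[2] - P[0] and y*P[2] - P[1], weighted by detection confidence.
    The 3D joint is the null vector of the stacked system."""
    rows = []
    for P, (x, y), w in zip(proj_mats, points_2d, confidences):
        rows.append(w * (x * P[2] - P[0]))
        rows.append(w * (y * P[2] - P[1]))
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                            # right singular vector, min sing. value
    return X[:3] / X[3]                   # homogeneous -> Euclidean 3D joint
```

Under isotropic Gaussian noise on the 2D detections, weighting rows by confidence approximates the likelihood-maximizing fusion the abstract describes.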

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.17 no.4
    • /
    • pp.255-260
    • /
    • 2019
  • This study concerns the efficient compositing of 360° video and 3D graphics. First, video footage filmed with a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then imported into a 3D program to create 3D objects, and methods for natural compositing were investigated. As a result, rendering factors and a rendering method for naturally compositing 360° video and 3D graphics were derived. The rendering factors were the 3D objects' location, material quality, lighting, and shadow; as for the rendering method, the necessity of a rendering approach based on the actual video was identified. The process and results of this study are expected to be helpful for the research and production of 360° video and VR video content.

Active Focusing Technique for Extracting Depth Information (액티브 포커싱을 이용한 3차원 물체의 깊이 계측)

  • 이용수;박종훈;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.2
    • /
    • pp.40-49
    • /
    • 1992
  • In this paper, a new approach is proposed that uses the linear movement of the lens location in a camera, and the focal distance at each location, to measure the depth of 3-D objects from several 2-D images. Sharply focused edges are extracted from images obtained by moving the camera lens, that is, by changing the distance between the lens and the image plane, within the range allowed by the lens system. The depth information of the edges is then obtained from the lens location. Our method requires neither an accurate, complicated camera control system nor a special algorithm for tracing the exact focus point, and it has the advantage that the depth of all objects in a scene is measured by nothing more than linear movement of the camera's lens location. The accuracy of the extracted depth information is approximately 5% of the object distance for distances between 1 and 2 m. These results show the method's potential for measuring the depth of 3-D objects.
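
A minimal sketch of the depth-from-focus idea: each lens position is scored for sharpness, and the thin-lens equation converts the sharpest lens-to-sensor distance into object depth. The focal length and the Laplacian-variance focus measure are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def focus_measure(patch):
    """Variance of a discrete Laplacian: high where edges are in focus."""
    lap = (patch[:-2, 1:-1] + patch[2:, 1:-1] +
           patch[1:-1, :-2] + patch[1:-1, 2:] - 4 * patch[1:-1, 1:-1])
    return lap.var()

def depth_from_focus(patches, sensor_dists, focal_len=0.05):
    """Pick the lens position giving the sharpest image of an edge patch
    and invert the thin-lens equation 1/f = 1/u + 1/v for object depth u.
    patches: one image patch per lens position; sensor_dists: matching
    lens-to-image-plane distances in meters (must exceed focal_len)."""
    scores = [focus_measure(p) for p in patches]
    v = sensor_dists[int(np.argmax(scores))]   # sharpest lens-sensor distance
    return focal_len * v / (v - focal_len)     # u = f v / (v - f), meters
```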
