• Title/Abstract/Keyword: object-based 3-D model

Search results: 332 items

A Sketch-based 3D Object Retrieval Approach for Augmented Reality Models Using Deep Learning

  • 지명근;전준철
    • 인터넷정보학회논문지 / Vol. 21, No. 1 / pp.33-43 / 2020
  • Retrieving a 3D model from a 3D database and simultaneously augmenting the retrieved model in an Augmented Reality (AR) system has become an important issue in building plausible AR environments in a convenient fashion. Sketch-based 3D object retrieval is considered an intuitive way of searching for 3D objects, using human-drawn sketches as queries. In this paper, we propose a novel deep-learning-based approach for retrieving a sketch-queried 3D object as an Augmented Reality model. For this work, we introduce a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for sketch-based 3D object retrieval. In particular, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between that center and the features of the same category. The proposed 3D object retrieval and augmentation consists of three major steps. First, the Wasserstein CNN extracts features from 2D images taken from various directions of a 3D object and obtains the features of the 3D data by computing the Wasserstein barycenters of the per-image features. Second, the features of the sketch are extracted using a separate Sketch CNN. Finally, we adopt a sketch-based object matching method to localize the natural marker in the images and register a 3D virtual object in the AR system. Using the detected marker, the retrieved 3D virtual object is augmented in the AR system automatically. Experiments show that the proposed method is efficient for retrieving and augmenting objects.
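
As a rough illustration of the center-loss component above, here is a minimal sketch in PyTorch; squared Euclidean distance stands in for the Wasserstein distance used in the paper, and the class name, shapes, and usage note are assumptions rather than the authors' code.

```python
# Minimal sketch of a center-loss-style objective as described above.
# The paper uses a Wasserstein distance between features and class centers;
# squared Euclidean distance stands in for it here to keep the sketch short.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per object category.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Pull each feature toward the center of its own category.
        batch_centers = self.centers[labels]            # (B, feat_dim)
        return ((features - batch_centers) ** 2).sum(dim=1).mean()

# Usage (hypothetical shapes): sketch features from a Sketch CNN and
# view-aggregated 3D features from a Wasserstein CNN can share this loss so
# that both modalities cluster around the same per-category centers.
```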

AN AUTOMATED FORMWORK MODELING SYSTEM DEVELOPMENT FOR QUANTITY TAKE-OFF BASED ON BIM

  • Seong-Ah Kim;Sangyoon Chin;Su-Won Yoon;Tae-Hong Shin;Yea-Sang Kim;Cheolho Choi
    • 국제학술발표논문집 / The 3rd International Conference on Construction Engineering and Project Management / pp.1113-1116 / 2009
  • The attempt to use a 3D model in each field of a construction project, such as design, structural engineering, construction, facilities, and estimation, has recently increased as BIM (Building Information Modeling), which manages the process of generating and managing building data over the life cycle of a construction project, has risen. While existing construction projects carry out the work of each field based on 2D drawings, a BIM-based construction project aims to accomplish the 3D-model-based work of each field efficiently. Accordingly, solutions that fit the 3D-model-based work of each field and support its planning are in demand. Estimation, one of these fields, has applied BIM to calculate the quantity and cost of building materials by taking off quantity information from the 3D model item by item, grouping the materials relevant to each 3D object. 3D-based estimation programs are commonly used in countries where BIM is advanced, but such programs can only calculate quantities related to a single 3D object; in other words, they do not support a take-off process that considers the quantities of contiguous objects. In the case of temporary materials used in frame construction, quantities can differ depending on the contiguous object. For example, formwork quantity changes with the dimensions of the contiguous object, because formwork goes through a take-off process that deducts quantity where different objects are connected. A worker can manually adjust the quantity by recognizing the objects connected to a contiguous object and deducting accordingly, but this mainly causes confusion because the quantities of other materials related to the object must be considered at the same time. Therefore, this study proposes a solution that automates formwork 3D modeling so that formwork quantity take-off can be accomplished efficiently, preventing the confusion caused by the quantity-deduction process between contiguous and connected objects.
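
To make the deduction problem concrete, the following is a hypothetical sketch (not from the paper) of how the formwork area of a single wall might be taken off by subtracting the faces shared with contiguous objects; the object type, dimensions, and deduction rule are illustrative assumptions.

```python
# Hypothetical sketch of formwork quantity take-off with contact deduction.
# A wall's formwork area is its formed surface area minus the areas where
# contiguous objects (slabs, columns, other walls) touch it.
from dataclasses import dataclass

@dataclass
class WallObject:
    length: float   # m
    height: float   # m

def formwork_area(wall: WallObject, contact_areas: list[float]) -> float:
    # Both faces of the wall are formed; end faces are ignored in this sketch.
    gross = 2.0 * wall.length * wall.height
    deduction = sum(contact_areas)     # areas shared with connected objects
    return max(gross - deduction, 0.0)

# Example: a 6 m x 3 m wall with a 6 m x 0.2 m slab edge bearing on one face.
print(formwork_area(WallObject(6.0, 3.0), [6.0 * 0.2]))  # 34.8 m^2
```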


PSC 박스 거더의 Recycle-Design을 고려한 3차원 객체 모델 구현 (Implementation of 3D Object Model considering Recycle-Design of PSC Box Girder)

  • 조성훈;박재근;이헌민;신현목
    • 한국전산구조공학회논문집 / Vol. 23, No. 3 / pp.325-330 / 2010
  • In the current civil engineering design field, the use of BIM (Building Information Modeling)-based 3D object models is still minimal. In this paper, a BIM-based 3D object model is constructed for a PSC box girder, the superstructure of a railway bridge; the basic building block of the model is the part model. The part model is the smallest of the various unit models, and it has a hierarchical structure that reflects the characteristics of the structure being designed. A 3D object model must be able to reflect the designer's intent for design changes quickly, because repeated design changes can occur during the actual design process. To this end, the design variables are separated out as parameters, and because these parameters are linked to the information of the 3D object model, design changes can be handled quickly. In this study we examine the benefits that can be obtained by applying 3D object models to civil engineering design, and we propose an efficient way to apply the 3D object model of a PSC box girder.
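
A minimal sketch of the parameter-driven part model described above: design variables are held as parameters, and derived quantities are recomputed whenever a parameter changes. The class, attribute names, and geometric relations are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of a parameter-driven part model: changing a design
# parameter regenerates the derived quantities of the part automatically.
from dataclasses import dataclass

@dataclass
class BoxGirderPart:
    span_length: float      # m, design parameter
    girder_depth: float     # m, design parameter
    web_thickness: float    # m, design parameter

    def cross_section_area(self) -> float:
        # Placeholder relation standing in for the real section geometry.
        return 2.0 * self.girder_depth * self.web_thickness

    def concrete_volume(self) -> float:
        # Derived quantity updates as soon as a parameter changes.
        return self.cross_section_area() * self.span_length

part = BoxGirderPart(span_length=40.0, girder_depth=2.5, web_thickness=0.4)
print(part.concrete_volume())   # 80.0
part.girder_depth = 3.0         # a design change propagates to derived values
print(part.concrete_volume())   # 96.0
```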

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science / Vol. 23, No. 1 / pp.17.1-17.10 / 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects in farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking; consistent identification after crossing over was taken as evidence of object tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully processed the footage and provided a 3D bounding box, an identification number, and the distance from the camera for each individual human. The accurate detection model showed stronger evidence of 3D object tracking than the fast detection model and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train the AI models on an appropriate farm is required for 3D object detection to support object tracking of pigs at an ideal level. This will allow farms to smoothly transition from traditional methods to ASF-preventing precision livestock farming.
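
The tracking-after-crossover idea can be illustrated with a minimal, hypothetical sketch (not the camera vendor's API or the authors' code): detections in consecutive frames are linked to existing identities by nearest 3D centroid, and a new identity is created only when no close match exists.

```python
# Hypothetical sketch: keep identification numbers consistent across frames
# by matching each new 3D detection to the nearest previously tracked centroid.
import math

def link_ids(tracks: dict[int, tuple[float, float, float]],
             detections: list[tuple[float, float, float]],
             max_dist: float = 0.5) -> dict[int, tuple[float, float, float]]:
    updated, used = {}, set()
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        # Nearest existing track not yet claimed in this frame.
        best = min((tid for tid in tracks if tid not in used),
                   key=lambda tid: math.dist(tracks[tid], det), default=None)
        if best is not None and math.dist(tracks[best], det) <= max_dist:
            updated[best] = det
            used.add(best)
        else:                       # unmatched detection starts a new identity
            updated[next_id] = det
            next_id += 1
    return updated

tracks = {0: (0.0, 0.0, 2.0), 1: (1.0, 0.0, 3.0)}
print(link_ids(tracks, [(1.1, 0.0, 3.0), (0.05, 0.0, 2.1)]))  # IDs 1 and 0 persist
```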

Octree 모델에 근거한 고속 3차원 물체 인식 (Octree model based fast three-dimensional object recognition)

  • 이영재;박영태
    • 전자공학회논문지C / Vol. 34C, No. 9 / pp.84-101 / 1997
  • Inferring and recognizing 3D objects from a 2D occluded image has been an important research area of computer vision. The octree model, a hierarchical volume description of 3D objects, may be utilized to generate projected images from arbitrary viewing directions, thereby providing an efficient database for 3D object recognition. We present a fast algorithm for finding the 4 pairs of feature points needed to estimate the viewing direction. The method is based on matching the object contour to the reference occluded shapes of 49 viewing directions. The initially best-matched viewing direction is calibrated by searching for the 4 pairs of feature points between the input image and the image projected along the estimated viewing direction. Then the input shape is recognized by matching it to the projected shape. The computational complexity of the proposed method is shown to be O(n²) in the worst case, whereas that of the simple combinatorial method is O(m⁴n⁴), where m and n denote the number of feature points of the 3D model object and the 2D object, respectively.
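
A minimal sketch of the coarse matching step described above, under the assumption that reference contours have been pre-rendered from the octree model for the 49 viewing directions; the distance measure and the render_contour helper are illustrative, not the paper's.

```python
# Hypothetical sketch of the coarse step: score the input contour against
# reference contours pre-rendered from 49 viewing directions and keep the
# best match as the initial viewing-direction estimate.
import numpy as np

def contour_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Both contours resampled to the same number of points (N, 2);
    # a simple mean point-to-point distance stands in for the paper's measure.
    return float(np.linalg.norm(a - b, axis=1).mean())

def coarse_viewing_direction(input_contour: np.ndarray,
                             reference_contours: list[np.ndarray]) -> int:
    scores = [contour_distance(input_contour, ref) for ref in reference_contours]
    return int(np.argmin(scores))   # index of the best of the 49 directions

# refs = [render_contour(octree_model, d) for d in directions_49]  # assumed helper
# best = coarse_viewing_direction(observed_contour, refs)
```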


LSG:모델 기반 3차원 물체 인식을 위한 정형화된 국부적인 특징 구조 (LSG;(Local Surface Group); A Generalized Local Feature Structure for Model-Based 3D Object Recognition)

  • 이준호
    • 정보처리학회논문지B / Vol. 8B, No. 5 / pp.573-578 / 2001
  • This research proposes a generalized local feature structure named LSG (Local Surface Group) for model-based 3D object recognition. An LSG consists of a surface and its immediately adjacent surfaces that are simultaneously visible from a given viewpoint. That is, an LSG is not a simple feature but a viewpoint-dependent feature structure that contains several attributes such as surface type, color, area, radius, and simultaneously adjacent surfaces. In addition, we have developed a new method based on Bayesian theory that computes a measure of how distinct an LSG is compared to other LSGs for the purpose of object recognition. We have experimented with the proposed methods on an object database composed of twenty 3D objects. The experimental results show that the LSG and the Bayesian computing method can be successfully employed to achieve rapid 3D object recognition.
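
As a rough illustration of the Bayesian distinctiveness idea, here is a minimal sketch that scores how strongly an observed LSG points to a single model object; the likelihood values, priors, and naming are assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a Bayes-style distinctiveness measure: an observed
# LSG is distinctive when its posterior mass concentrates on a single model.
def distinctiveness(likelihoods: dict[str, float],
                    priors: dict[str, float]) -> tuple[str, float]:
    # Posterior P(model | LSG) is proportional to P(LSG | model) * P(model).
    joint = {m: likelihoods[m] * priors[m] for m in likelihoods}
    total = sum(joint.values())
    posterior = {m: v / total for m, v in joint.items()}
    best = max(posterior, key=posterior.get)
    return best, posterior[best]    # the higher the posterior, the more distinctive

# Twenty models with equal priors; a highly distinctive LSG for "obj07":
priors = {f"obj{i:02d}": 1 / 20 for i in range(20)}
likelihoods = {m: (0.9 if m == "obj07" else 0.01) for m in priors}
print(distinctiveness(likelihoods, priors))   # ('obj07', ~0.83)
```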


다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정 (2D-3D Pose Estimation using Multi-view Object Co-segmentation)

  • 김성흠;복윤수;권인소
    • 로봇학회논문지 / Vol. 12, No. 1 / pp.33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: Multi-view object co-segmentation and pose estimation. In the first phase, we explain an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by the convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space, and has distinctive color models from those of the backgrounds. In the second phase, we retrieve a 3D model instance with correct upright orientation, and estimate a relative pose of the object observed from images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlapping regions and boundaries between the multi-view co-segmentations and projected masks of the reference model. Based on high-quality co-segmentations consistent across all different viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
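
A minimal sketch of the pose-scoring idea, keeping only the region term (the boundary term is omitted for brevity); the `project` helper and the candidate pose set are assumptions for illustration.

```python
# Hypothetical sketch of pose scoring: a candidate pose is scored by how well
# the reference model's projected masks overlap the co-segmentation masks
# across all views (region term only).
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def score_pose(coseg_masks: list[np.ndarray],
               projected_masks: list[np.ndarray]) -> float:
    # Average region overlap over all viewpoints for one candidate pose.
    return float(np.mean([iou(c, p) for c, p in zip(coseg_masks, projected_masks)]))

# best_pose = max(candidate_poses,
#                 key=lambda pose: score_pose(coseg, project(model, pose, cameras)))
# `project`, `candidate_poses`, and the mask lists are assumed inputs.
```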

거리 기반 적응형 임계값을 활용한 강건한 3차원 물체 탐지 (Robust 3D Object Detection through Distance based Adaptive Thresholding)

  • 이은호;정민우;김종호;이경수;김아영
    • 로봇학회논문지 / Vol. 19, No. 1 / pp.106-116 / 2024
  • Ensuring robust 3D object detection is a core challenge for autonomous driving systems operating in urban environments. To tackle this issue, various 3D representations, including point clouds, voxels, and pillars, have been widely adopted, making use of LiDAR, camera, and radar sensors. These representations have improved 3D object detection performance, but real-world urban scenarios with unexpected situations can still lead to numerous false positives, posing a challenge for robust 3D models. This paper presents a post-processing algorithm that dynamically adjusts object detection thresholds based on the distance from the ego-vehicle. While conventional perception algorithms typically employ a single threshold in post-processing, 3D models perform well in detecting nearby objects but may exhibit suboptimal performance for distant ones. The proposed algorithm tackles this issue by employing adaptive thresholds based on the distance from the ego-vehicle, minimizing false negatives and reducing false positives in the 3D model. The results show performance enhancements of the 3D model across a range of scenarios, encompassing not only typical urban road conditions but also scenarios involving adverse weather.
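
A minimal sketch of the distance-based adaptive thresholding idea, assuming a simple linear interpolation between a near and a far threshold; the specific distances, threshold values, and detection format are illustrative, not the paper's tuned parameters.

```python
# Minimal sketch of distance-based adaptive thresholding: detections far from
# the ego-vehicle are kept at a lower confidence threshold than nearby ones,
# instead of applying one global cut-off.
import math

def adaptive_threshold(distance_m: float,
                       near_thr: float = 0.6, far_thr: float = 0.3,
                       near_d: float = 10.0, far_d: float = 50.0) -> float:
    # Linearly interpolate the threshold between the near and far ranges.
    if distance_m <= near_d:
        return near_thr
    if distance_m >= far_d:
        return far_thr
    t = (distance_m - near_d) / (far_d - near_d)
    return near_thr + t * (far_thr - near_thr)

def filter_detections(dets: list[dict]) -> list[dict]:
    # Each detection: {"score": float, "xyz": (x, y, z)} in the ego frame.
    kept = []
    for d in dets:
        dist = math.hypot(*d["xyz"][:2])          # planar distance to ego-vehicle
        if d["score"] >= adaptive_threshold(dist):
            kept.append(d)
    return kept

print(filter_detections([{"score": 0.45, "xyz": (40.0, 3.0, 0.0)},   # kept (far)
                         {"score": 0.45, "xyz": (5.0, 1.0, 0.0)}]))  # dropped (near)
```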

디지털 트윈 구현을 위한 3차원 객체(건물) 갱신 및 구축 방안 연구 (Study on 3D Object (Building) Update and Construction Method for Digital Twin Implementation)

  • 곽병용;강병주
    • 산업경영시스템학회지 / Vol. 44, No. 4 / pp.186-192 / 2021
  • Recently, the demand for more precise and demand-oriented customized spatial information has been increasing due to the 4th industrial revolution. In particular, the use of 3D spatial information and of digital twins based on it, as well as research on solving urban social problems with such information, is being conducted continuously. Globally, non-face-to-face services are increasing due to COVID-19, and national policy is also moving rapidly toward digital transformation through the digitization and virtualization of the Korean version of the New Deal, which makes 3D spatial information an important supporting factor. In this study, the physical urban objects defined by international organizations such as ISO, OGC, and ITU were reviewed, and the target of the 3D object model was limited to buildings. Based on CityGML 2.0, data collected with a drone, which is suitable for building a 3D model of a small area, were selected and updated using the road name address and the building ledger, the related administrative information, and LoD2.5 data were constructed. In this way, the study suggests an update method for 3D building objects within urban spatial data.
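
A hypothetical sketch (not the paper's workflow) of how buildings needing update might be flagged by joining the existing model's metadata with building-ledger records on the road name address; the record fields and the change rule are assumptions.

```python
# Hypothetical sketch: flag which 3D building objects need updating by
# comparing model metadata against building-ledger records keyed by
# road name address.
from dataclasses import dataclass

@dataclass
class BuildingRecord:
    address: str          # road name address used as the join key
    approval_date: str    # e.g. latest use-approval date in the ledger

def buildings_to_update(model: dict[str, str],
                        ledger: list[BuildingRecord]) -> list[str]:
    # model: address -> approval date currently stored with the LoD2.5 object
    flagged = []
    for rec in ledger:
        if rec.address not in model:                    # new building: construct object
            flagged.append(rec.address)
        elif model[rec.address] != rec.approval_date:   # changed: rebuild object
            flagged.append(rec.address)
    return flagged

model = {"Sejong-daero 110": "2015-03-01"}
ledger = [BuildingRecord("Sejong-daero 110", "2021-06-15"),
          BuildingRecord("Teheran-ro 521", "2020-11-02")]
print(buildings_to_update(model, ledger))   # both addresses flagged
```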

모델 기반 카메라 추적에서 3차원 객체 모델링의 허용 오차 범위 분석 (Tolerance Analysis on 3-D Object Modeling Errors in Model-Based Camera Tracking)

  • 이은주;서병국;박종일
    • 방송공학회논문지 / Vol. 18, No. 1 / pp.1-9 / 2013
  • In model-based camera tracking, the accuracy of the 3D object model used for tracking is very important. However, measuring and modeling a real 3D object generally requires elaborate work, and it is very difficult to model it without error. On the other hand, even when a 3D object model containing errors is used, the tracking error computed from the modeling error and the tracking error perceived by the user's eyes may differ. This matters because object modeling for tracking can be performed effectively within the error range tolerated by users, without requiring a costly, highly precise modeling process. Therefore, in this paper we compare and analyze, through a user study, the actual registration error caused by modeling errors and the registration error perceived visually by users in model-based camera tracking, and we discuss the tolerance range of 3D object modeling errors.
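
To illustrate what "registration error computed from the modeling error" means, here is a minimal sketch that projects true and erroneously modelled 3D points through the same pinhole camera and reports the mean pixel offset; the intrinsics and point values are assumptions.

```python
# Hypothetical sketch: the registration error induced by a modeling error is
# the pixel offset between projections of the true and the modelled 3D points
# through the same pinhole camera.
import numpy as np

def project(points: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    # points: (N, 3) in camera coordinates, Z > 0.
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1)

def registration_error(true_pts: np.ndarray, modelled_pts: np.ndarray) -> float:
    K = dict(fx=800.0, fy=800.0, cx=640.0, cy=360.0)   # assumed intrinsics
    return float(np.linalg.norm(project(true_pts, **K) -
                                project(modelled_pts, **K), axis=1).mean())

true_pts = np.array([[0.10, 0.05, 2.0], [0.30, -0.10, 2.5]])
modelled = true_pts + np.array([[0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])  # 1 cm error
print(registration_error(true_pts, modelled))   # mean pixel offset (~3.6 px)
```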