• Title/Summary/Keyword: 3D Image Information


Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm was composed of three stages. In the first stage, the camera calibration, which determines a coordinate transformation between the image plane and the real 3-D world, was performed using six known pairs of points. In the second stage, the height of the object was computed by utilizing the shift of the projected laser beam on the object. Finally, using the height information of the 2-D image point, the corresponding 3-D information was computed from the results of the camera calibration. For arbitrary geometric objects, the maximum error of the 3-D features extracted with the proposed algorithm was within 1~2 mm. The results showed that the proposed algorithm was accurate for 3-D geometric feature detection of an object (the height-from-shift step is sketched after this entry).

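The second-stage height computation can be illustrated with a minimal triangulation sketch. The geometry assumed below (downward-looking camera, laser sheet tilted by a known angle, fixed pixel-to-millimetre scale) and all numbers are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def height_from_shift(shift_px, px_to_mm, laser_angle_deg):
    """Assumed geometry: the camera looks straight down at a reference plane and
    the laser sheet is tilted by laser_angle_deg from the optical axis, so a
    point raised by height h shifts the laser line by h * tan(angle)."""
    shift_mm = shift_px * px_to_mm                       # shift on the reference plane
    return shift_mm / np.tan(np.radians(laser_angle_deg))

# Example: a 12-pixel line shift at 0.25 mm/pixel with a 30-degree laser tilt
print(f"estimated height: {height_from_shift(12, 0.25, 30):.2f} mm")
```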

The Development of Authoring Tool for 3D Virtual Space Based on a Virtual Space Map (가상공간지도 기반의 3차원 가상공간 저작도구의 개발)

  • Jung, Il-Hong;Kim, Eun-Ji
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.2 s.40
    • /
    • pp.177-186
    • /
    • 2006
  • This paper presents the development of a highly efficient authoring tool for constructing a realistic 3D virtual space with image-based rendering techniques, based on a virtual space map. Unlike conventional techniques such as TIP, which construct a small 3D virtual space from a single image, the authoring tool developed herein produces a wide 3D virtual space from multiple images. The tool constructs a small 3D virtual space for each input image and interconnects these spaces into a wide 3D virtual space using a virtual space map. The map consists of three elements, namely room, link point, and passageway, together with three directions, and it holds information such as the connection structure and navigation information (a data-structure sketch follows this entry). The tool also provides a user interface that lets users construct the wide 3D virtual space easily.

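A virtual space map of the kind described, with rooms, link points, passageways, and connection/navigation information, can be modeled as a small graph. The class and field names below are hypothetical, chosen only to illustrate the structure, and are not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Room:                       # a small 3D virtual space built from one image
    name: str
    image_path: str
    link_points: dict = field(default_factory=dict)   # direction -> passageway id

@dataclass
class Passageway:                 # connects two rooms through their link points
    room_a: str
    room_b: str

class VirtualSpaceMap:
    """Hypothetical container for the connection and navigation structure."""
    def __init__(self):
        self.rooms = {}
        self.passageways = {}

    def connect(self, pid, room_a, dir_a, room_b, dir_b):
        self.passageways[pid] = Passageway(room_a, room_b)
        self.rooms[room_a].link_points[dir_a] = pid
        self.rooms[room_b].link_points[dir_b] = pid

    def neighbor(self, room, direction):
        """Navigation: the room reached by leaving `room` toward `direction`."""
        pid = self.rooms[room].link_points.get(direction)
        if pid is None:
            return None
        p = self.passageways[pid]
        return p.room_b if p.room_a == room else p.room_a

# Usage: two rooms joined by one passageway
vsm = VirtualSpaceMap()
vsm.rooms["lobby"] = Room("lobby", "lobby.jpg")
vsm.rooms["hall"] = Room("hall", "hall.jpg")
vsm.connect("p1", "lobby", "north", "hall", "south")
print(vsm.neighbor("lobby", "north"))   # -> hall
```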

Extraction of location of 3-D object from CIIR method based on blur effect of reconstructed POI

  • Park, Seok-Chan;Kim, Seung-Cheol;Kim, Eun-Soo
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1363-1366
    • /
    • 2009
  • A new recognition method is proposed to find a three-dimensional target object in integral imaging. To find the location of the target image, a number of reconstructed reference images are needed. The method provides accurate location information for the target image by correlating the reconstructed target images with the reconstructed reference images, as sketched after this entry.

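The core idea, picking the depth at which reconstructed target and reference plane images correlate best, can be sketched as follows. The reconstruction step itself is omitted, and the normalized cross-correlation score and toy arrays are illustrative assumptions only.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def locate_depth(target_planes, reference_planes):
    """Pick the reconstruction depth at which the target and reference planes
    correlate best.  Both arguments are {depth_mm: 2-D array} dictionaries
    assumed to come from a CIIR step that is not shown here."""
    scores = {z: ncc(target_planes[z], reference_planes[z]) for z in target_planes}
    return max(scores, key=scores.get), scores

# Toy stand-in data: the target matches the reference best at 40 mm
rng = np.random.default_rng(0)
ref = {z: rng.random((32, 32)) for z in (30, 40, 50)}
tgt = {z: ref[z] + rng.normal(scale=(0.1 if z == 40 else 1.0), size=(32, 32)) for z in ref}
print(locate_depth(tgt, ref)[0])   # -> 40
```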

A Web-Based Robot Simulator (웹 기반 로봇 시뮬레이터)

  • Hong, Soon-Hyuk;Lee, Sang-Hyun;Jeon, Jae-Wook;Yoon, Ji-Sup
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.3
    • /
    • pp.255-262
    • /
    • 2001
  • With the advancement of web-related technologies, many works on robots using these technologies, called web-based robots, enable sharing of expensive equipment as well as control of remote robots. However, none of the existing approaches to web-based robots includes a robot simulator in the web browser that conveys appropriate information about the remote site to the local users. In this paper, a web-based robot simulator is proposed and developed to control a remote robot over the web. The proposed simulator can transfer the 3D information about the remote robot to the local users by means of 3D graphics, which had not been done previously. It also sends the camera image of the remote site, so that the users can exploit this camera image as well as the 3D information to control the remote robot (a sketch of such state transfer follows this entry).

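The simulator's central task, shipping the remote robot's state to the browser so the local 3D view and the camera image stay in step, amounts to serializing the joint values together with a reference to the latest frame. The JSON message shape and field names below are assumptions for illustration, not the paper's protocol.

```python
import json
import time

def robot_state_message(joint_angles_deg, camera_frame_id):
    """Package the remote robot's joint angles (driving the client-side 3D model)
    together with a reference to the latest camera image.  Field names are
    hypothetical."""
    return json.dumps({
        "timestamp": time.time(),
        "joints_deg": list(joint_angles_deg),   # updates the local 3D graphics view
        "camera_frame": camera_frame_id,        # lets the client fetch the image
    })

print(robot_state_message([0.0, 45.0, -30.0, 90.0, 0.0, 15.0], "frame_000123"))
```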

Construction of 3D Spatial Information of Vertical Structure by Combining UAS and Terrestrial LiDAR (UAS와 지상 LiDAR 조합에 의한 수직 구조물의 3차원 공간정보 구축)

  • Kang, Joon-Oh;Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX
    • /
    • v.49 no.2
    • /
    • pp.57-66
    • /
    • 2019
  • Recently, as part of the production of spatial information for smart cities, three-dimensional reproduction of structures for reverse engineering has been attracting attention. Terrestrial LiDAR is mainly used for the 3D reproduction of structures, and 3D reproduction research using UAS has also been actively conducted. However, both technologies produce blind spots due to the shooting angle. This study deals with vertical structures: it examines the reproducibility and effectiveness of a 3D model implemented through SfM-based image analysis of UAS imagery and of a 3D model from terrestrial LiDAR-based laser scanning, and it merges the two 3D models to complement the blind spots. For this purpose, UAS-based images were acquired for an artificial rock wall, VCPs and checkpoints were established with GNSS equipment and a total station, and a 3D model of the structure was reproduced using SfM-based image analysis. Through terrestrial LiDAR scanning, the 3D point cloud of the structure was acquired, and the reproduction accuracy and completeness of the 3D model, evaluated at the checkpoints, were compared with the UAS-based image analysis results. In particular, accuracy and realistic reproducibility were verified for the combination of the point clouds constructed from UAS and terrestrial LiDAR. The results show that UAS-based image analysis is superior in accuracy and 3D model completeness, and that accuracy improves when the two methods are combined (a minimal merging-and-checking sketch follows this entry). The combination of UAS and terrestrial LiDAR laser scanning is therefore expected to reproduce a precise three-dimensional model of a vertical structure while complementing each method's blind spots, so it can be used effectively for spatial information construction, safety diagnosis, and maintenance management.
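
Merging the two georeferenced point clouds and checking them against surveyed checkpoints can be sketched as below, assuming both clouds are already in one coordinate system via the VCPs. The nearest-neighbour RMSE check is an illustrative stand-in for the paper's accuracy assessment, and all coordinates are made up.

```python
import numpy as np

def merge_clouds(uas_xyz, lidar_xyz):
    """Stack two Nx3 point arrays that share one georeferenced frame."""
    return np.vstack([uas_xyz, lidar_xyz])

def checkpoint_rmse(cloud_xyz, checkpoints_xyz):
    """RMSE between each surveyed checkpoint and its nearest cloud point."""
    errs = []
    for cp in checkpoints_xyz:
        d = np.linalg.norm(cloud_xyz - cp, axis=1)
        errs.append(d.min())
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy example: two tiny clouds and one checkpoint
uas = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.1]])
lidar = np.array([[0.0, 1.0, 10.05]])
print(checkpoint_rmse(merge_clouds(uas, lidar), np.array([[0.0, 0.0, 10.02]])))
```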

A standardization model based on image recognition for performance evaluation of an oral scanner

  • Seo, Sang-Wan;Lee, Wan-Sun;Byun, Jae-Young;Lee, Kyu-Bok
    • The Journal of Advanced Prosthodontics
    • /
    • v.9 no.6
    • /
    • pp.409-415
    • /
    • 2017
  • PURPOSE. Accurate information is essential in dentistry, and the image information of missing teeth is used by optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was examined from cases of image recognition errors in linear discriminant analysis (LDA), and a model combining the variables with reference to ISO 12836:2015 was designed. MATERIALS AND METHODS. The basic model was fabricated by applying four factors to the tooth profile (chamfer, groove, curve, and square) and to the bottom surface. Photo-type and video-type scanners were used to analyze the 3D images after image capture. The scans were performed several times in the prescribed sequence to distinguish models that yielded a 3D shape from those that did not, and the best-performing model was identified. RESULTS. The initial basic model could not yield a 3D shape even after several scans. With each added variable factor the recognition rate of the image improved, and the difference depended on the tooth profile and the pattern of the floor surface. CONCLUSION. Based on the recognition errors of the LDA, the recognition rate decreases when the model has similar patterns (illustrated in the sketch after this entry). Therefore, to obtain accurate 3D data, the difference between classes needs to be provided when developing a standardized model.
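
The LDA step can be illustrated with scikit-learn's LinearDiscriminantAnalysis. The feature vectors standing in for scanned tooth-profile images are synthetic, with two deliberately overlapping classes, so this is only a sketch of how similar patterns depress the recognition rate, not the paper's data or pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic stand-ins for image features of four tooth-profile classes
# (chamfer, groove, curve, square); the last two centers deliberately overlap.
centers = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0], [6.3, 0.3]])
X = np.vstack([c + rng.normal(scale=0.5, size=(40, 2)) for c in centers])
y = np.repeat(np.arange(4), 40)

lda = LinearDiscriminantAnalysis().fit(X, y)
# Classes with near-identical patterns (here the last two) drag accuracy down,
# mirroring the recognition-rate drop reported for similar model patterns.
print(f"training accuracy: {lda.score(X, y):.2f}")
```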

Application of Photo-realistic Modeling and Visualization Using Digital Image Data in 3D GIS (디지털 영상자료를 이용한 3D GIS의 사실적 모델링 및 가시화)

  • Jung, Sung-Heuk;Lee, Jae-Kee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.1
    • /
    • pp.73-83
    • /
    • 2008
  • For spatial analysis and decision-making based on territorial and urban information, 3D GIS technologies that use digital image data and photo-realistic 3D image models for visualization are being developed rapidly. Currently, satellite images, aerial images, and aerial LiDAR data are mostly used to build 3D models, while textures from oblique aerial photographs or terrestrial photographs are used to create 3D image models. However, higher-quality 3D image models are still needed, because current models cannot express topography and features elaborately and realistically. Thus, this study analyzed techniques that use aerial photographs, aerial LiDAR, terrestrial photographs, and terrestrial LiDAR to create a 3D image model of artificial features and specific topography that emphasizes spatial accuracy, detailed depiction, and photo-realistic imaging. A 3D image model with spatial accuracy and photographic texture was built to be served through 3D image map service systems on the Internet. Since intended use and display scale must be considered when building 3D image models, the concept of LoD (Level of Detail) was applied to define 3D image models of buildings at five levels, and the models were built following these levels (a simple LoD-selection sketch follows this entry).
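
The LoD idea, defining building models at five levels and selecting one by display scale, can be sketched with a simple threshold table. The scale breakpoints below are hypothetical and not taken from the paper.

```python
# Hypothetical mapping from display-scale denominator to one of five LoD levels
LOD_BREAKPOINTS = [(1_000, 5), (5_000, 4), (25_000, 3), (100_000, 2)]

def lod_for_scale(scale_denominator):
    """Return the building LoD (1 = coarsest block model, 5 = finest
    photo-textured model) to load for a map displayed at 1:scale_denominator."""
    for max_scale, lod in LOD_BREAKPOINTS:
        if scale_denominator <= max_scale:
            return lod
    return 1

print(lod_for_scale(2_000))    # -> 4 at 1:2,000
print(lod_for_scale(250_000))  # -> 1 at 1:250,000
```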

On Enhancing the Image Quality of Dynamical X3D Contents in the Internet

  • Ha, Jong-Sung;Yoo, Kwan-Hee
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2007.02a
    • /
    • pp.53-57
    • /
    • 2007
  • This paper presents the practice of multitexturing on the Internet for enhancing the image quality of 3D contents whose object attributes, such as textures, are dynamically changed. We explain the empirical results of realizing the X3D nodes related to multitexturing in recent X3D viewers (a small node-construction sketch follows this entry), and discuss directions for upgrading X3D viewers so that they satisfy user requirements and take advantage of advanced graphics accelerators.

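An X3D MultiTexture node of the kind discussed can be assembled programmatically, which is one way to change texture attributes dynamically before handing the scene to a viewer. The snippet below builds the node as XML; the texture file names are placeholders, and the node structure and MFString quoting follow the X3D XML encoding as commonly used, so treat it as a sketch rather than the authors' implementation.

```python
import xml.etree.ElementTree as ET

def multitexture_appearance(base_url, overlay_url, mode="MODULATE"):
    """Build an X3D Appearance containing a MultiTexture node that blends a
    base texture with an overlay; swapping the overlay URL at run time is the
    kind of dynamic attribute change discussed above."""
    appearance = ET.Element("Appearance")
    multi = ET.SubElement(appearance, "MultiTexture", mode=f'"{mode}" "{mode}"')
    ET.SubElement(multi, "ImageTexture", url=f'"{base_url}"')
    ET.SubElement(multi, "ImageTexture", url=f'"{overlay_url}"')
    return ET.tostring(appearance, encoding="unicode")

print(multitexture_appearance("base.png", "overlay.png"))
```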