• Title/Abstract/Keyword: 3-D range image

381 results found (processing time: 0.031 s)

Development of Structured Light 3D Scanner Based on Image Processing

  • Kim, Kyu-Ha;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / Vol. 11, No. 4 / pp. 49-58 / 2019
  • 3D scanners are needed in a wide variety of fields, and their range of application is expanding rapidly; in particular, they are used to reduce costs at various stages of product development and production, and quality inspection in the manufacturing industry is becoming increasingly important. The structured-light optical system applied in this study is suitable for high-precision measurement of molds, press work, precision parts, and similar products, making it possible to implement an economical and effective 3D scanning system for inspection in the manufacturing industry. We developed a structured-light 3D scanner capable of high-precision measurement using a Digital Light Processing (DLP) projector and a camera. The proposed structured-light 3D image scanner realizes an economical and effective 3D scanning system for measurement and inspection in the manufacturing industry.
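
The abstract does not describe the scanner's decoding or triangulation steps, so the following is only a generic sketch of how a structured-light system of this kind can recover depth: Gray-code patterns projected by the DLP unit are decoded into a projector column per camera pixel, and depth follows from the projector-camera disparity under a rectified pinhole model. The Gray-code choice and all parameter names are assumptions, not the authors' implementation.

```python
import numpy as np

def decode_gray_code(bit_images):
    """bit_images: (N, H, W) binarized captures of N Gray-code patterns, MSB first.
    Returns the projector column index encoded at each camera pixel."""
    n, h, w = bit_images.shape
    gray = np.zeros((h, w), dtype=np.uint32)
    for i in range(n):                       # pack the captured bits, MSB first
        gray = (gray << 1) | bit_images[i].astype(np.uint32)
    binary = gray.copy()                     # Gray code -> binary column index
    shift = gray >> 1
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary

def depth_from_disparity(cam_cols, proj_cols, focal_px, baseline_mm):
    """Rectified projector-camera pair: Z = f * b / (camera column - projector column)."""
    disparity = cam_cols.astype(np.float64) - proj_cols.astype(np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(disparity != 0, focal_px * baseline_mm / disparity, np.nan)
```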

REAL-TIME 3D MODELING FOR ACCELERATED AND SAFER CONSTRUCTION USING EMERGING TECHNOLOGY

  • Jochen Teizer;Changwan Kim;Frederic Bosche;Carlos H. Caldas;Carl T. Haas
    • International Conference Proceedings / The 1st International Conference on Construction Engineering and Project Management / pp. 539-543 / 2005
  • The research presented in this paper enables real-time 3D modeling to help make construction processes ultimately faster, more predictable, and safer. Initial research efforts used an emerging sensor technology and proved its usefulness for acquiring range information to detect and efficiently represent static and moving objects. Based on the time-of-flight principle, the sensor acquires range and intensity information for each image pixel across the sensor's entire field-of-view in real time, at frequencies of up to 30 Hz. However, real-time range data processing algorithms need to be developed to rapidly turn range information into meaningful 3D computer models. This research ultimately focuses on the application of safer heavy equipment operation. The paper compares (a) a previous research effort in convex hull modeling using sparse range point clouds from a single laser beam range finder with (b) high-frame-rate Flash LADAR (Laser Detection and Ranging) scanning for complete scene modeling. The presented research will demonstrate whether Flash LADAR technology can play an important role in real-time modeling of infrastructure assets in the near future.
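
As a rough illustration of the kind of range-data processing the paper calls for (not the authors' algorithms), the sketch below back-projects a Flash LADAR range image into a 3D point cloud with a pinhole model and then summarizes an object's footprint with a 2D convex hull, echoing the convex-hull modeling used for comparison. The camera intrinsics are assumed.

```python
import numpy as np
from scipy.spatial import ConvexHull

def range_image_to_points(range_img, fx, fy, cx, cy):
    """range_img: (H, W) distance along each pixel ray; returns (H*W, 3) XYZ points."""
    h, w = range_img.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Unit ray direction per pixel under a pinhole model, scaled by the measured range.
    dirs = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones((h, w))], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return (dirs * range_img[..., None]).reshape(-1, 3)

def footprint_hull(points):
    """Ordered corner points of the 2D convex hull of the X-Z (ground-plane) footprint."""
    xz = points[:, [0, 2]]
    hull = ConvexHull(xz)
    return xz[hull.vertices]
```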

Design and Implementation of an Approximate Surface Lens Array System based on OpenCL

  • 김도형;송민호;정지성;권기철;김남;김경아;류관희
    • The Journal of the Korea Contents Association / Vol. 14, No. 10 / pp. 1-9 / 2014
  • Integral images for autostereoscopic (glasses-free) 3D displays are generally generated from a flat lens array, but its narrow viewing angle cannot provide the observer with a wide viewing zone. Curved lens arrays have been proposed to overcome this drawback; because of technical and cost limitations, an approximate surface lens array, in which several flat lenses are arranged in a curved configuration, is used instead of an ideal curved lens array. In this paper, an approximate surface lens array was constructed by arranging 20 × 8 rectangular flat lenses on a sphere with a radius of 100 mm, which widened the viewing angle by about a factor of two. In particular, whereas previous studies generated the integral images by hand, this paper proposes an OpenCL GPU parallel processing algorithm that generates the integral images in real time. As a result, integral images could be generated from a 15 × 15 approximate surface lens array at 12-20 frames/sec for various 3D volume data sets.
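
The paper's OpenCL kernels are not reproduced here; the NumPy sketch below only illustrates the per-lens work that would map naturally to one GPU work-item per elemental image: placing flat lens centers on a 100 mm-radius sphere and pinhole-projecting a colored point cloud through each center. The angular pitch, lens gap, and pixel pitch values are illustrative assumptions.

```python
import numpy as np

def lens_centers_on_sphere(rows, cols, radius_mm, angular_pitch_deg):
    """Place rows x cols flat-lens centres on a sphere (the approximate curved surface)."""
    half_r, half_c = (rows - 1) / 2.0, (cols - 1) / 2.0
    centers = []
    for i in range(rows):
        for j in range(cols):
            theta = np.deg2rad((j - half_c) * angular_pitch_deg)   # horizontal angle
            phi = np.deg2rad((i - half_r) * angular_pitch_deg)     # vertical angle
            centers.append([radius_mm * np.sin(theta) * np.cos(phi),
                            radius_mm * np.sin(phi),
                            radius_mm * np.cos(theta) * np.cos(phi)])
    return np.asarray(centers)

def elemental_image(points, colors, center, gap_mm, px, pitch_mm):
    """Pinhole-project coloured 3D points through one lens centre onto its elemental image."""
    img = np.zeros((px, px, 3), dtype=np.float32)
    rel = points - center
    front = rel[:, 2] > 1e-6                                        # keep points in front of the lens
    u = rel[front, 0] / rel[front, 2] * gap_mm
    v = rel[front, 1] / rel[front, 2] * gap_mm
    inside = (np.abs(u) <= pitch_mm / 2) & (np.abs(v) <= pitch_mm / 2)
    iu = ((u[inside] / pitch_mm + 0.5) * (px - 1)).astype(int)
    iv = ((v[inside] / pitch_mm + 0.5) * (px - 1)).astype(int)
    img[iv, iu] = colors[front][inside]
    return img

# One call per lens is the unit of work the paper maps to an OpenCL work-item.
centers = lens_centers_on_sphere(8, 20, 100.0, 2.0)   # 20 x 8 lenses on R = 100 mm (pitch assumed)
```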

Low Cost Omnidirectional 2D Distance Sensor for Indoor Floor Mapping Applications

  • Kim, Joon Ha;Lee, Jun Ho
    • Current Optics and Photonics / Vol. 5, No. 3 / pp. 298-305 / 2021
  • Modern distance sensing methods employ various measurement principles, including triangulation, time-of-flight, confocal, interferometric, and frequency-comb methods. Among them, the triangulation method, with a laser light source and an image sensor, is widely used in low-cost applications. We developed an omnidirectional two-dimensional (2D) distance sensor based on the triangulation principle for indoor floor mapping applications. The sensor has a range of 150-1500 mm with a relative resolution better than 4% over the range and 1% at a distance of 1 m. It rotationally scans a compact one-dimensional (1D) distance sensor composed of a near-infrared (NIR) laser diode, a folding mirror, an imaging lens, and an image detector. We designed the sensor layout and configuration to satisfy the required measurement range and resolution, selecting easily available components in a special effort to reduce cost. We built a prototype and tested it with seven representative indoor wall specimens (white wallpaper, gray wallpaper, black wallpaper, furniture wood, black leather, brown leather, and white plastic) under typical indoor illumination (200 lux) on a floor lit by ceiling-mounted fluorescent lamps. We confirmed that the proposed sensor provided reliable distance readings for all the specimens over the required measurement range (150-1500 mm) with a measurement resolution of 4% overall and 1% at 1 m, regardless of illumination conditions.
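
A back-of-the-envelope model of the laser triangulation principle the sensor relies on (a sketch, not the paper's calibration): with baseline b between the laser and the imaging lens and focal length f, a target at distance Z images the spot at offset x = f·b/Z, so a one-pixel spot-localization error causes a range error of roughly Z²/(f·b) per pixel. The f = 8 mm, b = 50 mm, and 3 µm pixel values below are assumptions, not values from the paper.

```python
import numpy as np

def distance_from_spot(spot_mm, focal_mm, baseline_mm):
    """Invert x = f*b/Z: recover target distance from the imaged spot offset."""
    return focal_mm * baseline_mm / spot_mm

def range_resolution_mm(distance_mm, focal_mm, baseline_mm, pixel_mm):
    """Distance error caused by a one-pixel error in locating the laser spot."""
    return distance_mm ** 2 / (focal_mm * baseline_mm) * pixel_mm

# Assumed example: f = 8 mm, b = 50 mm, 3 um pixels -> resolution vs. distance.
for z in (150.0, 1000.0, 1500.0):
    err = range_resolution_mm(z, 8.0, 50.0, 0.003)
    print(f"Z = {z:6.0f} mm  ->  one-pixel range error ~ {err:5.2f} mm ({100 * err / z:.2f} %)")
```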

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features

  • 길세기;이종실;유제군;이응혁;홍승홍;신동범
    • The Transactions of the Korean Institute of Electrical Engineers D (Systems and Control) / Vol. 55, No. 4 / pp. 173-180 / 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building using features recognized from the environment. To achieve this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location through 3D reconstruction of the matched points. A stereo camera requires high-precision extrinsic calibration of the two cameras and precise pixel matching between the two camera images; because we use two separate cameras together with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction requires no additional sensor, and its results can be used simultaneously for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges below 2 m and ±15 cm at ranges between 2 m and 4 m.
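
For the 3D reconstruction step, a minimal parallel-camera triangulation sketch follows (the SIFT detection and matching stages are omitted): for rectified cameras separated by a baseline B along x, matched points (xl, y) and (xr, y) yield Z = f·B/(xl − xr), from which the feature's relative position is recovered. The 20 cm baseline matches the paper; the remaining parameters are illustrative.

```python
import numpy as np

def triangulate_parallel(pts_left, pts_right, focal_px, baseline_m, cx, cy):
    """pts_left / pts_right: (N, 2) matched pixel coordinates from the left and
    right cameras; returns (N, 3) relative feature positions [m] in the left frame."""
    disparity = pts_left[:, 0] - pts_right[:, 0]      # horizontal disparity in pixels
    z = focal_px * baseline_m / disparity             # depth along the optical axis
    x = (pts_left[:, 0] - cx) * z / focal_px
    y = (pts_left[:, 1] - cy) * z / focal_px
    return np.column_stack([x, y, z])

# Example with assumed intrinsics and the paper's 0.2 m baseline.
left = np.array([[340.0, 260.0]]); right = np.array([[300.0, 260.0]])
print(triangulate_parallel(left, right, focal_px=700.0, baseline_m=0.2, cx=320.0, cy=240.0))
```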

Impulse Noise Detection Using Self-Organizing Neural Network and Its Application to Selective Median Filtering

  • 이종호;동성수;위재우;송승민
    • The Transactions of the Korean Institute of Electrical Engineers D (Systems and Control) / Vol. 54, No. 3 / pp. 166-173 / 2005
  • Preserving image features, edges, and details during impulse noise filtering is an important problem: to avoid blurring the image, only corrupted pixels should be filtered. In this paper, we propose an effective impulse noise detection method using a Self-Organizing Neural Network (SONN), which applies a median filter selectively to remove random-valued impulse noise while preserving image features, edges, and details. Using a 3×3 window, we obtain useful local features with which impulse noise patterns are classified. The SONN is trained with sample image patterns, and each pixel is classified from its local information in the image. Experiments on various images with noise densities in the range of 5-15% show that our method performs better than other methods that use multiple threshold values for impulse noise detection.
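
A minimal sketch of the detect-then-filter idea, with the trained SONN replaced by a placeholder detector (flagging pixels that differ strongly from their 3×3 median); only the pixels the detector flags are replaced by the median, which is what keeps edges and details intact. The threshold value is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median(img, noise_mask):
    """Replace only the pixels flagged as impulse noise with their 3x3 median."""
    med = median_filter(img, size=3)
    return np.where(noise_mask, med, img)

def naive_impulse_mask(img, threshold=40.0):
    """Placeholder detector: flag pixels far from their 3x3 median.
    (The paper trains a Self-Organizing Neural Network for this step.)"""
    med = median_filter(img.astype(np.float32), size=3)
    return np.abs(img.astype(np.float32) - med) > threshold

# Usage: filtered = selective_median(noisy, naive_impulse_mask(noisy))
```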

Development of large-scale 3D printer with position compensation system

  • 이우송;박성진;박인수
    • Journal of the Korean Society of Industry Convergence / Vol. 22, No. 3 / pp. 293-301 / 2019
  • Based on accurate image-processing technology, a system was developed that uses parallel light and an image sensor to measure drive errors (position error, straightness error, and flatness error) at micrometre (μm) resolution over a distance. This measurement system was then applied to a large-scale 3D rapid-prototyping machine with real-time compensation, dramatically reducing the range of measurement error and enabling intelligent 3D production of high-quality products.
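
The abstract gives no implementation detail, so the following is only a toy sketch of a measure-then-compensate loop consistent with the description: the centroid of the parallel-light spot on the image sensor is tracked, its shift (scaled by the pixel size) is read as the axis deviation in µm, and that deviation is subtracted from the commanded position. All function names and the centroid method are assumptions.

```python
import numpy as np

def spot_centroid(image):
    """Intensity-weighted centroid of the beam spot, in pixel coordinates (u, v)."""
    total = image.sum()
    v, u = np.indices(image.shape)
    return np.array([(u * image).sum(), (v * image).sum()]) / total

def lateral_error_um(image, ref_centroid_px, pixel_um):
    """Lateral deviation of the axis in micrometres, relative to a reference centroid."""
    return (spot_centroid(image) - ref_centroid_px) * pixel_um

def compensated_target(commanded_xy_um, image, ref_centroid_px, pixel_um):
    """Subtract the measured deviation from the commanded position."""
    return np.asarray(commanded_xy_um) - lateral_error_um(image, ref_centroid_px, pixel_um)
```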

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / Vol. 2, No. 1 / pp. 13-16 / 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way to measure the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using Marching Cubes and the SDF. Results: The ground truth was determined by measuring each of 10 participants three times, and the corresponding part of the 3D model created in this experiment was measured as well. Actual head circumference and the reconstructed model were both measured according to the layer 3 standard, and measurement errors were calculated. We obtained accurate results with an average error of 0.9 cm (standard deviation 0.9 cm, minimum 0.2 cm, maximum 1.4 cm). Conclusion: The suggested method completes the 3D model while minimizing errors and is very effective for quantitative and objective evaluation. However, the measurement range somewhat lacks the 3D information needed to manufacture protective helmets, because measurements were made according to the layer 3 standard. The measurement range will therefore need to be widened, by scanning the entire head circumference, to facilitate production of more precise and effective protective helmets in the future.
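
The paper does not publish its SDF pipeline; the sketch below shows a generic truncated-SDF (TSDF) fusion step of the sort commonly paired with Marching Cubes: each depth frame updates a voxel grid with a truncated signed distance, and the head surface is extracted at the zero level set. The grid layout, truncation distance, and camera intrinsics are assumptions.

```python
import numpy as np
from skimage import measure

def tsdf_update(tsdf, weights, voxel_pts_cam, depth_map, fx, fy, cx, cy, trunc=0.02):
    """Integrate one depth frame into a flat TSDF grid.
    voxel_pts_cam: (N, 3) voxel centres already transformed into camera coordinates."""
    z = voxel_pts_cam[:, 2]
    ok = z > 1e-6
    u = np.round(voxel_pts_cam[ok, 0] / z[ok] * fx + cx).astype(int)
    v = np.round(voxel_pts_cam[ok, 1] / z[ok] * fy + cy).astype(int)
    h, w = depth_map.shape
    in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(ok)[in_img]
    sdf = depth_map[v[in_img], u[in_img]] - z[idx]      # signed distance along the viewing ray
    keep = sdf > -trunc                                  # ignore voxels far behind the surface
    idx = idx[keep]
    d = np.clip(sdf[keep] / trunc, -1.0, 1.0)            # truncate and normalise
    tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1.0)
    weights[idx] += 1.0

# After fusing all frames, extract the surface at the zero level set
# (grid reshaped to its 3D dimensions GX x GY x GZ, which are assumed):
# verts, faces, _, _ = measure.marching_cubes(tsdf.reshape(GX, GY, GZ), level=0.0)
```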

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 7 / pp. 2434-2448 / 2014
  • This research proposes domain-division depth-map quantization for multiview intermediate image generation using Depth Image-Based Rendering (DIBR). The technique quantizes depth per pixel according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate image generation. The experiment evaluated three quantization methods on computer-generated 3D scenes of varying complexity and background while varying the depth resolution. The results showed that the proposed domain-division depth quantization method outperformed the linear method for 7-bit or lower depth maps, especially in scenes containing a large object.
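
A compact sketch of the contrast between linear quantization and the domain-division idea: the region-of-interest depth domain receives a larger share of the available quantization levels than the rest of the depth range. The 70/30 split below is an arbitrary illustration, not the paper's reported bit allocation.

```python
import numpy as np

def _quant(depth, lo, hi, levels):
    """Uniformly quantize depth in [lo, hi] to `levels` steps, then dequantize."""
    q = np.round((depth - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def quantize_linear(depth, d_min, d_max, bits):
    """Single uniform quantizer over the whole depth range."""
    return _quant(depth, d_min, d_max, 2 ** bits - 1)

def quantize_domain_division(depth, d_min, roi_max, d_max, bits, roi_share=0.7):
    """Spend roi_share of the levels on [d_min, roi_max] and the rest beyond it."""
    levels = 2 ** bits - 1
    roi_levels = max(1, int(round(levels * roi_share)))
    bg_levels = max(1, levels - roi_levels)
    out = np.empty_like(depth, dtype=np.float64)
    near = depth <= roi_max
    out[near] = _quant(depth[near], d_min, roi_max, roi_levels)
    out[~near] = _quant(depth[~near], roi_max, d_max, bg_levels)
    return out
```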

Design and Fabrication of K-band multi-channel receiver for short-range RADAR

  • 김상일;이승준;이정수;이복형
    • Journal of the Korean Institute of Communications and Information Sciences / Vol. 37, No. 7A / pp. 545-551 / 2012
  • In this paper, a multi-channel front-end receiver that receives K-band signals, performs low-noise amplification, and down-converts them to L-band was designed and fabricated. The fabricated multi-channel front-end receiver includes a GaAs-HEMT based low-noise amplifier with a noise figure of 2.3 dB or less, an image rejection (IR) filter to suppress image components, and an image rejection (IR) mixer to suppress image components and improve intermodulation distortion (IMD) characteristics. Test results of the fabricated multi-channel front-end receiver showed a noise figure of 3.8 dB or less, a conversion gain of 27 dB or more, and an input-referred P1dB (1 dB gain compression point) of -9.5 dBm or higher.
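
As a side note on why the GaAs-HEMT LNA placed first keeps the overall noise figure close to its own 2.3 dB, the Friis cascade formula can be evaluated as below. Only the 2.3 dB LNA figure and the ≤3.8 dB overall figure come from the abstract; the stage gains and the filter/mixer losses are invented for illustration.

```python
import math

def cascade_noise_figure_db(stages):
    """stages: list of (gain_dB, nf_dB) in signal-chain order; returns the Friis cascade NF in dB."""
    f_total, g_product = 1.0, 1.0
    for gain_db, nf_db in stages:
        f = 10 ** (nf_db / 10)
        f_total += (f - 1.0) / g_product      # each stage's contribution is divided by the gain before it
        g_product *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

# Assumed line-up: LNA (20 dB gain, 2.3 dB NF), IR filter (2 dB loss),
# IR mixer (7 dB loss, 8 dB NF), IF amplifier (25 dB gain, 3 dB NF).
print(cascade_noise_figure_db([(20, 2.3), (-2, 2.0), (-7, 8.0), (25, 3.0)]))
```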