• Title/Summary/Keyword: 3D capturing


Optimised ML-based System Model for Adult-Child Actions Recognition

  • Alhammami, Muhammad;Hammami, Samir Marwan;Ooi, Chee-Pun;Tan, Wooi-Haw
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.929-944
    • /
    • 2019
  • Many critical applications require accurate real-time human action recognition. However, there are many hurdles associated with capturing and pre-processing image data, calculating features, and classification, because these steps consume significant storage and computation resources. To circumvent these hurdles, this paper presents a machine learning (ML) based recognition system model that uses a reduced feature data structure obtained by projecting the real 3D skeleton modality onto a virtual 2D space. The MMU VAAC dataset is used to test the proposed ML model. The results show a high accuracy rate of 97.88%, only slightly lower than the accuracy obtained with the original 3D modality-based features, while achieving a 75% data reduction compared with the RGB modality. These results motivate implementing the proposed recognition model on an embedded system platform in the future.
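
As a rough illustration of the dimensionality reduction described in the abstract above, the sketch below projects a set of 3D skeleton joints onto a virtual 2D plane; the joint count, virtual focal length, and camera offset are placeholder assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): projecting 3D skeleton joints onto a
# virtual 2D plane to shrink the feature vector before classification.
import numpy as np

def project_skeleton_to_2d(joints_3d, focal=1.0, cam_z=3.0):
    """Pinhole-style projection of (N, 3) joints onto a virtual 2D plane."""
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    depth = z + cam_z                  # shift so the virtual camera sits in front
    u = focal * x / depth              # perspective divide
    v = focal * y / depth
    return np.stack([u, v], axis=1)    # (N, 2) reduced feature set

# Example: 20 skeleton joints -> 40 features instead of 60
joints = np.random.rand(20, 3)
features_2d = project_skeleton_to_2d(joints).ravel()
print(features_2d.shape)               # (40,)
```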

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, the virtual view generation method using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although the depth data itself is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed for both color cameras, we obtain the two views' depth maps from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
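
The joint bilateral filtering step mentioned above can be sketched roughly as follows: the warped depth is smoothed with weights taken from the co-located color (grayscale) image so that depth edges align with color edges. The window radius and sigma values are illustrative assumptions, and this brute-force version is far slower than a real-time implementation.

```python
# Hedged sketch of a joint (cross) bilateral filter on warped depth,
# guided by the aligned grayscale color image.
import numpy as np

def joint_bilateral_filter(depth, color_gray, radius=3, sigma_s=2.0, sigma_r=10.0):
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))      # spatial weights
    pad_d = np.pad(depth, radius, mode='edge').astype(np.float64)
    pad_c = np.pad(color_gray, radius, mode='edge').astype(np.float64)
    for y in range(h):
        for x in range(w):
            win_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_c = pad_c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range weights come from the COLOR image, not the depth itself
            range_w = np.exp(-((win_c - float(color_gray[y, x]))**2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * win_d) / np.sum(weights)
    return out
```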

Producing a Virtual Object with Realistic Motion for a Mixed Reality Space

  • Daisuke Hirohashi;Tan, Joo-Kooi;Kim, Hyoung-Seop;Seiji Ishikawa
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.153.2-153
    • /
    • 2001
  • A technique is described for producing a virtual object with realistic motion. A 3-D human motion model is obtained by applying a developed motion capturing technique to a real human in motion. The factorization method is a technique for recovering the 3-D shape of a rigid object from a single video image stream without using camera parameters. The technique is extended here to recover 3-D human motion. The proposed system is composed of three fixed cameras that take video images of the human motion. The three obtained image sequences are analyzed to yield measurement matrices at individual sampling times, which are merged into a single measurement matrix to which the factorization is applied, and the 3-D human motion is recovered ...
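
Since the abstract is truncated, here is a minimal sketch of the core rank-3 factorization it builds on (the standard rigid Tomasi-Kanade step, not the authors' extension to human motion): the registered measurement matrix of tracked 2D points is split into motion and shape factors with an SVD.

```python
# Sketch of rank-3 factorization of a measurement matrix, under the assumption
# of an affine camera; metric upgrade and the non-rigid extension are omitted.
import numpy as np

def factorize(W):
    """W: (2F, N) measurement matrix of N points tracked over F frames."""
    W_reg = W - W.mean(axis=1, keepdims=True)          # register to the centroid
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                       # (2F, 3) camera motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]             # (3, N) shape, up to an affine transform
    return M, S
```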


Jitter Correction of the Face Motion Capture Data for 3D Animation

  • Lee, Junsang;Han, Soowhan;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.9
    • /
    • pp.39-45
    • /
    • 2015
  • Along with the advance of digital technology, various methods have been adopted for capturing 3D animation data. In particular, in the 3D animation production market, motion capture systems are widely used to make films, games, and animation contents. The technique quickly tracks the movements of an actor and translates the data for use as the animated character's motion. Thus animated characters can mimic natural motion and gestures, and even facial expressions. However, conventional motion capture systems require demanding conditions regarding space, lighting, the number of cameras, etc. Furthermore, the data acquired from the motion capture system is frequently corrupted by noise, drift, and the surrounding environment. In this paper, we introduce post-production techniques for stabilizing the jitter of motion capture data from a low-cost handy system based on Kinect.
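
The abstract does not specify the exact stabilization filter, so the sketch below only illustrates the general idea of post-production jitter removal: a simple moving-average smoothing over time of Kinect facial marker trajectories stored as a (frames, markers, 3) array. The window length is an assumption and should be odd.

```python
# Generic temporal smoothing of motion capture trajectories; the paper's
# actual jitter-correction method may differ.
import numpy as np

def smooth_trajectories(markers, window=5):
    """Moving-average smoothing along the time axis of a (T, M, 3) array."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(markers, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    out = np.empty_like(markers, dtype=np.float64)
    for m in range(markers.shape[1]):          # each marker
        for c in range(3):                     # each coordinate
            out[:, m, c] = np.convolve(padded[:, m, c], kernel, mode='valid')
    return out
```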

An implementation of 2D/3D Complex Optical System and its Algorithm for High Speed, Precision Solder Paste Vision Inspection (솔더 페이스트의 고속, 고정밀 검사를 위한 이차원/삼차원 복합 광학계 및 알고리즘 구현)

  • 조상현;최흥문
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.139-146
    • /
    • 2004
  • A 2D/3D complex optical system and its vision inspection algorithm are proposed and implemented as a single probe system for high-speed, precise vision inspection of solder pastes. A one-pass run-length labeling algorithm is proposed, instead of the conventional two-pass labeling algorithm, for fast extraction of the 2D shape of the solder paste image from a line-scan camera as well as a conventional area-scan camera, and optical probe path generation is also proposed for efficient 2D/3D inspection. A Moire interferometry-based phase shift algorithm and its optical system implementation are introduced, instead of the conventional laser slit-beam method, for high-precision 3D vision inspection. All of the time-critical algorithms are MMX SIMD parallel-coded for further speedup. The proposed system is implemented for simultaneous 2D/3D inspection of a 10 mm×10 mm FOV with resolutions of 10 µm for both the x and y axes and 1 µm for the z axis. Experiments conducted on several PCBs show that the 2D and 3D inspection of an FOV, excluding image capturing, takes about 0.011 s and 0.01 s, respectively, with ±1 µm height accuracy.
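
For the Moire-based phase shift part, a minimal sketch of the standard 4-step computation is shown below (four fringe images shifted by 90°); the phase-to-height scale factor is an assumption that would come from optical calibration in practice, and phase unwrapping is omitted.

```python
# Hedged sketch of a standard 4-step phase-shift computation; `scale` is a
# placeholder for the calibrated phase-to-height factor.
import numpy as np

def phase_shift_height(i0, i90, i180, i270, scale=1.0):
    """Four fringe images shifted by 0/90/180/270 degrees -> wrapped phase -> height."""
    phase = np.arctan2(i270 - i90, i0 - i180)   # wrapped phase in [-pi, pi]
    return scale * phase                         # height map, up to phase unwrapping
```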

2-D/3-D Combined Algorithm for Automatic Solder Paste Inspection (솔더 페이스트 자동검사를 위한 2-D/3-D 복합 알고리즘)

  • 조상현;이상윤;임쌍근;최흥문
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.173-176
    • /
    • 2002
  • In this paper, we present combined 2-D and 3-D algorithms for automatic solder paste inspection. For automatic inspection, an optical system for the combined inspection and a driving unit are built. A one-pass run-length algorithm, which is fast and memory-efficient, is applied to the input image for extracting solder paste patterns. The path of probe movement is then calculated for automatic inspection. For fast 3-D inspection, a phase shift algorithm based on Moire interferometry is also used. In addition, the algorithms used in this paper are coded with MMX™ instructions. A probe system is manufactured to simultaneously inspect 2-D and 3-D over a 10 mm×10 mm field of view, with resolutions of 10 µm for both the x and y axes and 17 µm for the z axis, and experiments on several PCBs are conducted. The processing times of the 2-D and 3-D inspection, excluding image capturing, are 0.039 s and 0.047 s, respectively. A credible result with ±1 µm uncertainty is also achieved.
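
A rough sketch of a one-pass run-length labeling pass of the kind both abstracts describe is given below: runs of foreground pixels are extracted row by row and merged with the previous row's runs via union-find, so solder-paste blobs are labeled in a single image scan. The memory layout and MMX-level optimizations of the papers are not reproduced.

```python
# Simplified one-pass run-length connected-component labeling (4-connectivity).
import numpy as np

def run_length_label(binary):
    parent = []                                   # union-find forest over run ids

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    h, w = binary.shape
    run_id = np.full((h, w), -1, dtype=np.int32)  # run id per pixel, -1 = background
    prev = []                                     # (x_start, x_end, id) runs of the previous row
    for y in range(h):
        cur = []
        x = 0
        while x < w:
            if binary[y, x]:
                x0 = x
                while x < w and binary[y, x]:
                    x += 1
                rid = len(parent)
                parent.append(rid)
                run_id[y, x0:x] = rid
                for (px0, px1, pid) in prev:      # merge with touching runs above
                    if x0 <= px1 and px0 <= x - 1:
                        union(pid, rid)
                cur.append((x0, x - 1, rid))
            else:
                x += 1
        prev = cur
    # compress union-find roots into dense labels
    labels = np.zeros((h, w), dtype=np.int32)
    root_to_label = {}
    for y in range(h):
        for x in range(w):
            if run_id[y, x] >= 0:
                root = find(run_id[y, x])
                labels[y, x] = root_to_label.setdefault(root, len(root_to_label) + 1)
    return labels
```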


Distance and Entropy Based Image Viewpoint Selection for Accurate 3D Reconstruction with NeRF (NeRF의 정확한 3차원 복원을 위한 거리-엔트로피 기반 영상 시점 선택 기술)

  • Jinwon Choi;Chanho Seo;Junhyeok Choi;Sunglok Choi
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.1
    • /
    • pp.98-105
    • /
    • 2024
  • This paper proposes a new approach that applies a distance-based regularization to the entropy used for NBV (Next-Best-View) selection with NeRF (Neural Radiance Fields). 3D reconstruction requires images from various viewpoints, and selecting where to capture these images is a highly complex problem. In a recent work, image acquisition was driven by NeRF's ray-based uncertainty. While this is effective for evaluating candidate viewpoints at a fixed distance from the camera to the object, it is limited when dealing with candidate viewpoints at various distances, because it tends to favor viewpoints at closer distances. Acquiring images from nearby viewpoints is beneficial for capturing surface details. However, with a limited number of images, such a selection produces less overlap and observes each region less frequently, so the reconstructed result is sensitive to noise and contains undesired artifacts. We propose a method that incorporates distance-based regularization into the entropy, allowing us to acquire images at distances conducive to capturing surface details without undesired noise and artifacts. Our experiments with synthetic images demonstrate that NeRF models using the proposed distance- and entropy-based criterion achieve around 50 percent lower reconstruction error than the recent work.
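
As a loose sketch of the selection criterion described above (not the paper's exact formula), candidate viewpoints can be ranked by their mean ray entropy minus a distance-based penalty so that close-up views are no longer always preferred; the penalty form, reference distance, and weight `lam` are assumptions.

```python
# Hypothetical distance-regularized NBV score; higher is better.
import numpy as np

def nbv_score(ray_entropies, cam_distance, ref_distance, lam=0.5):
    """Mean NeRF ray entropy, penalized when the view is closer than a reference distance."""
    entropy_term = float(np.mean(ray_entropies))
    distance_penalty = lam * max(0.0, ref_distance - cam_distance) / ref_distance
    return entropy_term - distance_penalty

# Rank hypothetical candidate viewpoints at different distances from the object
candidates = [{"entropy": np.random.rand(1024), "dist": d} for d in (0.5, 1.0, 1.5, 2.0)]
best = max(candidates, key=lambda c: nbv_score(c["entropy"], c["dist"], ref_distance=1.5))
```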

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
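
Two of the steps above can be sketched roughly: converting the warped ToF depth into initial disparities (disparity = f·B/Z) and deriving a depth-discontinuity map by thresholding depth gradients. The focal length, baseline, and threshold below are illustrative assumptions, not the paper's calibration values.

```python
# Hedged sketches of depth-to-disparity conversion and a simple
# depth-discontinuity map for guiding stereo matching.
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """disparity = f * B / Z, guarding against zero (hole) depth values."""
    disparity = np.zeros_like(depth_m, dtype=np.float64)
    valid = depth_m > 0
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity

def discontinuity_map(depth_m, threshold=0.05):
    """Mark pixels where the depth jumps by more than `threshold` meters."""
    gy, gx = np.gradient(depth_m.astype(np.float64))
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)
```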

Opportunity Capturing Strategy of Venture Company in the Context of Dominant Design Competition: focused on compare with hardware and software industry (지배적 디자인 경쟁 환경에서 벤처기업의 업종별 기회포착 전략에 관한 연구: 하드웨어와 소프트웨어 산업 비교를 중심으로)

  • Moon, Ji-Yong;Ko, Young-Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.10 no.2
    • /
    • pp.27-42
    • /
    • 2015
  • The aim of this research is to investigate how different types of venture companies capture opportunities in an industry undergoing dominant design competition, and to figure out why they can be successful. Existing studies on venture companies focus on ways to enhance a firm's competencies by acquiring and combining its resources. However, for startups that lack resources and capabilities, it is important to capture opportunities for survival by understanding the changing environment. This study focuses on opportunity capture and strategic response to a changing environment, and selects and observes startup companies that were able to capture the opportunity and enter the market in an industry undergoing dominant design competition. To identify differences across types of business, we select one case from hardware startups and another from software startups. According to the results of this study, under dominant design competition the hardware startup focuses on market extension by lowering its prices, while the software startup strives to induce more users to participate by making its enabling technology universal, so as to extend and standardize its technology. This difference in environment leads to different approaches for successfully capturing opportunities: hardware firms need to recognize opportunities with profit potential from relationships with a number of cooperative firms, while software firms need to identify opportunities to extend enabling technology that can be used by many users.


3D Point Cloud Enhancement based on Generative Adversarial Network (생성적 적대 신경망 기반 3차원 포인트 클라우드 향상 기법)

  • Moon, HyungDo;Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1452-1455
    • /
    • 2021
  • Recently, point clouds generated by capturing real spaces in 3D have been actively applied and serviced for performances, exhibitions, education, and training. These point cloud data require post-correction to be usable in virtual environments, due to errors caused by the capture environment, its sensors, and cameras. In this paper, we propose an enhancement technique for 3D point cloud data that applies a generative adversarial network (GAN). To this end, we take an approach that regenerates the point cloud by feeding it as input to the GAN. With the method presented in this paper, point clouds with heavy noise are reconstructed into the same shape as the real object and environment, enabling precise interaction with the reconstructed content.
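
A toy sketch (PyTorch) of the GAN setup described above is given below: a generator refines a noisy point cloud by predicting a residual correction, and a discriminator scores whether a cloud looks clean. The network sizes, point count, losses, and training data are placeholders, not the authors' architecture.

```python
# Placeholder GAN for point cloud enhancement; every hyperparameter here is an assumption.
import torch
import torch.nn as nn

N_POINTS = 1024

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_POINTS * 3, 512), nn.ReLU(),
                                 nn.Linear(512, N_POINTS * 3))
    def forward(self, noisy):                           # (B, N, 3) noisy cloud
        residual = self.net(noisy.flatten(1))
        return noisy + residual.view(-1, N_POINTS, 3)   # predict a residual correction

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_POINTS * 3, 512), nn.ReLU(),
                                 nn.Linear(512, 1))
    def forward(self, cloud):
        return self.net(cloud.flatten(1))               # real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

clean = torch.rand(8, N_POINTS, 3)                      # placeholder "clean" scans
noisy = clean + 0.02 * torch.randn_like(clean)          # simulated capture noise

for step in range(100):
    # discriminator: clean clouds are real, refined clouds are fake
    fake = G(noisy).detach()
    loss_d = bce(D(clean), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: try to fool the discriminator
    loss_g = bce(D(G(noisy)), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```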