• Title/Summary/Keyword: 3D camera

38 search results

A On-site Monitoring Device of Work-related Musculoskeletal Disorder Risk Based on 3D-Camera (3D 카메라 기반 직업성 근골격계 부담 작업 모니터링 장치)

  • Loh, Byoung Gook
    • Journal of the Korean Society of Safety
    • /
    • v.30 no.6
    • /
    • pp.110-116
    • /
    • 2015
  • A 3D camera-based tool for on-site assessment of work-related musculoskeletal disorder (WMD) risk has been developed. The device consists of a Kinect, a 3D camera manufactured by Microsoft, a servo-motor, and a mobile robot. To compensate for the inherently narrow field of view (FOV) of the Kinect, the camera is rotated by a servo-motor attached underneath, driven by a PID servo-control algorithm, so that it tracks the movement of a subject and produces skeleton-based motion data. With servo control, full 360-degree tracking of a test subject is possible with a single Kinect. Experimental tests showed that the proposed device can be successfully employed as an on-site WMD risk assessment tool.
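The tracking loop described above, rotating the Kinect with a PID-controlled servo so the subject stays in the FOV, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the gains, time step, and the choice of treating the PID output as an angular-velocity command are all illustrative.

```python
class PID:
    """Textbook discrete PID controller (illustrative gains, not the paper's)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def track(subject_angle_deg, steps=300, dt=0.033):
    """Slew a pan servo until the subject sits at the center of the FOV.

    The subject's angular offset plays the role of the skeleton-derived
    error signal; the PID output is used as an angular velocity command.
    """
    servo_angle = 0.0
    pid = PID(kp=2.0, ki=0.02, kd=0.1, dt=dt)
    for _ in range(steps):
        error = subject_angle_deg - servo_angle  # offset from image center
        servo_angle += pid.update(error) * dt    # servo slews toward subject
    return servo_angle
```

With these gains the loop is overdamped and settles on the subject within a few seconds of simulated time, which is the behavior a tracking mount needs to keep a walking subject framed.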

An Analysis of Radiative Observation Environment for Korea Meteorological Administration (KMA) Solar Radiation Stations based on 3-Dimensional Camera and Digital Elevation Model (DEM) (3차원 카메라와 수치표고모델 자료에 따른 기상청 일사관측소의 복사관측환경 분석)

  • Jee, Joon-Bum;Zo, Il-Sung;Lee, Kyu-Tae;Jo, Ji-Young
    • Atmosphere
    • /
    • v.29 no.5
    • /
    • pp.537-550
    • /
    • 2019
  • To analyze the observation environment of solar radiation stations operated by the Korea Meteorological Administration (KMA), we analyzed the skyline, Sky View Factor (SVF), and solar radiation affected by the surrounding topography and artificial structures using a Digital Elevation Model (DEM), a 3D camera, and a solar radiation model. Shielding of solar energy within 25 km of each station was analyzed using 10 m resolution DEM data, and the skyline elevation and SVF of the surrounding environment were derived from images captured by the 3D camera. The solar radiation model was used to assess the contribution of the environment to solar radiation. Because the skyline elevation retrieved from the DEM differs from the actual environment, it was compared with the results obtained from the 3D camera. The skyline and SVF calculations showed that some stations are shielded by the surrounding environment at sunrise and sunset. For monthly accumulated solar radiation, the topographic effect derived from the 3D camera is more than 20 times larger than that derived from the DEM throughout the year. Because solar radiation is relatively low in winter, the shielding effect is largest in that season. For annual accumulated solar radiation, the difference in global solar radiation calculated using the 3D camera was 176.70 MJ on average (about 7 days of solar radiation, assuming a daily accumulation of 26 MJ), with a maximum of 439.90 MJ (about 17.5 days).
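The SVF computation mentioned above can be illustrated with a common approximation: given horizon (skyline) elevation angles sampled at equal azimuth steps, the sky view factor under an isotropic sky is the mean of cos²θ over azimuth. This is a standard textbook formula, not necessarily the exact method used in the paper.

```python
import math


def sky_view_factor(horizon_elev_deg):
    """SVF from skyline elevation angles sampled at equal azimuth steps.

    Uses the common isotropic-sky approximation SVF = mean(cos^2(theta)).
    A flat horizon (all zeros) gives 1.0; higher skylines reduce the SVF.
    """
    return sum(math.cos(math.radians(t)) ** 2
               for t in horizon_elev_deg) / len(horizon_elev_deg)
```

For a skyline extracted from a fisheye or 3D-camera image at 1° azimuth resolution, `horizon_elev_deg` would be a 360-element list; the values near 0.99 reported in these station surveys correspond to skylines only a few degrees high on average.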

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2013.10a
    • /
    • pp.763-764
    • /
    • 2013
  • A three-dimensional image capturing device and its signal processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with a conventional color image, so that the full information human eyes experience in real life can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The suggested novel optical resonator enables capture of a full HD depth image with depth accuracy on the millimeter scale, the largest depth image resolution among state-of-the-art devices, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture that captures 14 Mp color and full HD depth images simultaneously (Figures 2, 3). The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, 3D camera system prototype, and signal processing algorithms.
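The 20 MHz Time-of-Flight principle the abstract relies on can be illustrated with the standard continuous-wave ToF depth relation, depth = c·Δφ / (4π·f_mod). This is textbook physics, not the authors' signal processing pipeline; the function names are illustrative.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def tof_depth(phase_rad, f_mod=20e6):
    """Depth from the measured modulation phase shift (continuous-wave ToF)."""
    return C * phase_rad / (4 * math.pi * f_mod)


def unambiguous_range(f_mod=20e6):
    """Maximum depth before the phase wraps past 2*pi."""
    return C / (2 * f_mod)
```

At the 20 MHz modulation used here the unambiguous range is about 7.5 m, which is why consumer ToF cameras at this frequency target room-scale scenes; higher modulation frequencies improve depth precision at the cost of a shorter wrap-around range.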

A Study on Stereoscopical Motion Graphic Production using Tracking Data from 3D Camera (3D카메라 트레킹데이터를 활용한 입체적 모션그래픽 제작방법 연구)

  • Lee, Junsang;Han, Soowhan;Lee, Imgeun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.232-233
    • /
    • 2015
  • With the development of digital media, the components of motion graphics are being produced in a variety of ways. Video content production techniques based on motion graphics can yield more creative and effective visuals depending on how new images are created. Motion graphics serve to convey visual information across diverse content, including the web, games, film, and broadcasting. New and experimental motion graphics work turns out very differently depending on the production method, and the video production process becomes a creative work of video content through the incorporation of new media technologies. This paper therefore presents a production method that realizes stereoscopic motion graphics by tracking the movement of a live-action camera with a virtual camera.

Quantitative Analysis of Lower Nose and Upper Lip Asymmetry in Patient with Unilateral Cleft Lip Nose Deformity using 3D camera (3D camera를 이용한 일측성 구순비변형환자에서의 비하부 및 상구순 비대칭의 정량적 분석)

  • Oh, Tae suk;Koh, Kyung suk;Kim, Tae gon
    • Archives of Plastic Surgery
    • /
    • v.36 no.6
    • /
    • pp.702-706
    • /
    • 2009
  • Purpose: Asymmetry of the lower nose and upper lip in patients with unilateral cleft lip nose deformity has traditionally been analyzed by direct measurement and photographic analysis, but these methods are limited in representing the real image because of their two-dimensional nature. The authors quantitatively analyzed this asymmetry using the 3D VECTRA system (Canfield, NJ, USA). Methods: Twenty-five patients with unilateral cleft lip nose deformity (12 male, 13 female, aged 4 to 19) were studied; 10 had right-side and 15 left-side deformity. Asymmetry was analyzed with the 3D VECTRA system. After taking three-dimensional photographs, the alar area, upper lip area, nostril perimeter, nostril area, Cupid's bow length, nostril height, and nostril width were measured, and correlation coefficients and inter-data quotients were calculated. Results: For nostril perimeter, the maximal difference between the cleft and non-cleft sides was 39.3%; the asymmetry quotient Qasy = Qcl/Qncl (Qcl, value of the cleft side; Qncl, value of the non-cleft side) ranged from 0.84 to 1.85, and in seven cases the cleft side was smaller. For nostril area, the maximal difference was 69.6%, and in 13 cases the cleft side was smaller. For the lower nasal area, the maximal difference was 37.2%, the asymmetry quotient ranged from 0.47 to 2.03, and in 20 cases the cleft side was smaller. The correlation coefficient between nostril perimeter and area was 0.8345. Conclusion: Using the 3D VECTRA system, the authors could measure the nostril perimeter and lower nasal area, which could not be measured with previous methods. Asymmetry of the midface was analyzed quantitatively through area comparison. Furthermore, postoperative change can be measured quantitatively.
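The asymmetry quotient defined in the abstract, Qasy = Qcl/Qncl, and the reported percentage differences can be computed as below. The helper names are illustrative; only the quotient definition comes from the abstract.

```python
def asymmetry_quotient(cleft_value, noncleft_value):
    """Qasy = Qcl / Qncl as defined in the abstract; 1.0 means perfect symmetry."""
    return cleft_value / noncleft_value


def percent_difference(cleft_value, noncleft_value):
    """Difference between sides expressed as a percentage of the larger side
    (an assumed convention for the 'maximal difference' figures)."""
    big, small = max(cleft_value, noncleft_value), min(cleft_value, noncleft_value)
    return 100.0 * (big - small) / big
```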

New Implementation and Test Methodology for Single Lens Stereoscopic 3D Camera System (새로운 단일렌즈 양안식 입체영상 카메라의 구현과 테스트 방법)

  • Park, Sangil;Yoo, Sunggeun;Lee, Youngwha
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.569-577
    • /
    • 2014
  • Since 2009, stereoscopic 3D movies and TV have been in the spotlight following the huge success of the movie "AVATAR". Moreover, many 3D movies and contents are created by mixing real-life shots with virtual animated imagery, as in "Robocop 3" and "Transformer 4" shown in 2014. However, shooting stereoscopic 3D video with a traditional stereoscopic rig camera system takes much more time to set up the rig and adjust its settings for proper filmmaking, which inevitably results in higher cost. These problems have eroded the momentum created by Avatar by decreasing demand for 3D stereoscopic video shooting. In this paper, the inherent problems of the traditional stereoscopic rig camera system are analyzed, and as a solution, a novel implementation of a single-lens optical stereoscopic 3D camera system is suggested. The new system is built on a technique for separating two light paths even when they pass through the same optical axis. It offers advantages in setup and shooting over traditional stereoscopic 3D rig systems, and it can acquire comfortable 3D stereoscopic video because of its favorable geometric error characteristics. This paper discusses the single-lens stereoscopic 3D camera system using rolling shutters and tests the geometric errors of the system. Lastly, other types of single-lens stereoscopic 3D camera systems are discussed with a view to the promising future of this approach.

Analysis of Observation Environment with Sky Line and Skyview Factor using Digital Elevation Model (DEM), 3-Dimensional Camera Image and Radiative Transfer Model at Radiation Site, Gangneung-Wonju National University (수치표고모델, 3차원 카메라이미지자료 및 복사모델을 이용한 Sky Line과 Skyview Factor에 따른 강릉원주대학교 복사관측소 관측환경 분석)

  • Jee, Joon-Bum;Zo, Il-Sung;Kim, Bu-Yo;Lee, Kyu-Tae;Jang, Jeong-Pil
    • Atmosphere
    • /
    • v.29 no.1
    • /
    • pp.61-74
    • /
    • 2019
  • To investigate the observational environment, the sky line and sky view factor (SVF) are calculated using a digital elevation model (DEM; 10 m spatial resolution) and a 3-dimensional (3D) sky image at the radiation site of Gangneung-Wonju National University (GWNU). Solar radiation is calculated using the GWNU solar radiation model with and without the sky line and SVF retrieved from the 3D sky image and the DEM. Compared with the maximum sky line elevation from Skyview, the result from the 3D camera is higher by 3° and that from the DEM is lower by 7°. The SVF calculated from the 3D camera, the DEM, and Skyview is 0.991, 0.998, and 0.993, respectively. When the solar path over time is analyzed using an astronomical solar map, the sky line from the 3D camera shields direct solar radiation up to a solar altitude of 14° at the winter solstice. Solar radiation is calculated at one-minute intervals, and monthly and annual accumulations are computed with the GWNU model. At the summer and winter solstices, the GWNU radiation site is shielded from direct solar radiation by the western mountains 40 and 60 minutes before sunset, respectively. The monthly difference between the plane and real surfaces is up to 29.18 MJ m⁻² with the 3D camera in November, while that with the DEM is 4.87 MJ m⁻² in January. The differences in annually accumulated solar radiation are 208.50 MJ m⁻² (2.65%) and 47.96 MJ m⁻² (0.63%) for direct solar radiation, and 30.93 MJ m⁻² (0.58%) and 3.84 MJ m⁻² (0.07%) for global solar radiation, respectively.
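The shielding test implied above, that direct radiation is blocked whenever the solar altitude falls below the skyline elevation at the sun's azimuth, can be sketched as follows. The per-degree skyline array and the function name are assumptions for illustration, not the GWNU model's interface.

```python
def direct_beam_blocked(sun_alt_deg, sun_az_deg, skyline_elev_deg):
    """True if the skyline shields the direct solar beam.

    skyline_elev_deg: horizon elevation per integer azimuth degree
    (a 360-element sequence, e.g. derived from a 3D sky image or DEM).
    """
    return sun_alt_deg < skyline_elev_deg[int(sun_az_deg) % 360]
```

Evaluating this test minute by minute along the astronomical solar path is what yields figures like "shielded up to a solar altitude of 14° at the winter solstice" or "blocked by the western mountains 40 to 60 minutes before sunset".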

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with the problem of precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of obtaining the cameras' extrinsic parameters from two-dimensional images suffers from large estimation error. We propose a method that obtains coordinate transformation parameters with an error within a valid range by using depth images and function optimization, in order to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
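As a point of comparison for the registration step, the standard SVD-based (Kabsch) estimate of a rigid transform between corresponding 3D points is sketched below. The paper's own method uses depth images and function optimization, so this is an illustrative baseline, not the authors' algorithm.

```python
import numpy as np


def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points (Kabsch/Procrustes).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given point correspondences between two RGB-D views, this closed-form solution aligns one camera's cloud into the other's frame; multi-camera pipelines typically chain or jointly refine such pairwise transforms.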