• Title/Summary/Keyword: 자동영상정합 (automatic image registration)


Topographic Survey at Small-scale Open-pit Mines using a Popular Rotary-wing Unmanned Aerial Vehicle (Drone) (보급형 회전익 무인항공기(드론)를 이용한 소규모 노천광산의 지형측량)

  • Lee, Sungjae;Choi, Yosoon
    • Tunnel and Underground Space
    • /
    • v.25 no.5
    • /
    • pp.462-469
    • /
    • 2015
  • This study carried out a topographic survey at a small-scale open-pit limestone mine in Korea (the Daesung MDI Seoggyo office) using a popular rotary-wing unmanned aerial vehicle (UAV, drone; DJI Phantom 2 Vision+). A 30-minute automatic flight at an altitude of 100 m and a speed of 3 m/s yielded 89 aerial photos. After data processing for correction and matching, a point cloud of 34 million points with X, Y, Z coordinates was extracted from the photos, from which an orthomosaic image and a digital surface model with 5 m grid spacing were generated. A comparison of the X, Y, Z coordinates of 5 ground control points measured by differential global positioning system with those determined by UAV photogrammetry showed root mean squared errors of around 10 cm in each coordinate. Therefore, popular rotary-wing UAV photogrammetry is expected to be effectively utilized in small-scale open-pit mines as a technology able to replace or supplement existing topographic surveying equipment.
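The accuracy check described above compares ground control point (GCP) coordinates from two surveys via a per-axis root mean squared error. A minimal sketch of that computation, using entirely illustrative coordinates (the paper's actual survey values are not reproduced here):

```python
import numpy as np

# Hypothetical GCP coordinates in metres as measured by differential GPS.
dgps = np.array([
    [1000.00, 2000.00, 150.00],
    [1010.00, 2005.00, 151.20],
    [1020.00, 1995.00, 149.80],
    [1005.00, 2010.00, 150.50],
    [1015.00, 2002.00, 150.90],
])

# The same points as determined by UAV photogrammetry, offset by
# illustrative residuals on the order of 10 cm.
uav = dgps + np.array([
    [0.08, -0.05, 0.11],
    [-0.10, 0.07, -0.09],
    [0.05, 0.10, 0.12],
    [-0.07, -0.08, -0.10],
    [0.09, 0.06, 0.08],
])

# Root mean squared error per axis (X, Y, Z), in metres.
rmse = np.sqrt(np.mean((uav - dgps) ** 2, axis=0))
print(rmse)
```

With residuals of this size, each per-axis RMSE comes out around 0.1 m, matching the scale of accuracy the abstract reports.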

Rotation Errors of Breast Cancer on 3D-CRT in TomoDirect (토모다이렉트 3D-CRT을 이용한 유방암 환자의 회전 오차)

  • Jung, Jae Hong;Cho, Kwang Hwan;Moon, Seong Kwon;Bae, Sun Hyun;Min, Chul Kee;Kim, Eun Seog;Yeo, Seung-Gu;Choi, Jin Ho;Jung, Joo-Yong;Choe, Bo Young;Suh, Tae Suk
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.6-11
    • /
    • 2015
  • The purpose of this study was to analyze the rotational errors (roll, pitch, and yaw) in whole-breast cancer treatment delivered by three-dimensional conformal radiation therapy (3D-CRT) using TomoDirect (TD). Twenty patients previously treated with TD 3D-CRT were selected. We performed a retrospective clinical analysis based on 80 megavoltage computed tomography (MVCT) images, evaluating the systematic and random components of the patient setup error and the treatment setup margin (mm). In addition, the rotational error (degrees) of each patient was analyzed using automatic image registration. The treatment margins in the X, Y, and Z directions were 4.2 mm, 6.2 mm, and 6.4 mm, respectively. The mean rotational errors for roll, pitch, and yaw were 0.3°, 0.5°, and 0.1°, respectively, and all systematic and random errors were within 1.0°. Patient positioning errors in the Y and Z directions were generally higher than in the X direction. The percentages of treatment fractions with rotations of less than 2° in roll, pitch, and yaw were 95.1%, 98.8%, and 97.5%, respectively. However, the upper and lower edges of the treatment region, relative to its center, are likely to be displaced further by rotation as the treatment region lengthens. Patient-specific characteristics should therefore be considered to ensure the accuracy and reproducibility of treatment, and it is necessary to periodically confirm the rotational errors, including by repositioning the patient and repeating the MVCT scan.
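Setup-error studies like this one typically separate a systematic component (the spread of per-patient mean errors, Σ) from a random component (the root mean square of per-patient standard deviations, σ), and margin recipes such as van Herk's 2.5Σ + 0.7σ are commonly used to derive setup margins. A sketch with entirely illustrative roll values (the paper's MVCT measurements are not reproduced here):

```python
import numpy as np

# Hypothetical roll errors in degrees: 5 patients x 4 fractions.
errors = np.array([
    [0.2, 0.4, 0.1, 0.3],
    [-0.3, -0.1, -0.2, -0.4],
    [0.5, 0.3, 0.6, 0.4],
    [0.0, 0.2, -0.1, 0.1],
    [-0.2, -0.4, -0.3, -0.1],
])

patient_means = errors.mean(axis=1)

# Systematic error (Sigma): spread of the per-patient mean errors.
systematic = patient_means.std(ddof=1)

# Random error (sigma): RMS of the per-patient standard deviations.
random_err = np.sqrt((errors.std(axis=1, ddof=1) ** 2).mean())

# van Herk margin recipe, a common choice for setup-margin estimates.
margin = 2.5 * systematic + 0.7 * random_err
print(round(systematic, 3), round(random_err, 3), round(margin, 3))
```

The same decomposition applies per translation axis (X, Y, Z) or per rotation axis (roll, pitch, yaw); only the input table changes.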

Study of the UAV for Application Plans and Landscape Analysis (UAV를 이용한 경관분석 및 활용방안에 관한 기초연구)

  • Kim, Seung-Min
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.32 no.3
    • /
    • pp.213-220
    • /
    • 2014
  • This study conducted topographic analysis using orthophoto data from waypoint flights of a UAV, and constructed the system required for automatic waypoint flight using a multicopter. The results of the waypoint photography are as follows. First, the waypoint flight over an area of 9.3 ha took 40 minutes of photogrammetry in total. The multicopter maintained a constant flight altitude and speed, so accurate photography was conducted over the waypoints determined by the ground station, confirming the effectiveness of the photogrammetry. Second, a digital camera was attached to the multicopter, which is lightweight and low in cost compared to a typical photogrammetric unmanned airplane, and its mobility and economy were verified. In addition, matching the photo data and producing DEM and DXF files made it possible to analyze the topography. Third, a high-resolution (2 cm) orthophoto was produced for the inside of a river, showing that changes in vegetation and topography around the river can be analyzed. Fourth, the approach can support more in-depth landscape research such as terrain analysis and visibility analysis. This method may be widely used to analyze various terrains in cities and rivers. It can also be used for landscape control of cultural remains and tourist sites, as well as management of cultural and historical resources, for example through visibility analysis based on the constructed DSM.
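Once a DEM grid has been produced from the matched photos, terrain analysis such as slope mapping reduces to finite differences over the elevation raster. A minimal sketch on a synthetic inclined plane (grid spacing and elevations are assumptions, not the paper's 2 cm river DEM):

```python
import numpy as np

cell = 1.0  # assumed grid spacing in metres
x = np.arange(0, 10, cell)
y = np.arange(0, 10, cell)
xx, yy = np.meshgrid(x, y)

# A simple inclined plane standing in for a real DEM raster.
dem = 0.1 * xx + 0.05 * yy

# Elevation gradients per cell, then slope in degrees from the
# gradient magnitude.
dz_dy, dz_dx = np.gradient(dem, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(round(float(slope_deg.mean()), 2))
```

For a plane the slope is uniform (about 6.4° here); on a real DEM the same two lines yield a per-cell slope raster, and aspect follows analogously from `np.arctan2(dz_dy, dz_dx)`.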

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching allow the facial region to be detected efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
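The final RBF step above interpolates known control-point displacements out to the surrounding non-feature vertices. A minimal sketch of that idea with a Gaussian kernel, using 2D points and displacements invented for illustration (the paper's mesh and parameters are not reproduced):

```python
import numpy as np

def rbf(r, eps=1.0):
    # Gaussian radial basis function; eps controls the falloff radius.
    return np.exp(-(eps * r) ** 2)

# Illustrative feature (control) points and their known displacements.
control = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.1], [0.05, 0.05]])

# Solve for RBF weights so the interpolant exactly reproduces the
# control-point displacements.
dists = np.linalg.norm(control[:, None, :] - control[None, :, :], axis=2)
weights = np.linalg.solve(rbf(dists), disp)

def deform(points):
    # Displace arbitrary (non-feature) points by the weighted sum of
    # basis functions centred on the control points.
    d = np.linalg.norm(points[:, None, :] - control[None, :, :], axis=2)
    return points + rbf(d) @ weights

moved = deform(np.array([[0.5, 0.5]]))
print(moved)
```

By construction, `deform` returns exactly `control + disp` at the feature points and blends smoothly between them elsewhere, which is the behaviour needed to drag non-feature vertices along with the animated control points.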