• Title/Summary/Keyword: Reality Capture


Open-Source Based Smartphone Image Analysis for Pavement Crack Detection (노면 균열 검출을 위한 오픈소스 기반 스마트폰 영상해석)

  • Kim Tae-Hyun;Lee Yong-Chang
    • Journal of Urban Science
    • /
    • v.13 no.1
    • /
    • pp.43-52
    • /
    • 2024
  • This study evaluates the feasibility and accuracy of using smartphones for road crack detection through open-source and commercial image analysis tools. High-resolution images were captured with Galaxy and iPhone smartphones, and their accuracy was enhanced using ground control points (GCPs) determined by Network RTK surveying. The study utilized Reality Capture and Pix4DMatic for image analysis, comparing their results with actual measurements. Pix4DMatic effectively converted smartphone images into precise 3D models, detecting even small cracks with minimal error. The findings indicate that smartphones offer a cost-effective and efficient solution for road maintenance, providing high precision and convenience without the need for frequent site visits. Future research should validate this method under various conditions and enhance data collection and analysis automation. (An illustrative error-comparison sketch follows this entry.)

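As an illustration of the accuracy comparison described in this entry, the following minimal Python sketch compares crack widths read off a photogrammetric 3D model against reference field measurements. The values, units, and variable names are hypothetical assumptions for illustration, not data from the paper.

```python
# Hypothetical accuracy check: compare crack widths read off the photogrammetric
# 3D model against reference field measurements. Values are illustrative only.
import math

field_mm = [2.1, 3.4, 1.8, 5.0, 2.7]   # tape/caliper measurements (mm)
model_mm = [2.0, 3.6, 1.7, 5.2, 2.6]   # widths measured on the 3D model (mm)

errors = [m - f for m, f in zip(model_mm, field_mm)]
mean_error = sum(errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"mean error: {mean_error:+.2f} mm, RMSE: {rmse:.2f} mm")
```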

Development of a Real Time Three-Dimensional Motion Capture System by Using Single PSD Unit (단일 PSD를 이용한 실시간 3차원 모션캡쳐 시스템 개발)

  • Jo, Yong-Jun;Oh, Choon-Suk;Ryu, Young-Kee
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.11
    • /
    • pp.1074-1080
    • /
    • 2006
  • Motion capture systems are gaining popularity in entertainment, medicine, sports, education, and industry, with animation and gaming applications for entertainment taking the lead. A wide variety of systems are available for motion capture, but most of them are complicated and expensive. In the general class of optical motion capture, two or more optical sensors are needed to measure the 3D positions of the markers attached to the body. Recently, a 3D motion capture system using two Position Sensitive Detector (PSD) optical sensors was introduced to capture the high-speed motion of active infrared LED markers. The PSD-based system, however, is limited by a geometric calibration procedure for the two PSD sensor modules that is too difficult for ordinary users. In this research, we introduce a new system that uses a single PSD sensor unit to obtain the 3D positions of active IR LED markers. This new system is easy to calibrate and inexpensive.

Correlation Between Knee Muscle Strength and Maximal Cycling Speed Measured Using 3D Depth Camera in Virtual Reality Environment

  • Kim, Ye Jin;Jeon, Hye-seon;Park, Joo-hee;Moon, Gyeong-Ah;Wang, Yixin
    • Physical Therapy Korea
    • /
    • v.29 no.4
    • /
    • pp.262-268
    • /
    • 2022
  • Background: Virtual reality (VR) programs based on motion capture cameras are among the most convenient and cost-effective approaches for remote rehabilitation. Assessment of physical function is critical for providing optimal VR rehabilitation training; however, direct muscle strength measurement using camera-based kinematic data is impracticable. Therefore, it is necessary to develop a method to indirectly estimate users' muscle strength from values obtained with a motion capture camera. Objectives: The purpose of this study was to determine whether the pedaling speed converted by the VR engine from captured foot position data in the VR environment can be used as an indirect way to evaluate knee muscle strength, and to investigate the validity and reliability of a camera-based VR program. Methods: Thirty healthy adults were included in this study. Each subject performed a 15-second maximum pedaling test in the VR and built-in speedometer modes. In the VR speedometer mode, a motion capture camera was used to detect the position of the ankle joints and automatically calculate the pedaling speed. An isokinetic dynamometer was used to assess the isometric and isokinetic peak torques of knee flexion and extension. Results: The pedaling speeds in the VR and built-in speedometer modes showed a significantly high positive correlation (r = 0.922). In addition, the intra-rater reliability of the pedaling speed in the VR speedometer mode was good (ICC [intraclass correlation coefficient] = 0.685). Pearson correlation analysis also revealed a significant moderate positive correlation between the pedaling speed of the VR speedometer and the peak torque of isokinetic knee flexion (r = 0.639) and extension (r = 0.598). Conclusion: This study suggests the potential benefits of measuring maximum pedaling speed using a 3D depth camera in a VR environment as an indirect assessment of muscle strength. However, further technological improvement is needed to obtain more accurate estimates of muscle strength from the VR cycling test. (An illustrative sketch of the correlation computation follows this entry.)
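
As a rough illustration of the correlation statistics reported in this entry, the sketch below computes Pearson correlations from paired measurements. The arrays and variable names are hypothetical assumptions, not the study's data.

```python
# Hypothetical paired measurements; values are illustrative only.
from scipy.stats import pearsonr

vr_speed_rpm = [62.0, 71.5, 55.3, 80.2, 66.8, 74.1]            # VR speedometer mode
builtin_speed_rpm = [60.5, 72.0, 54.8, 81.0, 65.9, 73.2]        # built-in speedometer mode
knee_ext_torque_nm = [110.0, 128.5, 95.2, 142.3, 118.7, 131.0]  # isokinetic extension peak torque

r_speed, p_speed = pearsonr(vr_speed_rpm, builtin_speed_rpm)
r_torque, p_torque = pearsonr(vr_speed_rpm, knee_ext_torque_nm)

print(f"VR vs built-in speed:    r = {r_speed:.3f}, p = {p_speed:.3f}")
print(f"VR speed vs ext. torque: r = {r_torque:.3f}, p = {p_torque:.3f}")
# Test-retest reliability (ICC) could be computed with, e.g., pingouin.intraclass_corr.
```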

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.106-123
    • /
    • 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures involved in combining cartoon virtual characters with virtual studio technology, providing reference and experience for the multi-scenario application of digital humans. Compared with earlier live broadcasts that mixed real and virtual elements, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular-camera driving technology to launch a virtual cartoon-character talk show, "Beast Town," that achieves a seamless combination of the real and the virtual, further enhancing program immersion and the audio-visual experience and expanding the boundaries of virtual production. In the talk show, motion capture shooting technology is used for final picture synthesis. The virtual scene must present dynamic effects while driving the digital human and moving with the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time driving of digital humans, and composite picture rendering. We focus on issues such as virtual-real data docking and monocular-camera motion capture effects, and we combine camera outward tracking, multi-scene picture perspective, multi-machine rendering, and other solutions to effectively solve picture linkage and rendering quality problems in a deeply immersive space environment, presenting users with the visual effect of linkage between digital humans and live guests.

Augmented Reality and Virtual Reality Technology Trend for Unmanned Aerial Vehicles (무인항공기를 위한 증강/가상현실 기술 동향)

  • Bang, J.S.;Lee, Y.H.;Lee, H.J.;Lee, G.H.
    • Electronics and Telecommunications Trends
    • /
    • v.32 no.5
    • /
    • pp.117-126
    • /
    • 2017
  • With advances in high-performance, lightweight hardware components and control software, unmanned aerial vehicles (UAVs) have expanded in use, not only for military applications but also for civilian applications. To complete their tasks at remote locations, UAVs are generally equipped with a camera, and various sensors and hardware devices can be attached according to the particular task. When UAVs capture video images and transmit them to the user's interface, augmented reality (AR) and virtual reality (VR) technologies may offer advantages as a user interface for controlling the UAV. In this paper, we review AR and VR applications for UAVs and discuss their future directions.

A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.355-372
    • /
    • 2021
  • Recently, high-performance HMDs (head-mounted displays) are becoming wireless owing to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality within a single space simultaneously. Existing multi-user virtual reality platforms track user location and sense motion using vision sensors and active markers. However, immersion decreases because of overlapping markers and frequent matching errors caused by reflected light. The goal of this study is to develop a multi-user virtual reality moving platform for a single space that resolves these sensing errors and the decrease in user immersion. To achieve this goal, a hybrid sensing technology was developed that converges vision-sensor position tracking, IMU (inertial measurement unit) motion capture, and smart-glove-based gesture recognition (a generic fusion sketch follows this entry). In addition, an integrated safety operation system was developed that ensures user safety and supports multimodal feedback without reducing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the multi-user virtual reality moving platform for four users.
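
The hybrid sensing described in this entry converges optical position tracking with IMU motion data. The sketch below shows one generic way such fusion is commonly done, a complementary filter; the gains, time step, and sample data are assumptions for illustration, not the authors' implementation.

```python
# Generic complementary filter: blend IMU dead reckoning (fast but drifting)
# with an absolute camera position fix (slower but drift-free).
# Gains, time step, and sample data are illustrative assumptions.

def fuse_position(prev_pos, vel, accel, cam_pos, dt=0.01, alpha=0.98):
    """Return a fused 1D position estimate (metres) and updated velocity."""
    vel = vel + accel * dt            # integrate IMU acceleration
    imu_pos = prev_pos + vel * dt     # dead-reckoned position
    fused = alpha * imu_pos + (1.0 - alpha) * cam_pos
    return fused, vel

# Example along one axis of the 6 m x 6 m play area.
pos, vel = 0.0, 0.0
samples = [(0.2, 0.01), (0.2, 0.02), (0.0, 0.05), (-0.1, 0.07)]  # (accel m/s^2, camera fix m)
for accel, cam in samples:
    pos, vel = fuse_position(pos, vel, accel, cam)
print(f"fused position: {pos:.4f} m")
```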

Case Study of Short Animation with Facial Capture Technology Using Mobile

  • Jie, Gu;Hwang, Juwon;Choi, Chulyoung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.56-63
    • /
    • 2020
  • The Avengers films produced by Marvel show visual effects that were impossible to produce in the past. Companies that produce film special effects initially required large staffs and expensive equipment, but the technology is gradually becoming feasible for smaller companies that lack high-priced equipment and a large workforce. The development of hardware and software is making these tools increasingly available to the general public as well as to experts. As the game industry developed, high-performance computers were quickly popularized, and equipment and software that were previously difficult for individuals to purchase became accessible. The development of the cloud has been a driving force in lowering software costs. As the augmented reality (AR) performance of mobile devices improves, advanced technologies such as motion tracking and face recognition no longer require expensive equipment. Under these circumstances, after applying mobile-based facial capture technology in an animation project, we identified its pros and cons and suggest solutions to improve on the problems found.

Potential of an Interactive Metaverse Platform for Safety Education in Construction

  • Yoo, Taehan;Lee, Dongmin;Yang, Jaehoon;Kim, Dohyung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.516-524
    • /
    • 2022
  • The construction industry is considered the most hazardous industry globally. Therefore, safety education is crucial for raising the safety awareness of construction workers working at construction sites and creating a safe working environment. However, current safety education methods and tools cannot provide trainees with the realistic and practical experiences that might help improve safety awareness in practice. A metaverse, a real-time network of 3D virtual worlds focused on social connection, was created for more interactive communication, collaboration, and coordination between users. Several previous studies have noted that the metaverse has excellent potential for improving safety education performance, but its required functions and practical applications have not been thoroughly researched. To fill this research gap, this paper reviews the potential benefits of a metaverse based on the current research and suggests its application for safety education purposes. The paper scrutinizes the metaverse's key functions, particularly its information and knowledge sharing function and its reality capture function. The authors then created a metaverse prototype based on these two key functions. The main contribution of this paper is reviewing the potential benefits of a metaverse for safety education. A realistic and feasible metaverse platform should be developed in future studies, and its impact on safety education should be quantitatively verified.


Comparative Analysis of Facial Animation Production by Digital Actors - Keyframe Animation and Mobile Capture Animation

  • Choi, Chul Young
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.176-182
    • /
    • 2024
  • In the recent game market, classic games released in the past are being re-released with high-quality visuals, and users are generally satisfied. Realistic digital actors, which could not be achieved in the past, are now becoming a reality. Epic Games launched the MetaHuman Creator website in September 2021, allowing anyone to easily create realistic human characters. Since then, the number of animations created using MetaHumans has been increasing. As characters become more realistic, the movement and expression animations expected by the audience must also be convincingly realized. Until recently, traditional methods were the primary approach for producing realistic character animations. For facial animation, Epic Games introduced an improved method in the Live Link app in 2023, which provides the highest quality among mobile-based techniques. In this context, this paper compares facial animation produced with keyframe animation and with mobile-based facial capture. After creating an emotional expression animation of four sentences, the results were compared in Unreal Engine. While the facial capture method is more natural and easier to use, the precise and exaggerated expressions possible with the keyframe method cannot be overlooked, suggesting that a hybrid approach using both methods will likely continue for the foreseeable future.

Muscular Activity Analysis in Lower Limbs from Motion and Visual Information of Luge Simulator based Virtual Reality (가상현실 루지 시뮬레이터의 동작과 영상정보별 인체 근육활성도 분석)

  • Kang, Seung Rok;Kim, Ui Ryung;Kim, Kyung;Bong, Hyuk;Kwon, Tae Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.9
    • /
    • pp.825-831
    • /
    • 2015
  • In this paper, motion and visual information are captured from a virtual reality luge simulator to analyze muscular activity in the lower limbs. The Luge Simulator consists of a motion platform with a pneumatic module for weight distribution. We recruited luge athletes and healthy subjects and made real-time surface EMG measurements to estimate lower-limb muscular activity according to the simulator's motion protocol, with a test conducted for each subject (a generic EMG-processing sketch follows this entry). The results indicated that the rectus femoris had the highest muscular activity according to the slope level and velocity of the luge. The soleus showed a high level of activity during turns, depending on the turning direction. We found that developing a virtual reality sports simulator based on such physical-response results could have positive effects on optimizing realism and the user's bodily sensation.
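
The lower-limb activity levels in this entry come from surface EMG. A common generic pipeline is full-wave rectification, a moving RMS envelope, and normalization to a maximum voluntary contraction (MVC); the sketch below illustrates that pipeline with made-up numbers and is not the authors' processing chain.

```python
# Generic surface-EMG activity estimate: rectify, moving RMS envelope, %MVC.
# Sample values, window length, and the MVC reference are illustrative assumptions.
import math

def rms_envelope(signal, window=4):
    """Moving root-mean-square of a full-wave-rectified EMG signal."""
    rectified = [abs(x) for x in signal]
    env = []
    for i in range(len(rectified)):
        seg = rectified[max(0, i - window + 1): i + 1]
        env.append(math.sqrt(sum(v * v for v in seg) / len(seg)))
    return env

emg_mv = [0.02, -0.15, 0.40, -0.35, 0.25, -0.05, 0.10, -0.30]  # raw EMG (mV)
mvc_rms_mv = 0.6                                               # RMS at maximum voluntary contraction

envelope = rms_envelope(emg_mv)
percent_mvc = [100.0 * v / mvc_rms_mv for v in envelope]
print(f"peak activity: {max(percent_mvc):.1f} %MVC")
```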