• Title/Summary/Keyword: virtual camera

Search Results: 477

Development of Automatic Accidents Detection Algorithm Using Image Sequence (영상을 이용한 자동 유고 검지 알고리즘 개발)

  • Lee, Bong-Keun;Lim, Joong-Seon;Han, Min-Hong
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.127-134
    • /
    • 2003
  • This paper develops an algorithm for the automatic detection of traffic accidents from image sequences. The algorithm is designed to detect stopped vehicles within the camera's field of view, whether caused by a traffic accident, a breakdown, or an illegal stop on the road shoulder. Virtual traps are set on accident-prone spots, and the changes in the gray levels of the pixels on these traps, which reflect vehicle motion at the corresponding spots, are analyzed. The proposed algorithm is verified by simulating several situations and checking whether it detects them correctly.
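The virtual-trap approach described above can be illustrated with a short sketch. The following is not the paper's code; it assumes grayscale frames as NumPy arrays, and the trap rectangles, thresholds, and frame counts are hypothetical placeholders.

```python
# Hedged sketch of the virtual-trap idea: a stopped vehicle is flagged when a trap
# region is occupied (differs from the empty-road background) but shows no
# frame-to-frame motion for a sustained number of frames.
import numpy as np

# A "virtual trap" is a small rectangle placed on an accident-prone spot (hypothetical values).
TRAPS = [(100, 140, 200, 260), (300, 340, 200, 260)]  # (row0, row1, col0, col1)

MOTION_THRESH = 2.0     # mean |frame-to-frame| gray change below this => no motion
OCCUPIED_THRESH = 15.0  # mean |frame - background| above this => something on the trap
STOP_FRAMES = 90        # ~3 s at 30 fps of "occupied but motionless" => stopped vehicle

def detect_stopped_vehicles(frames, background):
    """frames: iterable of grayscale (H, W) float arrays; background: empty-road image."""
    counters = [0] * len(TRAPS)
    prev = None
    events = []
    for t, frame in enumerate(frames):
        for i, (r0, r1, c0, c1) in enumerate(TRAPS):
            patch = frame[r0:r1, c0:c1]
            occupied = np.abs(patch - background[r0:r1, c0:c1]).mean() > OCCUPIED_THRESH
            moving = prev is not None and np.abs(patch - prev[r0:r1, c0:c1]).mean() > MOTION_THRESH
            counters[i] = counters[i] + 1 if (occupied and not moving) else 0
            if counters[i] == STOP_FRAMES:
                events.append((t, i))  # frame index and trap index of a suspected stop
        prev = frame
    return events
```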

A Study of Changes in Production by Domestic Broadcasters Using Virtual Studio

  • Lee, Jun-Sang;Park, Sung-Dae;Kim, Chee-Yong
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.1
    • /
    • pp.117-123
    • /
    • 2011
  • This paper investigates and analyzes how widely virtual studios are used by domestic broadcasting companies and how they shape those companies' production environments. The history of the virtual studio goes back to the beginnings of computer graphics. A virtual studio produces a program using graphic sets created by a computer rather than physical sets, making it possible to go beyond the limitations of a real studio set and to create a wide variety of visual effects by computer. A virtual studio can generate three-dimensional graphics and interlock them with live camera images to build the visual spaces for various programs. The current trend aims not at simple image synthesis but at images so natural that it is hard to tell they were artificially created. This paper analyzes how production with virtual studios has changed at domestic broadcasters, examines how they are used, and suggests an ideal direction for developing production.

A Study of the Adaptation of 2-Dimensional Hair-Style Computer Simulation and Prospects of the 3D System (2D 헤어스타일 시뮬레이션 현황과 3D 시스템 도입방향에 관한 연구)

  • HwangBo, Yun;Ha, Kyu-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.7
    • /
    • pp.221-229
    • /
    • 2008
  • The development of computers and multimedia has brought about a new technology: virtual reality. Among virtual-reality technologies, computer simulation has spread into aviation, the automotive industry, medical science, sports, education, and even the fashion industry. This study examines the two-dimensional hair-style computer simulation systems now coming into common use and the three-dimensional systems still under development. It identifies several problems that must be solved to commercialize the 3D systems, such as the bulky 3D booth and the inexpensive but low-quality cameras, and suggests alternatives, for instance switching from object photography to panoramic photography and substituting mid- to high-end, higher-quality cameras.

Game Engine Driven Synthetic Data Generation for Computer Vision-Based Construction Safety Monitoring

  • Lee, Heejae;Jeon, Jongmoo;Yang, Jaehun;Park, Chansik;Lee, Dongmin
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.893-903
    • /
    • 2022
  • Recently, computer vision (CV)-based safety monitoring (i.e., object detection) systems have been widely researched in the construction industry. Sufficient, high-quality data are required to detect objects accurately, and such data collection is especially important for detecting small objects or images taken from different camera angles. Although several previous studies have proposed novel data augmentation and synthetic data generation approaches, the problem is still not thoroughly addressed (i.e., accuracy remains limited) in the dynamic construction work environment. In this study, we propose a game engine-driven synthetic data generation model to enhance the accuracy of CV-based object detection, mainly targeting small objects. In a virtual 3D environment, we generate synthetic data that complement the training images by altering the virtual camera angles. The main contribution of this paper is to confirm whether synthetic data generated in a game engine can improve the accuracy of a CV-based object detection model.
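The camera-angle sweep described in this abstract can be sketched independently of any particular game engine. The helper below only generates look-at camera poses on a sphere around a target; the renderer, scene, target point, radius, and angle ranges are assumptions, not the authors' setup.

```python
# Hedged sketch: generate look-at camera extrinsics around a target so a renderer can
# produce synthetic training views of small objects from many virtual camera angles.
import itertools
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Return a 3x4 world-to-camera extrinsic [R|t] for a camera at `eye` looking at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    R = np.stack([right, true_up, -forward])   # rows: camera x, y, z axes in world coordinates
    t = -R @ eye
    return np.hstack([R, t[:, None]])

def orbit_poses(target, radius, yaws_deg, pitches_deg):
    """Camera poses on a sphere around `target`, one per (yaw, pitch) pair."""
    poses = []
    for yaw, pitch in itertools.product(yaws_deg, pitches_deg):
        y, p = np.radians(yaw), np.radians(pitch)
        eye = target + radius * np.array([np.cos(p) * np.cos(y),
                                          np.cos(p) * np.sin(y),
                                          np.sin(p)])
        poses.append(look_at(eye, target))
    return poses

# Example: 36 yaw steps x 3 elevations = 108 synthetic viewpoints of a (hypothetical) helmet model.
poses = orbit_poses(target=np.array([0.0, 0.0, 1.7]), radius=5.0,
                    yaws_deg=range(0, 360, 10), pitches_deg=[10, 25, 40])
```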

Virtual Angioscopy for Diagnosis of Carotid Artery Stenosis (경동맥 협착증 진단을 위한 가상혈관경)

  • 김도연;박종원
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.821-828
    • /
    • 2003
  • Virtual angioscopy was implemented using MR angiography (MRA) images of the carotid artery. The inside of the carotid artery is one of the body regions that cannot be reached by real optical endoscopy but can be visualized with virtual endoscopy. To determine the navigation path, we segmented the common carotid artery and the internal carotid artery from the MRA image and used the coordinates obtained by a medial-axis transformation as the navigation path of the virtual camera. Perspective projection and the marching cubes algorithm were used to render the surface from the volumetric MRA data. A stroke occurs when brain cells die because of decreased blood flow to the brain; the carotid artery is the primary vessel supplying that flow, so carotid artery stenosis is a primary cause of stroke. Virtual angioscopy is highly recommended as a diagnostic tool with which the specific site of a stenosis can be identified and its degree assessed, and it can also be used as an education and training tool for endoscopists and radiologists.
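The geometry pipeline the abstract describes (segmentation, medial-axis navigation path, marching-cubes surface) can be approximated with off-the-shelf tools. The sketch below is not the paper's implementation; it substitutes a simple intensity threshold for the paper's carotid segmentation and leaves rendering to a separate viewer.

```python
# Hedged sketch: centerline voxels approximate the virtual-camera navigation path, and
# marching cubes extracts the vessel surface from the volumetric MRA data.
import numpy as np
from skimage.measure import marching_cubes
from skimage.morphology import skeletonize

def angioscopy_geometry(mra_volume, vessel_threshold):
    """Return (centerline voxels for the camera path, surface mesh) from an MRA volume."""
    vessel_mask = mra_volume > vessel_threshold          # crude lumen segmentation (assumption)
    centerline = np.argwhere(skeletonize(vessel_mask))   # medial-axis voxels ~ navigation path
    verts, faces, normals, _ = marching_cubes(vessel_mask.astype(np.uint8), level=0.5)
    return centerline, (verts, faces, normals)

# The centerline voxels, ordered along the vessel, give key positions for the virtual
# camera; the mesh is what a perspective renderer displays during the fly-through.
```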

Learners' Responses to a Virtual Cadaver Dissection Nerve Course in the COVID Era: A Survey Study

  • Lisiecki, Jeffrey L.;Johnson, Shepard Peir;Grant, David;Chung, Kevin C.
    • Archives of Plastic Surgery
    • /
    • v.49 no.5
    • /
    • pp.676-682
    • /
    • 2022
  • Background Virtual education is an evolving method for teaching medical learners. During the coronavirus disease 2019 pandemic, remote learning has provided a replacement for conferences, lectures, and meetings, but has not been described as a method for conducting a cadaver dissection. We aim to demonstrate how learners perceive a virtual cadaver dissection as an alternative to live dissection. Methods A virtual cadaver dissection was performed to demonstrate several upper extremity nerve procedures. These procedures were livestreamed as part of an educational event with multimedia and interactive audience questions. Participants were queried both during and after the session regarding their perceptions of this teaching modality. Results Attendance of a virtual dissection held for three plastic surgery training institutions began at 100 and finished with 70 participants. Intrasession response rates from the audience varied between 68 and 75%, of which 75% strongly agreed that they were satisfied with the virtual environment. The audience strongly agreed or agreed that the addition of multimedia captions (88%), magnified video loupe views (82%), and split-screen multicast view (64%) was beneficial. Postsession response rate was 27%, and generally reflected a positive perspective about the content of the session. Conclusions Virtual cadaver dissection is an effective modality for teaching surgical procedures and can be enhanced through technologies such as video loupes and multiple camera perspectives. The audience viewed the virtual cadaver dissection as a beneficial adjunct to surgical education. This format may also make in-person cadaver courses more effective by improving visualization and allowing for anatomic references to be displayed synchronously.

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.4
    • /
    • pp.267-277
    • /
    • 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of points from images taken with a camera, but the two fields are not directly compatible with each other because of differences in their camera lens distortion models and camera coordinate systems. In general, drone images are processed by bundle block adjustment using computer vision-based software, and the plotting of the images is then performed with photogrammetry-based software for mapping. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. This study therefore describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision and proposes a methodology for converting between them. To verify the conversion of the camera lens distortion models, lens distortion was first added to distortion-free virtual coordinates using the computer vision-based lens distortion model. The distortion coefficients were then determined using the photogrammetry-based lens distortion model, the lens distortion was removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the photogrammetric lens distortion coefficients to assess accuracy; the calculated root mean square error of the y-parallax was within 0.3 pixels.
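The verification procedure in this abstract can be mimicked with a toy example. The sketch below models only the radial terms k1 and k2, with hypothetical interior-orientation and distortion values; the paper's full conversion also handles tangential distortion, focal-length scaling, and the differing axis conventions.

```python
# Hedged toy verification: distort ideal points with an OpenCV-style (computer-vision)
# radial model, fit Brown-style (photogrammetry) radial coefficients by least squares,
# undistort with them, and check the residual RMS in pixels.
import numpy as np

f, cx, cy = 1000.0, 640.0, 480.0            # hypothetical interior orientation (pixels)
k1_cv, k2_cv = -0.12, 0.03                  # hypothetical computer-vision coefficients

rng = np.random.default_rng(0)
x_n = rng.uniform(-0.4, 0.4, (500, 2))      # ideal (distortion-free) normalized coordinates

# Computer-vision convention: distort in normalized coordinates, then map to pixels.
r2 = (x_n ** 2).sum(axis=1, keepdims=True)
x_nd = x_n * (1 + k1_cv * r2 + k2_cv * r2 ** 2)
px_d = x_nd * f + [cx, cy]                  # distorted pixel coordinates
px_ideal = x_n * f + [cx, cy]               # where the points should land after correction

# Photogrammetry convention (radial part of the Brown model): corrections are applied in
# image coordinates relative to the principal point, x_ideal = x_d + x_d*(K1 r^2 + K2 r^4).
xb = px_d - [cx, cy]
rb2 = (xb ** 2).sum(axis=1, keepdims=True)
A = np.vstack([(xb * rb2).ravel(), (xb * rb2 ** 2).ravel()]).T
b = (px_ideal - px_d).ravel()
K1, K2 = np.linalg.lstsq(A, b, rcond=None)[0]

px_corrected = px_d + xb * (K1 * rb2 + K2 * rb2 ** 2)
rms = np.sqrt(((px_corrected - px_ideal) ** 2).sum(axis=1).mean())
print(f"K1={K1:.3e}, K2={K2:.3e}, RMS residual = {rms:.4f} px")
```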

Development of a Project Schedule Simulation System by a Synchronization Methodology of Active nD Object and Real Image of Construction Site

  • Kim, Hyeon Seung;Shin, Jong Myeong;Park, Sang Mi;Kang, Leen Seok
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.344-348
    • /
    • 2015
  • Web-camera images are used to check the construction status of a site from a remote office and can also support safety management. This study develops a construction schedule simulation system based on an active nD object linked with real web-camera images from the construction site. Conventional progress control with a 4D object represents the progress of each activity with a color that reflects its status; because this method is still based purely on a virtual-reality object, its depiction is less realistic for practicing engineers. To take better advantage of BIM, the real image of the actual construction status and the 4D object of the planned schedule at a data date should therefore be compared on one screen simultaneously. The proposed methodology and the developed system are verified in a case project in which a web camera was installed.
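One way to picture the planned-versus-actual comparison on a single screen is a simple color-coded overlay on the web-camera frame. The snippet below is only an illustration; the status names, colors, and per-activity screen masks are assumptions rather than the paper's scheme.

```python
# Hedged illustration: blend a color-coded schedule status over the site web-camera frame
# so planned status and actual status can be compared on one screen.
import numpy as np

STATUS_COLORS = {"ahead": (0, 180, 0), "on_schedule": (0, 120, 255), "delayed": (220, 0, 0)}

def overlay_schedule(camera_frame, activity_masks, activity_status, alpha=0.4):
    """camera_frame: HxWx3 uint8 webcam image; activity_masks: {name: HxW bool mask of the
    activity's screen region}; activity_status: {name: status string from STATUS_COLORS}."""
    out = camera_frame.astype(np.float32)
    for name, mask in activity_masks.items():
        color = np.array(STATUS_COLORS[activity_status[name]], dtype=np.float32)
        out[mask] = (1 - alpha) * out[mask] + alpha * color   # alpha-blend the status color
    return out.astype(np.uint8)
```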

Development of Virtual Makeup Tool based on Mobile Augmented Reality

  • Song, Mi-Young;Kim, Young-Sun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.1
    • /
    • pp.127-133
    • /
    • 2021
  • In this study, an augmented reality-based makeup tool was built that analyzes the user's face shape against face-type reference model data and provides virtual makeup suited to that face type. To analyze the face shape, the face is first recognized in the image captured by the camera, and the features of the facial contour area are extracted as analysis attributes. The extracted contour feature points are then normalized so that they can be compared with the contour characteristics of each face-type reference model, and the face shape is predicted from the distances between the normalized feature points and those of each reference model. In the augmented reality-based virtual makeup, the face is recognized in real time in the camera image, the features of each facial area are extracted, and makeup matching the analyzed face shape is applied so the user can check the result. We expect the proposed system to help cosmetics consumers check which makeup design suits them, making purchase decisions more convenient, and to help users create an attractive self-image by applying facial makeup to their virtual selves.
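The face-shape matching step (normalizing contour feature points and comparing them with each reference model) can be sketched as follows. Landmark extraction from the camera image is assumed to be handled by a separate detector, and the reference face types are placeholders.

```python
# Hedged sketch: normalize the user's contour landmarks and pick the reference face type
# whose landmarks are closest.
import numpy as np

def normalize_contour(points):
    """Translate to the centroid and scale to unit size so faces of different sizes compare."""
    pts = np.asarray(points, dtype=np.float64)
    pts -= pts.mean(axis=0)
    return pts / np.linalg.norm(pts)

def classify_face_shape(user_contour, reference_contours):
    """reference_contours: {"oval": Nx2 array, "round": Nx2 array, ...} with matching landmark order."""
    user = normalize_contour(user_contour)
    distances = {shape: np.linalg.norm(user - normalize_contour(ref))
                 for shape, ref in reference_contours.items()}
    return min(distances, key=distances.get)  # face type with the smallest landmark distance
```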

Real-Time Image-Based Relighting for Tangible Video Teleconference (실감화상통신을 위한 실시간 재조명 기술)

  • Ryu, Sae-Woon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.14 no.6
    • /
    • pp.807-810
    • /
    • 2009
  • This paper presents a real-time image-based relighting system for tangible video teleconferencing. The proposed system renders the extracted human subject using virtual environment images, so it can virtually homogenize the lighting environments of remote users in a teleconference or render them as if they were in a virtual place. To realize this, the system obtains 3D object models of the users in real time using a controlled lighting setup consisting of a single color camera and two synchronized directional flash lights. Pure shading images are generated by subtracting the flash-off image from the flash-on image. Each pure shading reflectance map yields a directional normal map through multiplication with a basic normal-vector map, where each directional basic normal map is computed from the inner product of the incident light vector and the camera viewing vector; the basic normal vector is a basis component of the real surface normal vector. The proposed system lets users feel immersed in the teleconference as if they were in the virtual environment.
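The flash on/off subtraction at the core of this system can be illustrated with a minimal sketch. This is only the basic principle, not the paper's normal-map pipeline: the no-flash frame is subtracted to isolate each flash's shading, and a new light direction is approximated as a weighted mix of the two directional shading images; the flash directions here are assumptions.

```python
# Hedged sketch: isolate per-flash shading by on/off subtraction, then approximate a new
# lighting condition as a weighted mix of the two directional "basis" shading images.
import numpy as np

def pure_shading(flash_on, flash_off):
    """Both images HxWx3 float in [0,1]; the difference keeps only the flash's contribution."""
    return np.clip(flash_on.astype(np.float32) - flash_off.astype(np.float32), 0.0, None)

def relight(flash_left_on, flash_right_on, flash_off, new_light_dir,
            left_dir=np.array([-1.0, 0.0, 1.0]), right_dir=np.array([1.0, 0.0, 1.0])):
    """Blend the two directional shading images by how well each flash direction matches
    the desired virtual light direction."""
    basis = [pure_shading(flash_left_on, flash_off), pure_shading(flash_right_on, flash_off)]
    dirs = [left_dir / np.linalg.norm(left_dir), right_dir / np.linalg.norm(right_dir)]
    new_dir = new_light_dir / np.linalg.norm(new_light_dir)
    weights = np.clip([d @ new_dir for d in dirs], 0.0, None)
    if weights.sum() > 0:
        weights = weights / weights.sum()
    return np.clip(weights[0] * basis[0] + weights[1] * basis[1], 0.0, 1.0)
```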