• Title/Abstract/Keyword: 3D capturing

Search results: 142 items

Cosmic Evolution of Disk Galaxies seen through Bars

  • Kim, Taehyun;Sheth, Kartik;Athanassoula, Lia;Bosma, Albert
    • 천문학회보 / Vol. 42, No. 1 / pp.31.3-31.3 / 2017
  • The presence of a bar in a disk galaxy indicates that the galaxy has reached dynamical maturity and that secular evolution has started to play a key role in its evolution. Numerical simulations predict that as a barred galaxy evolves, the bar becomes longer by capturing disk stars in its immediate neighborhood. We test this hypothesis by exploring bar lengths and measuring the light deficit around the bar at various redshifts. Supplementing the barred galaxies already classified among later-type disk galaxies ($$T{\geq}2$$, Sheth et al. 2008), we classify barred galaxies among earlier-type disk galaxies (T<2) up to z~0.8 using F814W images from the Cosmic Evolution Survey (COSMOS). We estimate the length of bars analytically for ~400 galaxies and find a slight decrease in bar length with redshift. We also find that longer bars show a more prominent light deficit around the bar, and that this trend is stronger for nearby galaxies. Our results are consistent with the predictions of numerical simulations and imply that bar-induced secular evolution has been in place since z~0.8.


Magnetic Wireless Motion Capturing System and its Application for Jaw Tracking System and 3D Computer Input Device

  • Yabukami, S.;Arai, K.;Arai, K.I.;Tsuji, S.
    • Journal of Magnetics / Vol. 8, No. 1 / pp.70-73 / 2003
  • We have developed a new tracking system for jaw movement. The system consists of two permanent NdFeB magnets and an array of 32 two-axis fluxgate sensor elements. The two magnets are attached to the head and to a front tooth. Apart from the magnets, the system requires no attachments to the head or mouth such as a clutch or a magnetic field sensor. The proposed system tracks five degrees of freedom. Position accuracy within 2 mm was achieved. We also developed a 3D computer input device using this technique.
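
The abstract describes recovering a magnet's pose from a fluxgate sensor array; as an illustration of that kind of inverse problem, the sketch below fits a point-dipole field model to simulated array readings with nonlinear least squares. The sensor layout, dipole moment, and noise level are assumptions for the example, not details taken from the paper.

```python
# Hypothetical sketch: estimate a magnet's position from an array of field
# sensors by fitting a point-dipole model with nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(r_sensor, r_magnet, m):
    """Magnetic flux density of a point dipole with moment m located at r_magnet."""
    d = r_sensor - r_magnet
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    return MU0 / (4 * np.pi) * (
        3 * d * np.sum(d * m, axis=-1, keepdims=True) / dist**5 - m / dist**3)

# Assumed 4x4 planar array of two-axis sensors collapsed to point positions (metres).
xs, ys = np.meshgrid(np.linspace(-0.05, 0.05, 4), np.linspace(-0.05, 0.05, 4))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])

# Simulate readings from a magnet at an "unknown" position.
true_pos = np.array([0.01, -0.02, 0.08])
moment = np.array([0.0, 0.0, 0.05])  # assumed known dipole moment [A*m^2]
measured = dipole_field(sensors, true_pos, moment)
measured += 1e-9 * np.random.default_rng(0).normal(size=measured.shape)  # sensor noise

def residual(p):
    return (dipole_field(sensors, p, moment) - measured).ravel()

fit = least_squares(residual, x0=np.array([0.0, 0.0, 0.05]))
print("estimated position [m]:", fit.x)  # close to true_pos
```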

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 7 / pp.1794-1806 / 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the 2D sprite generation process that uses the proposed technique, a sequence of images is extracted from real-life footage captured by the user and combined with character images from within the game. Our research leverages cutting-edge deep learning-based image manipulation techniques, namely a GAN-based motion transfer network (the impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique enables the creation of diverse animations and motions from just one image. By utilizing these advancements, we focus on enhancing productivity in the game and animation industry through improved efficiency and streamlined production processes. Employing these state-of-the-art techniques, our research generates 2D sprite images with various motions, offering significant potential for boosting productivity and creativity in the industry.
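
As a rough illustration of how such a pipeline can be wired together, the sketch below samples frames from a driving motion video and packs the processed results into a sprite sheet; the trained impersonator (motion transfer) and U2-Net (matting) models are replaced with placeholder functions, so everything beyond the frame sampling and sheet assembly is assumed rather than taken from the paper.

```python
# Sketch of the sprite-assembly side of the pipeline. `transfer_motion` and
# `remove_background` are placeholders for the trained networks.
import cv2
from PIL import Image

def sample_frames(video_path, n_frames=16):
    """Grab n_frames roughly evenly spaced frames from a video as PIL images."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * max(total // n_frames, 1))
        ok, bgr = cap.read()
        if not ok:
            break
        frames.append(Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

def transfer_motion(character_image, driving_frame):
    return driving_frame  # placeholder for the impersonator network

def remove_background(frame):
    return frame          # placeholder for U2-Net matting

def build_sprite_sheet(character_image, driving_frames, cell=(128, 128), cols=4):
    """Run each driving frame through the (placeholder) models and tile the results."""
    cells = [remove_background(transfer_motion(character_image, f)).resize(cell)
             for f in driving_frames]
    rows = (len(cells) + cols - 1) // cols
    sheet = Image.new("RGBA", (cols * cell[0], rows * cell[1]), (0, 0, 0, 0))
    for i, c in enumerate(cells):
        sheet.paste(c.convert("RGBA"), ((i % cols) * cell[0], (i // cols) * cell[1]))
    return sheet

# build_sprite_sheet(Image.open("character.png"), sample_frames("motion.mp4")).save("sprites.png")
```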

PROTOTYPE AUTOMATIC SYSTEM FOR CONSTRUCTING 3D INTERIOR AND EXTERIOR IMAGE OF BIOLOGICAL OBJECTS

  • Park, T. H.;Hwang, H.;Kim, C. S.
    • 한국농업기계학회:학술대회논문집 / 한국농업기계학회 2000 THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING. V.II / pp.318-324 / 2000
  • Ultrasonic and magnetic resonance imaging systems are used to visualize the interior states of biological objects. These nondestructive methods have many advantages but are too expensive. They also do not give exact color information and may miss some details. If destroying the biological object to obtain interior and exterior information is acceptable, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system was composed of three modules. The first is the object handling and image acquisition module, which feeds and slices objects sequentially, keeps the paraffin cooled so that it remains solid, and captures the sectional images consecutively. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates the 3D graphic model. The handling module consists of a gripper, which grasps and feeds the object, and a cutting device, which cuts the object by moving a cutting edge forward and backward. Sliced sectional images were acquired and saved as bitmap files. The 3D model was generated to obtain volumetric information from these 2D sectional image files after they were segmented from the background paraffin. Once the 3D model was constructed on the computer, the user could manipulate it with various transformations such as translation, rotation, and scaling, including arbitrary sectional views.
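
The reconstruction step described above, stacking segmented slice images into a volume, can be sketched in a few lines; the file naming, the assumption that the paraffin is brighter than the specimen, and the threshold value are illustrative choices, not the paper's actual parameters.

```python
# Sketch: segment each sliced section from an assumed bright paraffin
# background and stack the masks into a 3D volume.
import glob
import numpy as np
from PIL import Image

def load_slices(pattern="slice_*.bmp"):
    """Read the saved sectional bitmaps in order as grayscale arrays."""
    return [np.asarray(Image.open(p).convert("L"), dtype=np.uint8)
            for p in sorted(glob.glob(pattern))]

def build_volume(slices, paraffin_threshold=200):
    """Mark voxels darker than the paraffin as belonging to the object."""
    masks = [s < paraffin_threshold for s in slices]
    return np.stack(masks, axis=0)  # shape: (n_slices, height, width)

# volume = build_volume(load_slices())
# A sectional view is then just an index into the array, e.g. volume[:, 120, :].
```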


Development of Automatic System for 3D Visualization of Biological Objects

  • Choi, Tae Hyun;Hwang, Heon;Kim, Chul Su
    • Agricultural and Biosystems Engineering / Vol. 1, No. 2 / pp.95-99 / 2000
  • Nondestructive methods such as ultrasonic and magnetic resonance imaging systems have many advantages but are still expensive. They also do not give exact color information and may miss some details. If destroying the biological object to obtain interior and exterior information is acceptable, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system was composed of three modules. The first was the object handling and image acquisition module, which fed and sliced the object sequentially, kept the paraffin cooled so that it remained solid, and captured the sectional images consecutively. The second was the system control and interface module, which controlled the actuators for feeding, slicing, and image capturing. The last was the image processing and visualization module, which processed the series of acquired sectional images and generated the 3D volumetric model. The handling module consisted of a gripper, which grasped and fed the object, and a cutting device, which cut the object by moving a cutting edge forward and backward. Sliced sectional images were acquired and saved as bitmap files. The 2D sectional image files were segmented from the background paraffin and used to generate the 3D model. Once the 3D model was constructed on the computer, the user could manipulate it with various transformations such as translation, rotation, and scaling, including arbitrary sectional views.


Structural reliability analysis using temporal deep learning-based model and importance sampling

  • Nguyen, Truong-Thang;Dang, Viet-Hung
    • Structural Engineering and Mechanics / Vol. 84, No. 3 / pp.323-335 / 2022
  • The main idea of the framework is to seamlessly combine a reasonably accurate and fast surrogate model with an importance sampling strategy. Developing a surrogate model for predicting structures' dynamic responses is challenging because it involves high-dimensional inputs and outputs. For this purpose, a novel surrogate model is designed based on cutting-edge deep learning architectures specialized for capturing temporal relationships within time-series data, namely the Long Short-Term Memory layer and the Transformer layer. After being properly trained, the surrogate model can be used in place of the finite element method to evaluate a structure's responses without requiring any specialized software. In addition, importance sampling is adopted to reduce the number of calculations required when computing the failure probability, by drawing more relevant samples near the critical regions. Thanks to the portability of the trained surrogate model, it can be integrated with importance sampling in a straightforward fashion, forming an efficient framework called TTIS that offers two advantages: fewer calculations are needed, and the computational time of each calculation is significantly reduced. The applicability and efficiency of the proposed approach are demonstrated through three examples of increasing complexity, involving a 1D beam, a 2D frame, and a 3D building structure. The results show that, compared with conventional Monte Carlo simulation, the proposed method provides highly similar reliability results with a reduction of up to four orders of magnitude in computation time.
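
The importance-sampling half of such a framework can be illustrated independently of the trained surrogate: in the sketch below a cheap analytic limit-state function stands in for the surrogate, and the limit state, the shifted sampling density, and the sample size are illustrative assumptions only.

```python
# Sketch: importance-sampling estimate of a failure probability, with a toy
# limit-state function standing in for the trained surrogate model.
import numpy as np
from scipy import stats

def limit_state(x):
    """g(x) <= 0 means failure; a cheap stand-in for the surrogate."""
    return 5.0 - x[:, 0] - x[:, 1]

dim, n = 2, 20_000
nominal = stats.multivariate_normal(mean=np.zeros(dim))           # true input density p(x)
proposal = stats.multivariate_normal(mean=np.array([2.5, 2.5]))   # shifted toward the failure region

x = proposal.rvs(size=n, random_state=0)
weights = np.exp(nominal.logpdf(x) - proposal.logpdf(x))  # p(x) / q(x)
pf = np.mean((limit_state(x) <= 0) * weights)
print(f"estimated failure probability: {pf:.2e}")  # exact value is Phi(-5/sqrt(2)) ~ 2.0e-4
```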

Real-time Implementation of Character Movement by Floating Hologram based on Depth Video

  • Oh, Kyoo-jin;Kwon, Soon-kak
    • Journal of Multimedia Information System / Vol. 4, No. 4 / pp.289-294 / 2017
  • In this paper, we implement character content with a floating hologram. The floating hologram is one of the hologram techniques that projects a 2D image onto a glass panel so that it appears as a 3D image in the air. The floating hologram technique is easy to apply and is used in exhibitions, corporate events, and advertising. This paper uses both depth information and the Unreal Engine for the floating hologram. Simulation results show that this method can make the character content follow the movement of the user in real time by capturing depth video.
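
One simple way to obtain the "movement of the user" from a depth frame is sketched below: pixels inside a working depth range are taken as the user, and their centroid drives the character toward a target position. The depth units, range, and follow gain are assumptions; the paper itself feeds the depth video into the Unreal Engine.

```python
# Sketch: drive a 2D character position from depth frames by following the
# centroid of the pixels inside an assumed working depth range.
import numpy as np

def user_centroid(depth_frame, near_mm=500, far_mm=2500):
    """Return the (row, col) centroid of pixels inside the working depth range."""
    mask = (depth_frame > near_mm) & (depth_frame < far_mm)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def follow(character_pos, target, gain=0.2):
    """Move the character a fraction of the way toward the target each frame."""
    return character_pos + gain * (target - character_pos)

# Per frame: target = user_centroid(depth_frame); if target is not None: pos = follow(pos, target)
```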

깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법 (3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor)

  • 성만규
    • 한국멀티미디어학회논문지 / Vol. 15, No. 6 / pp.827-836 / 2012
  • Since the successful introduction of the Kinect, many interactive contents have been produced that use this sensor to control the movement of a 3D character corresponding to the user's avatar. However, due to the nature of the Kinect, the user must face the sensor directly, and the usable motions are limited to those that can be performed in place. This limitation is the fundamental reason why virtual environment navigation, one of the most important required features in games, cannot be performed. This paper proposes a new method to resolve this limitation. The method consists of two stages: in the first stage, walking-in-place gesture recognition is performed to identify the user's navigation intent. Once the navigation intent is detected, the second stage automatically separates the current walking-in-place motion into upper-body and lower-body motions, modifies pre-recorded lower-body motion-capture data to reflect the current character speed, and seamlessly replaces the separated original lower-body motion with it. With the proposed algorithm, the user's upper-body motion captured by the Kinect sensor is reflected as-is while the lower-body motion is replaced with an actual walking motion from motion-capture data, so the 3D character controlled by the user can navigate the virtual environment naturally.
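
The first stage, detecting the walking-in-place gesture that signals navigation intent, can be sketched from skeleton data alone: when the two knee heights oscillate with sufficient amplitude and in alternation, the user is treated as walking in place, and the step amplitude is mapped to a character speed for retiming the walk clip. The window length, thresholds, and speed mapping are illustrative assumptions, not the paper's exact classifier.

```python
# Sketch: walking-in-place detection from Kinect knee heights over a short window.
import numpy as np

def walking_in_place(left_knee_y, right_knee_y, min_amplitude=0.03):
    """left/right_knee_y: recent knee heights (metres) over a short time window."""
    l, r = np.asarray(left_knee_y), np.asarray(right_knee_y)
    amplitude_ok = np.ptp(l) > min_amplitude and np.ptp(r) > min_amplitude
    # Alternating steps make the two height signals anti-correlated.
    anti_correlated = np.corrcoef(l, r)[0, 1] < -0.3
    return amplitude_ok and anti_correlated

def walk_speed(left_knee_y, right_knee_y, gain=2.0):
    """Map step amplitude to the character speed used to retime the walk clip."""
    return gain * 0.5 * (np.ptp(left_knee_y) + np.ptp(right_knee_y))
```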

3차원 생물체 가시화 모델 구축장치 개발 및 성능평가 (Development and Evaluation of System for 3D Visualization Model of Biological Objects)

  • 황헌;최태현;김철수;이수희
    • Journal of Biosystems Engineering / Vol. 26, No. 6 / pp.545-552 / 2001
  • Nondestructive methods such as ultrasonic and magnetic resonance imaging systems have many advantages but are still expensive. They also do not give exact color information and may miss some details. If destroying a biological object to obtain interior and exterior information is acceptable, a 3D visualization model built from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D visualization system is presented. The system is composed of three modules. The first is the handling and image acquisition module. The handling module feeds and slices a cylinder of paraffin that holds a biological object inside, and the paraffin is kept solid by cooling while being handled. The image acquisition module captures the sectional images of the object embedded in the paraffin consecutively. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates a 3D volumetric model. To verify the conditions for uniform slicing, the normal force on the cutting edge was measured with a strain gauge at various cutting angles, and the amount of sliced chips was weighed and analyzed. Once the 3D model was constructed on the computer, the user could manipulate it with various transformations such as translation, rotation, and scaling, including arbitrary sectional views.
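
The "arbitrary sectional view" operation mentioned above amounts to resampling the reconstructed volume on an oblique plane; the sketch below does this with trilinear interpolation using a placeholder volume. The plane parameterization, output size, and interpolation order are arbitrary choices for illustration.

```python
# Sketch: resample an arbitrary sectional view from a reconstructed volume.
import numpy as np
from scipy.ndimage import map_coordinates

def sectional_view(volume, center, u, v, size=(200, 200), spacing=1.0):
    """Sample the volume on the plane center + s*u + t*v (u, v: unit 3-vectors)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    s = (np.arange(size[0]) - size[0] / 2) * spacing
    t = (np.arange(size[1]) - size[1] / 2) * spacing
    ss, tt = np.meshgrid(s, t, indexing="ij")
    coords = (np.asarray(center, float)[:, None, None]
              + u[:, None, None] * ss + v[:, None, None] * tt)  # shape (3, H, W)
    return map_coordinates(volume.astype(float), coords, order=1, cval=0.0)

# Example with a dummy volume: an oblique cut tilted 45 degrees about one axis.
# view = sectional_view(np.random.rand(100, 100, 100), center=(50, 50, 50),
#                       u=(1, 0, 0), v=(0, 0.7071, 0.7071))
```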


3D 입체 영상시스템의 좌-우 영상에 대한 실시간 동기 에러 검출 및 보정 (Real-time Temporal Synchronization and Compensation in Stereoscopic Video)

  • 김기석;조재수;이광순;이응돈
    • 방송공학회논문지 / Vol. 18, No. 5 / pp.680-690 / 2013
  • This paper proposes a method for detecting and compensating, in real time, the synchronization error between the left and right images of a stereoscopic 3D video system. In stereoscopic video, the left and right images can fall out of temporal synchronization for various reasons such as editing mistakes or transmission errors, degrading the quality of the 3D image. The goal of temporal synchronization is to detect the synchronization error between the two video sequences and to compensate for it by the detected amount. To solve this problem, this paper proposes an algorithm that detects and compensates the temporal synchronization error between the left and right images by augmenting the conventional spatiogram feature with additional features capturing changes in the color distribution and its spatial arrangement. To detect the synchronization error, matching (pair) frames between the left and right videos are first identified; because the change in a single frame is not distinct enough, a new method that compares fixed-length blocks of frames is proposed. Experiments on various 3D content videos demonstrate the effectiveness of the proposed method.
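
The block-wise matching idea can be illustrated with a much simpler per-frame feature than the paper's enriched spatiogram: describe each frame by a color histogram, compare the left and right sequences in fixed-length blocks rather than frame by frame, and pick the shift with the smallest average block distance. The histogram feature, block length, and search range below are assumptions for the sketch.

```python
# Sketch: estimate the left/right temporal offset by block-wise comparison of
# per-frame color histograms (a simplified stand-in for the paper's feature).
import cv2
import numpy as np

def frame_histograms(video_path, max_frames=300):
    """Per-frame 8x8x8 BGR histograms, normalized."""
    cap = cv2.VideoCapture(video_path)
    feats = []
    while len(feats) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        feats.append(cv2.normalize(h, None).flatten())
    cap.release()
    return np.array(feats)

def estimate_offset(left, right, block=15, max_shift=10):
    """Return the shift of `right` (in frames) that best aligns it with `left`."""
    best_shift, best_cost = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        lo, hi = max(0, shift), min(len(left), len(right) + shift)
        if hi - lo < block:
            continue
        d = np.linalg.norm(left[lo:hi] - right[lo - shift:hi - shift], axis=1)
        # Average over fixed-length blocks so no single frame dominates the cost.
        cost = np.mean([d[i:i + block].mean()
                        for i in range(0, len(d) - block + 1, block)])
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# offset = estimate_offset(frame_histograms("left.mp4"), frame_histograms("right.mp4"))
```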