• Title/Summary/Keyword: Simulator camera


Development of Road Safety Estimation Method using Driving Simulator and Eye Camera (차량시뮬레이터 및 아이카메라를 이용한 도로안전성 평가기법 개발)

  • Doh, Tcheol-Woong;Kim, Won-Keun
    • International Journal of Highway Engineering
    • /
    • v.7 no.4 s.26
    • /
    • pp.185-202
    • /
    • 2005
  • In this research, to overcome the restrictions of field experiments, we modeled a planned road in 3D virtual reality and, while test subjects drove a driving simulator car equipped with an eye camera, obtained data on the dynamic response to changes in road sections and on the drivers' visual behavior. We made constant efforts to reduce the unreality and side effects of the driving simulator by maximizing the agreement between the motion reproduction and the virtual reality, based on the data that the simulator's graphic module received from the dynamic analysis module. Moreover, we obtained data on drivers' natural visual behavior using an eye camera (FaceLAB) that allows experiments without attached equipment such as a helmet or lenses. To evaluate the level of road safety, we use the driving simulator and eye camera to capture how the safety perceived by drivers changes with the road's geometric structure, and we investigate the relationship between road geometry and safety level. Through this process, we suggest a method for evaluating, from the planning stage, whether a road will make drivers feel comfortable and at ease.

  • PDF

Analytic simulator and image generator of multiple-scattering Compton camera for prompt gamma ray imaging

  • Kim, Soo Mee
    • Biomedical Engineering Letters
    • /
    • v.8 no.4
    • /
    • pp.383-392
    • /
    • 2018
  • For prompt gamma ray imaging for biomedical applications and environmental radiation monitoring, we propose herein a multiple-scattering Compton camera (MSCC). An MSCC consists of three or more semiconductor layers with good energy resolution, and has potential for simultaneous detection and differentiation of multiple radio-isotopes based on the measured energies, as well as three-dimensional (3D) imaging of the radio-isotope distribution. In this study, we developed an analytic simulator and a 3D image generator for an MSCC, including the physical models of the radiation source emission and detection processes, which can be utilized for geometry and performance prediction prior to the construction of a real system. The analytic simulator for an MSCC records coincidence detections of successive interactions in multiple detector layers. In the successive interaction processes, the emission direction of the incident gamma ray, the scattering angle, and the changed traveling path after the Compton scattering interaction in each detector were determined by a conical-surface uniform random number generator (RNG) and by a Klein-Nishina RNG. The 3D image generator has two functions: recovery of the initial source energy spectrum and recovery of the 3D spatial distribution of the source. We evaluated the analytic simulator and image generator with two point radiation sources of different energies (Cs-137 and Co-60) and with an MSCC comprising three detector layers. The recovered initial energies of the incident radiations were well differentiated from the generated MSCC events, and correspondingly we could obtain a multi-tracer image that combined the two differentiated images. The developed analytic simulator emulates the randomness of the detection process of a multiple-scattering Compton camera, including the inherent degradation factors of the detectors, such as the limited spatial and energy resolutions. The Doppler-broadening effect owing to the momentum distribution of electrons in Compton scattering was not considered in the detection process, because most isotopes of interest for biomedical and environmental applications have high energies that are less sensitive to Doppler broadening. The analytic simulator and image generator for the MSCC can be utilized to determine the optimal geometrical parameters, such as the distances between detectors and the detector size, which affect the imaging performance of the Compton camera, prior to the development of a real system.
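
The entry above samples Compton scattering angles with a Klein-Nishina random number generator. As an illustration only (not the authors' code), a minimal Python sketch of such a sampler via rejection sampling is given below; the 511 keV electron rest energy and the bound of 2 on the unnormalized angular distribution are the only physics it relies on.

```python
import numpy as np

ELECTRON_REST_ENERGY_KEV = 511.0  # m_e * c^2 in keV

def klein_nishina_weight(cos_theta, energy_kev):
    """Unnormalized Klein-Nishina angular distribution for a photon of the
    given energy scattering through an angle whose cosine is cos_theta."""
    alpha = energy_kev / ELECTRON_REST_ENERGY_KEV
    p = 1.0 / (1.0 + alpha * (1.0 - cos_theta))  # scattered/incident energy ratio
    sin2 = 1.0 - cos_theta ** 2
    return p ** 2 * (p + 1.0 / p - sin2)

def sample_scattering_angle(energy_kev, rng=None):
    """Draw one Compton scattering angle (radians) by rejection sampling.
    The weight above never exceeds 2 (its value at theta = 0)."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        cos_theta = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) < klein_nishina_weight(cos_theta, energy_kev):
            return float(np.arccos(cos_theta))

# Example: a few scattering angles (degrees) for a Cs-137 photon (662 keV).
print([round(np.degrees(sample_scattering_angle(662.0)), 1) for _ in range(5)])
```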

A Web-Based Robot Simulator (웹 기반 로봇 시뮬레이터)

  • Hong, Soon-Hyuk;Lee, Sang-Hyun;Jeon, Jae-Wook;Yoon, Ji-Sup
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.3
    • /
    • pp.255-262
    • /
    • 2001
  • With the advancement of web-related technologies, many robot systems using these technologies, called web-based robots, enable the sharing of expensive equipment as well as the control of remote robots. However, none of the existing web-based robot systems include a robot simulator in the web browser that conveys appropriate information about the remote site to local users. In this paper, a web-based robot simulator is proposed and developed to control a remote robot over the web. The proposed simulator can transfer 3D information about the remote robot to local users using 3D graphics, which had not been done previously. It also sends camera images of the remote site to the local users, so that they can use these camera images together with the 3D information to control the remote robot.

  • PDF
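
The architecture described in the abstract above sends both the remote robot's 3D state and camera images of the remote site to local users. The following is only a loose, modern illustration of that idea, using a hypothetical pair of HTTP endpoints built on Python's standard library; the paper's actual implementation is not described here.

```python
# Hypothetical remote-side server: exposes the robot's joint state (for the
# local 3D view) and the latest camera frame over HTTP. The paths, joint
# names, and state variables are placeholders for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder state; a real system would update these from the robot
# controller and the camera driver.
JOINT_ANGLES_DEG = {"joint1": 0.0, "joint2": 45.0, "joint3": -30.0}
LATEST_JPEG_FRAME = b""  # raw bytes of the most recent camera image

class RemoteRobotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/joints":
            body = json.dumps(JOINT_ANGLES_DEG).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/camera.jpg":
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.end_headers()
            self.wfile.write(LATEST_JPEG_FRAME)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), RemoteRobotHandler).serve_forever()
```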

CG image product system by virtual control of simulator camera (시뮬레이터 카메라 가상제어형 VFX 영상제작시스템)

  • Du, Yi-chen;Dong, Xiao;Ko, Jae-Hyuk
    • Journal of Digital Convergence
    • /
    • v.15 no.3
    • /
    • pp.195-200
    • /
    • 2017
  • The purpose of this study is to provide a CG-based video production system with virtual control of a simulator camera. In the proposed system, the virtual camera of the graphics tool is synchronized with the simulator camera at the shooting site, and the editing of CG-composited footage can be carried out easily within the graphics tool by controlling the camera virtually through a control interface. Blaze Knights, an animation produced at Dongseo University, was used to demonstrate the advantage of the system by comparing the workflow and the amount of work before and after its application. According to the results, the CG artists' work efficiency increases while the physical constraints on shooting CG-composited scenes are minimized; a variety of scenes can be produced and rework is reduced, so we expect the contribution to the industry to be high.

Preliminary Study on Performance Evaluation of a Stacking-structure Compton Camera by Using Compton Imaging Simulator (Compton Imaging Simulator를 이용한 다층 구조 컴프턴 카메라 성능평가 예비 연구)

  • Lee, Se-Hyung;Park, Sung-Ho;Seo, Hee;Park, Jin-Hyung;Kim, Chan-Hyeong;Lee, Ju-Hahn;Lee, Chun-Sik;Lee, Jae-Sung
    • Progress in Medical Physics
    • /
    • v.20 no.2
    • /
    • pp.51-61
    • /
    • 2009
  • A Compton camera, which is based on the geometrical interpretation of Compton scattering, is a very promising gamma-ray imaging device considering its several advantages over conventional gamma-ray imaging devices: high imaging sensitivity, 3-D imaging capability from a fixed position, multi-tracing functionality, and almost no limitation on photon energy. In the present study, a Monte Carlo-based, user-friendly Compton imaging simulator was developed in the form of a graphical user interface (GUI) based on Geant4 and MATLAB. The simulator was tested against the experimental results of the double-scattering Compton camera under development at Hanyang University in Korea. The imaging resolution of the simulated Compton image agreed well with that of the measured image. The imaging sensitivity of the measured data was 2~3 times higher than that of the simulated data, because the measured data contain random coincidence events. The performance of a stacking-structure Compton camera was then evaluated using the simulator. The results show that the Compton camera achieves its highest performance when it uses 4 layers of scatterer detectors.

  • PDF
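
Both Compton-camera entries above rest on the geometrical interpretation of Compton scattering: each coincidence event constrains the source direction to a cone whose half-angle follows from the deposited energies. A small illustrative Python sketch of that kinematic step (not taken from either simulator; it assumes the absorber fully stops the scattered photon) is shown below.

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0  # m_e * c^2 in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Half-angle (radians) of the Compton cone for a two-interaction event.

    e_scatter_kev: energy deposited by the Compton scatter in the first layer
    e_absorb_kev : energy deposited in the absorbing layer (assumed to be the
                   full energy of the scattered photon)
    """
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically inconsistent event, e.g. incomplete absorption")
    return math.acos(cos_theta)

# Example: a 662 keV (Cs-137) photon depositing 200 keV in the scatterer layer.
print(math.degrees(compton_cone_angle(200.0, 462.0)))
```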

Test of Vision Stabilizer for Unmanned Vehicle Using Virtual Environment and 6 Axis Motion Simulator (가상 환경 및 6축 모션 시뮬레이터를 이용한 무인차량 영상 안정화 장치 시험)

  • Kim, Sunwoo;Ki, Sun-Ock;Kim, Sung-Soo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.2
    • /
    • pp.227-233
    • /
    • 2015
  • In this study, an indoor test environment was developed for studying the vision stabilizer of an unmanned vehicle, using a virtual environment and a 6-axis motion simulator. The real driving environment was replaced by a virtual environment based on the Aberdeen Proving Ground bump test course for military tank testing. The vehicle motion was reproduced by a 6-axis motion simulator. Virtual reality driving courses were displayed in front of the vision stabilizer, which was located on the top of the motion simulator. The performance of the stabilizer was investigated by checking the image of the camera, and the pitch and roll angles of the stabilizer captured by the IMU sensor of the camera.
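
The stabilizer test above judges performance from the camera image and from the pitch and roll angles reported by the camera's IMU. Purely as an illustrative aside (this is a generic static-tilt formula, not the authors' procedure), roll and pitch can be estimated from an accelerometer reading whenever the sensor is not otherwise accelerating.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate roll and pitch (radians) from a static accelerometer reading.

    ax, ay, az: specific-force components in the sensor frame (any consistent
    unit, e.g. m/s^2); valid only when the sensor is not accelerating, so the
    measured vector is gravity alone.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Example: sensor tilted so gravity has small x- and y-components.
roll, pitch = roll_pitch_from_accel(0.5, -0.3, 9.75)
print(math.degrees(roll), math.degrees(pitch))
```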

Implementation of Virtual Reality Immersion System using Motion Vectors (모션벡터를 이용한 가상현실 체험 시스템의 구현)

  • 서정만;정순기
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.3
    • /
    • pp.87-93
    • /
    • 2003
  • The purpose of this research is to develop a virtual reality system that lets users actually experience virtual reality through human vision. TSS (three-step search) was applied to trace the motion of the moving picture. By applying TSS, multiple motion vectors could be calculated from the moving picture, and the camera's motion parameters were then obtained from the relationships between those motion vectors. To create the immersive experience by synchronizing the camera's acceleration with the simulator's movements, the relationship between the camera's acceleration values and the simulator's movements was analyzed, and the result was applied to neural network training. It is shown that the proposed virtual reality immersion system can dynamically control the motion of the moving picture and can drive the simulator quite similarly to the actual motion of the moving picture.

  • PDF
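
The entry above applies TSS (three-step search) block matching to extract motion vectors from the video. As a sketch only, under the usual TSS assumptions (16x16 blocks and an initial step of 4 for a roughly ±7-pixel search range; none of these values are stated in the abstract), the per-block search can be written as follows.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(ref_frame, cur_frame, top, left, block=16, step=4):
    """Estimate the motion vector (dy, dx) of one block of the current frame
    with the three-step search: evaluate 9 candidates around the current best
    position, move there, halve the step, and repeat until the step reaches 1."""
    target = cur_frame[top:top + block, left:left + block]
    best_dy, best_dx = 0, 0
    while step >= 1:
        best_cost, move = None, (0, 0)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + best_dy + dy, left + best_dx + dx
                if y < 0 or x < 0 or y + block > ref_frame.shape[0] or x + block > ref_frame.shape[1]:
                    continue  # candidate block falls outside the reference frame
                cost = sad(target, ref_frame[y:y + block, x:x + block])
                if best_cost is None or cost < best_cost:
                    best_cost, move = cost, (dy, dx)
        best_dy, best_dx = best_dy + move[0], best_dx + move[1]
        step //= 2
    return best_dy, best_dx
```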

Simulator for Qualcomm Augmented Reality SDK (퀄컴 증강 현실 SDK 를 위한 시뮬레이터)

  • Tan, Chin Tong;Kang, Dae-Ki
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.75-77
    • /
    • 2011
  • A simulator for the Qualcomm Augmented Reality (QCAR) software development kit (SDK) is developed on Android. Its main purpose is to replace testing of applications on an actual Android device. Bugs in an application can be found more easily when testing is done in the simulator (with the support of the Android emulator) before testing on a real device. The simulator does not require a camera for testing augmented reality applications. The work included a study of the QCAR SDK's behavior to ensure that the simulator performs similarly to the SDK. A description of how the simulator works with the QCAR SDK is included in the paper.

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM (사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법)

  • Lee, Jae-Min;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.1
    • /
    • pp.33-40
    • /
    • 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. In order to fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured using a stereo camera. Using the properties of the line camera, the physical size of the rectangle feature can be derived from the distance. From the correspondence between the quadrilateral in the image and the rectangle in real-world coordinates, the relative pose between the camera and the feature can then be recovered by computing the homography. To evaluate the performance, we analyzed the results of the proposed method against reference poses in the Gazebo robot simulator.
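
The final step described above recovers the camera pose from the correspondence between the image quadrilateral and the metrically reconstructed rectangle. The sketch below is only a stand-in for that step, using a standard planar PnP solver from OpenCV rather than the paper's Coupled Line Camera pipeline; the rectangle size, corner coordinates, and camera intrinsics are placeholder values.

```python
import numpy as np
import cv2

# Physical corners of the rectangle feature (metres), in its own plane (z = 0).
# The width and height stand in for the size recovered from the stereo distance.
width, height = 0.40, 0.30
object_points = np.array([[0, 0, 0],
                          [width, 0, 0],
                          [width, height, 0],
                          [0, height, 0]], dtype=np.float64)

# Corresponding corners of the detected quadrilateral in the image (pixels).
image_points = np.array([[320, 210],
                         [470, 225],
                         [455, 340],
                         [310, 330]], dtype=np.float64)

# Assumed pinhole intrinsics with no lens distortion.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Planar PnP: recover the rotation and translation of the camera relative to
# the rectangle from the four corner correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)
print("rotation:\n", R, "\ntranslation:\n", tvec.ravel())
```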

Development of the Simulator for FPC-G, the Focal Plane Fine Guiding Camera for SPICA

  • Pyo, Jeonghyun;Jeong, Woong-Seob;Lee, Chol;Kim, Son-Goo;Lee, Dae-Hee
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.38 no.1
    • /
    • pp.76.2-76.2
    • /
    • 2013
  • SPICA (SPace Infrared Telescope for Cosmology and Astrophysics) is an infrared space observatory with a cooled telescope of 3 m aperture. Because of its large aperture, the near- and mid-infrared instruments onboard SPICA require fine guidance with an attitude accuracy of less than 0.1 arcsecond. The FPC-G is a focal plane camera intended to achieve this high attitude accuracy, and KASI is leading its development. The SPICA project is now in Risk Mitigation Phase 2 (RMP2), and one of the major risks is satisfying the requirement on pointing and attitude control. To assess the impact of disturbance sources on the attitude control and to devise methods to mitigate possible risks, a software simulator of the FPC-G is under development. In this presentation, we report the status of the simulator's development and the development plan during RMP2.

  • PDF