• Title/Summary/Keyword: driving image generation

Search results: 27

Development of car driving trainer under PC environment (PC 기반형 자동차 운전 연습기 개발)

  • Lee, Seung-Ho;Kim, Sung-Duck
    • Journal of Institute of Control, Robotics and Systems / v.3 no.4 / pp.415-421 / 1997
  • This paper describes a car driving trainer for beginners developed in a PC-based environment. The hardware is implemented as a practice car, and the trainer program is designed with a computer image generation method to display three-dimensional images on a CRT monitor. The trainer program consists of three main parts: a speed estimation part, a wheel trace calculation part, and a driving image generation part. A map editor is also included so that arbitrary test drives can be set up. Comparative evaluation verified that the developed car driving trainer performs well, offering lower cost, higher resolution, and faster image display.

  • PDF
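The paper does not publish its wheel-trace equations, so as a rough illustration only, a kinematic bicycle model (an assumption, not the authors' method) shows how an estimated speed and steering angle can be integrated into a wheel trace:

```python
import math

def wheel_trace_step(x, y, heading, speed, steer_angle, wheelbase, dt):
    """Advance the vehicle pose one time step with a kinematic bicycle model.

    x, y        : rear-axle position (m)
    heading     : yaw angle (rad)
    speed       : estimated vehicle speed (m/s)
    steer_angle : front-wheel steering angle (rad)
    wheelbase   : distance between axles (m)
    dt          : integration step (s)
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer_angle) * dt
    return x, y, heading

# Example: one second of straight driving at 10 m/s moves the car 10 m
# forward and leaves the heading unchanged.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = wheel_trace_step(*pose, speed=10.0, steer_angle=0.0,
                            wheelbase=2.5, dt=0.1)
```

Repeating this step each frame yields the trace that the driving image generation part would render.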

A Real-Time Graphic Driving Simulator Using Virtual Reality Technique (가상현실을 이용한 실시간 차량 그래픽 주행 시뮬레이터)

  • Jang, Jae-Won;Son, Kwon;Choi, Kyung-Hyun;Song, Nam-Yong
    • Journal of the Korean Society for Precision Engineering / v.17 no.7 / pp.80-89 / 2000
  • Driving simulators provide engineers with a powerful tool in the development and modification stages of vehicle models. One of the most important factors in realistic simulation is the fidelity obtained from a motion bed and a real-time visual image generation algorithm. Virtual reality technology has been widely used to enhance the fidelity of vehicle simulators. This paper develops the virtual environment for a visual system, such as a head-mounted display, for a vehicle driving simulator. Virtual vehicle and environment models are constructed using the object-oriented analysis and design approach. Based on the object model, a three-dimensional graphic model is completed with CAD tools such as Rhino and Pro/ENGINEER. For real-time image generation, the optimized IRIS Performer 3D graphics library is embedded with a multi-thread methodology. The developed software for a virtual driving simulator offers an effective interface to virtual reality devices.

  • PDF

Construction of Virtual Environment for a Vehicle Simulator (자동차 시뮬레이터의 가상환경 구성에 대한 연구)

  • Chang, Chea-Won;Son, Kwon;Choi, Kyung-Hyun
    • Transactions of the Korean Society of Automotive Engineers / v.8 no.4 / pp.158-168 / 2000
  • Vehicle driving simulators provide engineers with benefits in the development and modification of vehicle models. One of the most important factors in realistic simulation is the fidelity given by a motion system and a real-time visual image generation system. Virtual reality technology has been widely used to achieve high fidelity. In this paper, the virtual environment, including a visual system such as a head-mounted display, is developed for a vehicle driving simulator by employing virtual reality techniques. Virtual vehicle and environment models are constructed using the object-oriented analysis and design approach. According to the object model, a three-dimensional graphic model is developed with CAD tools such as Rhino and Pro/E. For real-time image generation, the optimized IRIS Performer 3D graphics library is embedded with a multi-thread methodology. Compared with the single-loop approach, the proposed methodology yields an acceptable image generation speed of 20 frames/s for the simulator.

  • PDF
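The two simulator papers above contrast a single-loop design with a multi-threaded one in which simulation and drawing run concurrently. The papers build on IRIS Performer; as a generic, library-free sketch of the same producer/consumer idea (thread and queue names are illustrative):

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=2)   # small buffer decouples the two loops
rendered = []

def simulation_loop(n_frames):
    # Produces scene states (here just frame indices) for the render thread.
    for i in range(n_frames):
        frame_queue.put({"frame": i})
    frame_queue.put(None)              # sentinel: simulation finished

def render_loop():
    # Consumes scene states and "draws" them; in the papers this role is
    # played by the IRIS Performer draw process.
    while True:
        state = frame_queue.get()
        if state is None:
            break
        rendered.append(state["frame"])

sim = threading.Thread(target=simulation_loop, args=(100,))
ren = threading.Thread(target=render_loop)
sim.start(); ren.start()
sim.join(); ren.join()
```

Because the queue is bounded, a slow renderer back-pressures the simulation instead of letting frames pile up, which is what keeps the image generation rate steady.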

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal / v.43 no.4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed in vision-based settings, so a fast execution speed can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with mathematical analysis, two keypoint sampling methods using the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
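The core projection behind an SRI maps each 3D point to an azimuth/elevation pixel storing its range. A minimal sketch (the field-of-view values are typical for a 64-beam sensor and are assumptions, not taken from the paper):

```python
import numpy as np

def to_spherical_range_image(points, h=64, w=1024,
                             fov_up=np.deg2rad(3.0),
                             fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) LiDAR point cloud onto an H x W spherical range image.

    Each pixel stores the range of the point that maps to it; empty pixels
    stay at 0.  Sparsity shows up as those empty pixels, which is the
    problem the paper's SRI generation method addresses.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)
    sri = np.zeros((h, w), dtype=np.float32)
    sri[v, u] = r                # later points overwrite earlier ones
    return sri

# A single point 10 m straight ahead lands in exactly one pixel.
sri = to_spherical_range_image(np.array([[10.0, 0.0, 0.0]]))
```

The resulting 2D image is what lets vision-style direct methods run on LiDAR data.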

Depth Generation using Bifocal Stereo Camera System for Autonomous Driving (자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법)

  • Lee, Eun-Kyung
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.6 / pp.1311-1316 / 2021
  • In this paper, we present a bifocal stereo camera system that combines two cameras with different focal lengths to generate stereoscopic images and their corresponding depth maps. To obtain depth data with the bifocal stereo camera system, we perform camera calibration to extract the internal and external parameters of each camera. Using these parameters, we calculate a common image plane and perform image rectification for depth-map generation. Finally, we use the semi-global matching (SGM) algorithm to generate the depth map. The proposed bifocal stereo camera system not only performs the cameras' own functions but also generates distance information about vehicles, pedestrians, and obstacles in the current driving environment, making it possible to design safer autonomous vehicles.
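After rectification and SGM matching, a disparity map converts to metric depth via Z = f·B/d. The calibration numbers below are illustrative, not the paper's; a minimal sketch of that last conversion step:

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) from a rectified stereo pair into
    metric depth using Z = f * B / d.

    disparity       : (H, W) array of matched disparities, in pixels
    focal_length_px : focal length of the rectified cameras, in pixels
    baseline_m      : distance between the two camera centers, in meters
    Pixels with zero disparity (no match) are returned as 0 depth.
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# A 50-pixel disparity with f = 1000 px and B = 0.5 m is 10 m away.
d = np.array([[50.0, 0.0]])
z = disparity_to_depth(d, focal_length_px=1000.0, baseline_m=0.5)
```

The inverse relation between disparity and depth is why rectification accuracy matters most for distant obstacles, where disparities are only a few pixels.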

Characteristics of Motion-blur Free TFT-LCD using Short Persistent CCFL in Blinking Backlight Driving

  • Han, Jeong-Min;Ok, Chul-Ho;Hwang, Jeoung-Yeon;Seo, Dae-Shik
    • Transactions on Electrical and Electronic Materials / v.8 no.4 / pp.166-169 / 2007
  • In applying LCDs to TV applications, one of the most significant factors to improve is image sticking in moving pictures. An LCD differs from a CRT in that it is a hold-type device, which holds an image for the entire frame period, whereas an impulse-type device generates an image in a very short time. To reduce the image-sticking problem associated with the hold-type display mode, we experimented with driving a TN-LCD like a CRT. We produced impulse-like images by turning the backlight on and off, and set the backlight on-off ratio by counting the on time and off time of the video signal input during one frame (16.7 ms). A conventional CCFL (cold cathode fluorescent lamp) cannot follow such fast on-off switching, so we evaluated new fluorescent substances for the light source to improve the residual-light characteristic of the CCFL. We achieved CRT-like impulse image generation through CCFL blinking drive and TN-LCD overdriving. As a result, the reduced image-sticking phenomenon was validated by the naked eye and by response-time measurement.
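The blinking drive splits each 16.7 ms frame of a 60 Hz signal into a lit and a dark interval. The paper does not state the exact duty ratio used, so the 0.5 below is illustrative only; the arithmetic itself is just:

```python
FRAME_PERIOD_MS = 1000.0 / 60.0   # one frame of a 60 Hz video signal, ~16.7 ms

def backlight_on_off_times(duty_ratio):
    """Split one frame period into backlight on/off intervals.

    duty_ratio is the fraction of the frame the CCFL stays lit (0..1).
    Returns (on_ms, off_ms).
    """
    if not 0.0 <= duty_ratio <= 1.0:
        raise ValueError("duty_ratio must be between 0 and 1")
    on_ms = FRAME_PERIOD_MS * duty_ratio
    return on_ms, FRAME_PERIOD_MS - on_ms

on_ms, off_ms = backlight_on_off_times(0.5)   # e.g. lit for half the frame
```

Shorter on-intervals approach impulse-type behavior but demand a lamp whose light decays quickly once switched off, which is why the residual-light characteristic of the CCFL phosphor mattered.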

Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo;Yoo, Kook-Yeol;Park, Ju H.;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.1 / pp.1-9 / 2019
  • In this paper, we propose a method for generating a 2D gray image from LiDAR 3D reflection intensity. The proposed method uses a fully convolutional network (FCN) to generate the gray image from the 2D reflection intensity projected from the LiDAR 3D intensity. Both the encoder and the decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer, and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method yields improvements of 8.56 dB in peak signal-to-noise ratio and 0.33 in structural similarity index measure compared with conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method can potentially be used as an assistance tool in night-time driving systems for autonomous vehicles.
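The 8.56 dB figure refers to peak signal-to-noise ratio, a standard full-reference image metric. A minimal sketch of the metric itself (the tiny 8-bit test images are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, generated, peak=255.0):
    """Peak signal-to-noise ratio between two gray images, in dB.

    PSNR = 10 * log10(peak^2 / MSE); higher is better, and identical
    images give infinity.
    """
    mse = np.mean((reference.astype(np.float64)
                   - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                     # one pixel off by 10 gray levels
value = psnr(a, b)
```

Because the scale is logarithmic, an 8.56 dB gain corresponds to roughly a sevenfold reduction in mean squared error.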

PathGAN: Local path planning with attentive generative adversarial networks

  • Dooseop Choi;Seung-Jun Han;Kyoung-Wook Min;Jeongdan Choi
    • ETRI Journal / v.44 no.6 / pp.1004-1019 / 2022
  • For autonomous driving without high-definition maps, we present a model capable of generating multiple plausible paths from egocentric images for autonomous vehicles. Our generative model comprises two neural networks: feature extraction network (FEN) and path generation network (PGN). The FEN extracts meaningful features from an egocentric image, whereas the PGN generates multiple paths from the features, given a driving intention and speed. To ensure that the paths generated are plausible and consistent with the intention, we introduce an attentive discriminator and train it with the PGN under a generative adversarial network framework. Furthermore, we devise an interaction model between the positions in the paths and the intentions hidden in the positions and design a novel PGN architecture that reflects the interaction model for improving the accuracy and diversity of the generated paths. Finally, we introduce ETRIDriving, a dataset for autonomous driving, in which the recorded sensor data are labeled with discrete high-level driving actions, and demonstrate the state-of-the-art performance of the proposed model on ETRIDriving in terms of accuracy and diversity.

Development of Real-time Traffic Information Generation Technology Using Traffic Infrastructure Sensor Fusion Technology (교통인프라 센서융합 기술을 활용한 실시간 교통정보 생성 기술 개발)

  • Sung Jin Kim;Su Ho Han;Gi Hoan Kim;Jung Rae Kim
    • Journal of Information Technology Services / v.22 no.2 / pp.57-70 / 2023
  • To establish an autonomous driving environment, traffic safety and demand prediction must be studied by analyzing information generated by the transportation infrastructure, rather than relying only on the vehicle's own sensors. In this paper, we propose a real-time traffic information generation method using the sensor fusion technology of the transportation infrastructure. The proposed method uses sensors such as cameras and radars installed in the transportation infrastructure to generate information according to each sensor's characteristics, such as the presence or absence of crosswalk pedestrians, crosswalk pause judgment, distance to the stop line, queue length, headway, and inter-vehicle distance. An experiment was conducted in a demonstration environment by comparing the proposed method with drone measurements. The results confirmed that pedestrians at crosswalks and pauses in front of crosswalks could be recognized, and most data, such as distance to the stop line and queue length, showed more than 95% accuracy, so the method was judged to be usable.
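Among the quantities above, headway is a standard traffic-flow measure: the following gap divided by the follower's speed. The paper does not give its exact definitions, so this is a generic illustration with made-up numbers:

```python
def time_headway(gap_m, follower_speed_mps):
    """Time headway (s) between two vehicles: the following distance
    divided by the follower's speed.

    A stopped follower has no meaningful headway, so infinity is
    returned in that case.
    """
    if follower_speed_mps <= 0:
        return float("inf")
    return gap_m / follower_speed_mps

# A 30 m gap at 15 m/s (54 km/h) is a 2-second headway.
h = time_headway(30.0, 15.0)
```

Infrastructure sensors can compute this per lane from the detected vehicle positions and speeds, which is what makes queue and headway statistics available without any on-vehicle sensing.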

Lane Departure Warning System using Deep Learning (딥러닝을 이용한 차로이탈 경고 시스템)

  • Choi, Seungwan;Lee, Keontae;Kim, Kwangsoo;Kwak, Sooyeong
    • Journal of Korea Society of Industrial Information Systems / v.24 no.2 / pp.25-31 / 2019
  • As artificial intelligence technology has developed rapidly, many researchers interested in next-generation vehicles have been studying how to apply it to advanced driver assistance systems (ADAS). In this paper, a method of applying a deep learning algorithm to the lane departure warning system, one of the main components of ADAS, is proposed. The performance of the proposed method was evaluated through comparative experiments with an existing algorithm based on line detection using image processing techniques. The experiments were carried out for two different driving situations, using image databases of driving on a highway and on urban streets. The experimental results showed that the proposed system has higher accuracy and precision than the existing method in both situations.