• Title/Summary/Keyword: Driver assistance system

Comparison of Deep Learning Algorithm in Bus Boarding Assistance System for the Visually Impaired using Deep Learning and Traffic Information Open API (딥러닝과 교통정보 Open API를 이용한 시각장애인 버스 탑승 보조 시스템에서 딥러닝 알고리즘 성능 비교)

  • Kim, Tae hong;Yeo, Gil Su;Jeong, Se Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.388-390
    • /
    • 2021
  • This paper introduces a system that helps visually impaired people board a bus, built on an embedded board with a keypad, dot matrix display, lidar sensor, and NFC reader, together with a public data portal Open API and a deep learning algorithm (YOLOv5). The user enters the desired bus number through the NFC reader and keypad, and the system retrieves the bus's location and expected arrival time from the Open API in real time and reads them out through voice output. In addition, displaying the bus number on the dot matrix helps the bus driver wait for the visually impaired passenger, while at the same time the deep learning algorithm (YOLOv5) recognizes the number of the bus stopping in real time and the distance to the bus is measured with a distance sensor such as the lidar.

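The abstract above describes querying a traffic information Open API and announcing the result by voice. A minimal sketch of that step, assuming a hypothetical XML response shape (the element names rtNm, arrmsg1, stNm are illustrative, in the style of Korean public data portal bus-arrival APIs, and are not taken from the paper):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload in the style of a public data portal
# bus-arrival Open API (element names are assumptions for illustration).
SAMPLE_RESPONSE = """
<ServiceResult>
  <msgBody>
    <itemList>
      <rtNm>110</rtNm>
      <arrmsg1>3 min 2 stops away</arrmsg1>
      <stNm>City Hall</stNm>
    </itemList>
  </msgBody>
</ServiceResult>
"""

def build_announcement(xml_text: str, wanted_route: str) -> str:
    """Extract arrival info for the route the user keyed in and
    format it as a sentence for the text-to-speech output."""
    root = ET.fromstring(xml_text)
    for item in root.iter("itemList"):
        if item.findtext("rtNm") == wanted_route:
            eta = item.findtext("arrmsg1")
            stop = item.findtext("stNm")
            return f"Bus {wanted_route} arriving at {stop}: {eta}"
    return f"No arrival information for bus {wanted_route}"

print(build_announcement(SAMPLE_RESPONSE, "110"))
# → Bus 110 arriving at City Hall: 3 min 2 stops away
```

In the real system the XML would come from an HTTP request to the portal and the string would be fed to a TTS engine.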

Technology Acceptance Modeling based on User Experience for Autonomous Vehicles

  • Cho, Yujun;Park, Jaekyu;Park, Sungjun;Jung, Eui S.
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.87-108
    • /
    • 2017
  • Objective: The purpose of this study was to conduct an acceptance study, based on automation level and user experience, of ADAS, the core technology of the autonomous vehicle, which past studies have lacked. The first objective was to construct an acceptance model for ADAS technology and to identify the factors that affect behavioral intention through user experience-based evaluation on a driving simulator. The second was to observe how these factors change across the automation levels of the autonomous vehicle through UX/UA scores. Background: The number of vehicles equipped with ADAS is increasing, and the interaction between vehicle and driver changes as individual driving functions are automated. It is therefore becoming important to study technology acceptance: how willingly drivers give up part of the driving task and hand authority over to the vehicle. Method: We organized the study model and items through a literature review, built scenarios for the four automation levels of the autonomous vehicle, and carried out the acceptance assessment on a driving simulator. A total of 68 men and women participated in the experiment. Results: Performance Expectancy (PE), Social Influence (SI), Perceived Safety (PS), Anxiety (AX), Trust (T), and Affective Satisfaction (AS) emerged as the factors that affect Behavioral Intention (BI). The UX/UA scores for these factors differ statistically significantly across the automation levels: UX/UA tends to rise up to automation level 2, drops to its lowest point at level 3, and increases slightly or stays steady at level 4.
Conclusion and Application: First, we presented an acceptance model of ADAS, the core technology of the autonomous vehicle, verified through user experience-based assessment on a driving simulator; it can serve as a basis for future acceptance studies of ADAS technology. Second, tracking how the factors change and predicting the acceptance level across the automation levels through UX/UA scores can guide appropriate ADAS development and help identify and avoid problems that lower acceptance. These results can be used by ADAS suppliers as tools to test the validity of a function before launching a product. They can also help prevent problems that might arise when autonomous vehicle technology is deployed and help establish technology that drivers accept readily, improving driver safety and convenience.

Fast On-Road Vehicle Detection Using Reduced Multivariate Polynomial Classifier (축소 다변수 다항식 분류기를 이용한 고속 차량 검출 방법)

  • Kim, Joong-Rock;Yu, Sun-Jin;Toh, Kar-Ann;Kim, Do-Hoon;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8A
    • /
    • pp.639-647
    • /
    • 2012
  • Vision-based on-road vehicle detection is one of the key techniques in automotive driver assistance systems. However, due to the huge within-class variability in vehicle appearance and environmental changes, it remains a challenging task to develop an accurate and reliable detection system. In general, a vehicle detection system consists of two steps. The candidate locations of vehicles are found in the Hypothesis Generation (HG) step, and the locations detected in the HG step are verified in the Hypothesis Verification (HV) step. Since the final decision is made in the HV step, the HV step is crucial for accurate detection. In this paper, we propose using a reduced multivariate polynomial pattern classifier (RM) for the HV step. Our experimental results show that the RM classifier outperforms the well-known Support Vector Machine (SVM) classifier, particularly in decision speed, making it suitable for real-time implementation.
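The reduced multivariate polynomial model family keeps the parameter count small by using per-feature powers plus powers of the feature sum instead of the full multinomial expansion, trained in closed form. A hedged toy sketch of that idea with ridge-regularized least squares (a stand-in illustration, not the paper's exact formulation or image features):

```python
import numpy as np

def rm_expand(X: np.ndarray, order: int = 2) -> np.ndarray:
    """Reduced multivariate polynomial expansion: element-wise powers
    of each feature plus powers of the feature sum (no cross terms)."""
    cols = [np.ones((X.shape[0], 1))]
    s = X.sum(axis=1, keepdims=True)
    for k in range(1, order + 1):
        cols.append(X ** k)   # per-feature powers
        cols.append(s ** k)   # powers of the summed features
    return np.hstack(cols)

def rm_train(X, y, order=2, ridge=1e-3):
    """Closed-form regularized least squares on the expanded features."""
    P = rm_expand(X, order)
    A = P.T @ P + ridge * np.eye(P.shape[1])
    return np.linalg.solve(A, P.T @ y)

def rm_predict(X, w, order=2):
    return np.sign(rm_expand(X, order) @ w)

# Toy two-class problem: points inside vs. outside a circle.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.where((X ** 2).sum(axis=1) < 0.5, 1.0, -1.0)
w = rm_train(X, y)
acc = (rm_predict(X, w) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The fast decision the paper highlights comes from the predictor being a single matrix-vector product over a short expanded feature vector.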

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.907-917
    • /
    • 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems, providing convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative-Fast Gradient Sign Method), against the object detection algorithm YOLOv5 (You Only Look Once) and measured object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations, removing noise and extracting boundaries, so that YOLO can detect objects normally. Analyzing the performance through experiments, YOLO's mAP dropped by at least 7.9% under adversarial attack, while YOLO with our proposed method applied detected objects at up to 87.3% mAP.
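The abstract names morphology operations as the denoising defense but does not give the exact pipeline. As a minimal illustration of the principle, a grayscale morphological opening (erosion followed by dilation) removes isolated bright perturbations while leaving flat regions untouched (pure-numpy sketch, not the paper's implementation):

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion with a k-by-k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Grayscale dilation: the dual of erosion (neighborhood max)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def opening(img, k=3):
    """Opening = erosion then dilation; suppresses small bright specks
    such as sparse adversarial noise."""
    return dilate(erode(img, k), k)

# A flat gray image with two isolated bright "adversarial" pixels.
img = np.full((16, 16), 100, dtype=np.uint8)
img[4, 4] = img[10, 12] = 255
clean = opening(img)
print(int(clean.max()))  # → 100: the specks are gone
```

In practice one would use an optimized library routine (e.g. OpenCV's morphologyEx) rather than the explicit loops above.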

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju;Kim, Sung-Hee;Park, Ji-Young;Son, Jin-Woo;Lee, Joong-Ryoul;Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.1
    • /
    • pp.23-31
    • /
    • 2010
  • Because a fisheye lens provides a super wide angle, with a field of view over 180 degrees, using the minimum number of cameras, many vehicles are adopting such camera systems. To use the camera not only as a viewing system but also as a camera sensor, camera calibration must come first, and geometric correction of the radial distortion is needed to provide images for driver assistance. In this paper, we introduce a geometric correction technique that minimizes image data loss for a vehicle fisheye lens with a field of view over 180° and asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established, and a corrected view is generated after the camera parameters are computed through a calibration process. First, the FOV model is used as the distortion model to imitate the asymmetric distortion. Then, because the horizontal view of the vehicle fisheye lens is asymmetrically wide for the driver, we unify the axis ratio and estimate the parameters with a non-linear optimization algorithm. Finally, we create the corrected view by backward mapping and provide a function to optimize the ratio of the horizontal and vertical axes. This minimizes image data loss and improves visual perception when the input image is undistorted through a perspective projection.
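The FOV distortion model referenced above has a simple closed form (Devernay-Faugeras). A sketch of the forward mapping and the inverse used in backward-mapping correction, with an assumed field of view; the paper's asymmetric variant would effectively use different axis scalings, which are omitted here:

```python
import numpy as np

def fov_distort(r_u, omega):
    """FOV distortion model: maps an undistorted radius r_u to the
    distorted radius r_d for a lens with field-of-view parameter omega."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """Inverse mapping, used when backward-mapping each pixel of the
    corrected view into the distorted source image."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

omega = np.deg2rad(160)           # an assumed wide field of view
r_u = np.linspace(0.0, 1.5, 7)    # sample undistorted radii
r_d = fov_distort(r_u, omega)
r_back = fov_undistort(r_d, omega)
print(np.allclose(r_u, r_back))   # → True: the mappings are inverses
```

Backward mapping applies fov_distort per target pixel radius to find the source sample, which avoids the holes a forward warp would leave.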

Development of Vehicle LDW Application Service using AUTOSAR Platform on Multi-Core MCU (멀티코어 상의 AUTOSAR 플랫폼을 활용한 차량용 LDW 응용 서비스 개발)

  • Park, Mi-Ryong;Kim, Dongwon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.4
    • /
    • pp.113-120
    • /
    • 2014
  • In this paper, we examine an asymmetric multi-processing environment to provide an LDW (Lane Departure Warning) service. The environment consists of a high-speed MCU for rapid image processing and a low-speed MCU for communicating with other ECUs in the control domain. We designed the rapid image processing application and the LDW application software component (SW-C) according to the AUTOSAR development process. For communication between the two MCUs, a timer-based polling IPC was designed. To communicate with other ECUs (Electronic Control Units), we designed CAN messages that provide alarm information and receive the turn signal. We confirmed the feasibility of developing various ADAS functions using an asymmetric multi-processing environment and the AUTOSAR platform, and we expect it to support ISO 26262 functional safety.
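The abstract mentions CAN messages carrying LDW alarm information, but the actual message definition is not given. As a purely hypothetical illustration of how such an 8-byte alarm payload might be packed and unpacked (the layout below is entirely assumed, not the paper's):

```python
import struct

# Hypothetical 8-byte CAN payload layout for an LDW alarm frame:
#   byte 0:   alarm flag (0 = none, 1 = left departure, 2 = right)
#   byte 1:   confidence (0-100)
#   bytes 2-3: lateral offset in millimetres, unsigned big-endian
#   bytes 4-7: reserved (zero padding)
def pack_ldw_alarm(side: int, confidence: int, offset_mm: int) -> bytes:
    return struct.pack(">BBHxxxx", side, confidence, offset_mm)

def unpack_ldw_alarm(payload: bytes):
    return struct.unpack(">BBHxxxx", payload)

frame = pack_ldw_alarm(1, 87, 350)   # left departure, 87%, 350 mm
print(len(frame), unpack_ldw_alarm(frame))  # → 8 (1, 87, 350)
```

In an AUTOSAR system this layout would instead be generated from the communication matrix (DBC/ARXML) rather than hand-written.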

Multiple Vehicles Tracking via sequential posterior estimation (순차적인 사후 추정에 의한 다중 차량 추적)

  • Lee, Won-Ju;Yoon, Chang-Young;Lee, Hee-Jin;Kim, Eun-Tai;Park, Mignon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.44 no.1
    • /
    • pp.40-49
    • /
    • 2007
  • In a visual driver-assistance system, separating moving objects from fixed objects is an important problem in maintaining multiple hypotheses for the state. Color- and edge-based trackers can often be 'distracted', causing them to track the wrong object. Many researchers have addressed this problem by using multiple features, as it is unlikely that all features will be distracted at the same time. In this paper, we improve the accuracy and robustness of real-time tracking by combining a color histogram feature with an optical-flow brightness feature under a sequential Monte Carlo framework. In addition, a fixed object is excluded from tracking over time by reducing its density through an adaptive particle number. This framework makes two main contributions: a prediction framework that separates moving objects from fixed objects, and a measurement framework that extracts information from the visual data under partial occlusion.
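The fusion idea above, weighting particles by the product of independent cue likelihoods so that one distracted cue rarely misleads the filter, can be sketched as a minimal 1D sequential Monte Carlo tracker. The cue models are simplified to Gaussians around noisy position measurements, which is an illustration, not the paper's actual color-histogram and optical-flow measurement models:

```python
import numpy as np

rng = np.random.default_rng(1)

def track(measurements_color, measurements_flow, n_particles=500):
    """Minimal particle filter fusing two cues: predict with a random
    walk, weight by the product of the two cue likelihoods, estimate
    as the weighted mean, then resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z_c, z_f in zip(measurements_color, measurements_flow):
        particles = particles + rng.normal(0.0, 0.3, n_particles)
        w = (np.exp(-0.5 * ((particles - z_c) / 0.5) ** 2) *
             np.exp(-0.5 * ((particles - z_f) / 0.5) ** 2))
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        particles = particles[idx]
    return estimates

# A target drifting from 0 to 5, observed by two noisy cues.
true_pos = np.linspace(0, 5, 20)
z_color = true_pos + rng.normal(0, 0.3, 20)
z_flow = true_pos + rng.normal(0, 0.3, 20)
est = track(z_color, z_flow)
err = float(np.abs(np.array(est) - true_pos).mean())
print(err < 0.5)  # → True: fused estimate stays close to the target
```

The paper's adaptive particle number would additionally shrink n_particles for hypotheses identified as fixed objects.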

Robust Lane Detection Method in Varying Road Conditions (도로 환경 변화에 강인한 차선 검출 방법)

  • Kim, Byeoung-Su;Kim, Whoi-Yul
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.88-93
    • /
    • 2012
  • Lane detection methods using a camera, a component of driver assistance systems, have advanced with the growth of vehicle technologies. However, lane detection often fails under varying road conditions such as rainy weather and degraded lane markings. This paper proposes a lane detection method that is robust to varying road conditions. Lane candidates are extracted by intensity comparison and a lane detection filter. The Hough transform is applied to the lane candidates, which appear as straight lines in the image, to compute the lane pair. A curved lane is then fitted using the B-Snake algorithm. In addition, a weighting value computed from the previous detection result allows lanes to be detected even under varying road conditions such as degraded or missing markings. Experimental results show that, thanks to this weighting process, the proposed method detects lanes even in challenging road conditions.

Video Based Tail-Lights Status Recognition Algorithm (영상기반 차량 후미등 상태 인식 알고리즘)

  • Kim, Gyu-Yeong;Lee, Geun-Hoo;Do, Jin-Kyu;Park, Keun-Soo;Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.10
    • /
    • pp.1443-1449
    • /
    • 2013
  • Automatic detection of vehicles ahead is an integral component of many advanced driver-assistance systems, such as collision mitigation, automatic cruise control, and automatic head-lamp dimming. Day or night, tail-lights play an important role in detecting vehicles ahead and recognizing their driving status. However, some drivers are unaware of the status of their vehicle's tail-lights, so it is useful to inform drivers of tail-light status automatically. In this paper, a method for recognizing tail-light status based on video processing and recognition technology is proposed. Background estimation, optical flow, and Euclidean distance are used to detect vehicles entering a tollgate. A saliency map is then used to detect the tail-lights and recognize their status in the Lab color coordinates. Experiments with tollgate videos show that the proposed method can be used to report tail-light status.
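Working in Lab color coordinates, as the abstract describes, makes red tail-lights easy to separate: lit red pixels sit at strongly positive a*. A sketch of that color cue alone, using the standard sRGB-to-XYZ-to-Lab formulas (the paper combines this with a saliency map, which is omitted here, and the threshold value is an assumption):

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert one sRGB pixel (floats in 0-1) to CIE Lab under D65."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse sRGB gamma
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])   # D65 white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return L, a, b

def looks_like_tail_light(rgb, a_threshold=40.0):
    """Red pixels have a strongly positive a* channel in Lab space."""
    _, a, _ = rgb_to_lab(rgb)
    return a > a_threshold

print(looks_like_tail_light([0.9, 0.1, 0.1]))  # bright red → True
print(looks_like_tail_light([0.5, 0.5, 0.5]))  # gray → False
```

Restricting the check to high-saliency regions, as the paper does, avoids false positives from red paint or signage.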

Interactive ADAS development and verification framework based on 3D car simulator (3D 자동차 시뮬레이터 기반 상호작용형 ADAS 개발 및 검증 프레임워크)

  • Cho, Deun-Sol;Jung, Sei-Youl;Kim, Hyeong-Su;Lee, Seung-gi;Kim, Won-Tae
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.970-977
    • /
    • 2018
  • The autonomous vehicle is based on an advanced driver assistance system (ADAS) consisting of sensors that collect information about the surrounding environment and a control module that makes decisions on the measured data. As interest in autonomous driving technology has grown recently, an accessible development framework for ADAS beginners and learners is needed. However, existing development and verification methods are based on high-performance vehicle simulators, with drawbacks such as complex verification procedures and high cost. Moreover, most schemes do not provide the sensing data required by the ADAS directly from the simulator, which limits verification reliability. In this paper, we present an interactive ADAS development and verification framework using a 3D vehicle simulator that overcomes the problems of existing methods. An ADAS with image recognition-based artificial intelligence was implemented as a virtual sensor in the 3D car simulator, and autonomous driving was verified in real scenarios.
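The sensor-plus-control-module structure described above can be sketched as a minimal interactive loop between a virtual sensor and an ADAS module under test. All names and the control logic below are illustrative, not the framework's actual API:

```python
class VirtualCameraSensor:
    """Stands in for the 3D simulator's rendered camera feed; here the
    'frames' are pre-computed lane offsets in metres for simplicity."""
    def __init__(self, frames):
        self.frames = list(frames)

    def read(self):
        return self.frames.pop(0) if self.frames else None

class LaneKeepAssist:
    """Toy ADAS module: steers back toward the lane centre whenever
    the offset exceeds a dead band of 0.3 m."""
    def decide(self, lane_offset_m):
        if lane_offset_m > 0.3:
            return "steer_left"
        if lane_offset_m < -0.3:
            return "steer_right"
        return "hold"

sensor = VirtualCameraSensor(frames=[0.0, 0.5, -0.6])
adas = LaneKeepAssist()
decisions = []
while (frame := sensor.read()) is not None:
    decisions.append(adas.decide(frame))
print(decisions)  # → ['hold', 'steer_left', 'steer_right']
```

The point of the framework is that the simulator supplies exactly the sensing data the ADAS consumes, so the module can be swapped for a real image-recognition model without changing the loop.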