• Title/Summary/Keyword: Vehicle camera system

Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric) (KUVE (KIST 무인 주행 전기 자동차)의 자율 주행)

  • Chun, Chang-Mook;Suh, Seung-Beum;Lee, Sang-Hoon;Roh, Chi-Won;Kang, Sung-Chul;Kang, Yeon-Sik
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.617-624 / 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its unmanned autonomous navigation on the KIST campus. KUVE, an electric light-duty vehicle, is equipped with two laser range finders, a vision camera, a differential GPS (DGPS) system, an inertial measurement unit (IMU), odometers, and control computers for autonomous navigation. KUVE estimates and tracks road boundaries such as curbs and lane markings using a laser range finder and a vision camera. When no road boundary is detectable, it follows a predetermined trajectory using the DGPS, IMU, and odometers. KUVE achieves over an 80% success rate of autonomous navigation at KIST.
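
The navigation strategy described in this abstract, following a detected road boundary and falling back to the DGPS/IMU trajectory otherwise, can be sketched as a simple mode selector. All names and the stop condition below are illustrative assumptions, not details from the paper:

```python
from typing import Optional

def choose_navigation_mode(boundary_offset_m: Optional[float],
                           on_trajectory: bool) -> str:
    """Pick a navigation mode in the spirit of KUVE's strategy.

    boundary_offset_m: lateral offset to a detected curb/lane boundary
    (None when the laser range finder and camera find no boundary).
    Names and the stop fallback are illustrative, not from the paper.
    """
    if boundary_offset_m is not None:
        return "boundary_following"   # track curb/line from LRF + camera
    if on_trajectory:
        return "waypoint_following"   # DGPS + IMU + odometry fallback
    return "stop"                     # no boundary and lost localization

# Boundary detected -> follow it; otherwise fall back to the trajectory
assert choose_navigation_mode(0.8, True) == "boundary_following"
assert choose_navigation_mode(None, True) == "waypoint_following"
```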

Study on Algorithm of High-Speed Scanning System for Railway Vehicle Running Units Using High Performance Camera (고성능 카메라를 이용한 철도차량 주행장치용 고속스케닝시스템 알고리즘에 관한 연구)

  • Huh, Sung Bum;Lee, Hi Sung
    • Journal of the Korean Society of Safety / v.35 no.4 / pp.9-14 / 2020
  • A non-contact high-speed scanning system capable of real-time measurement is needed to prevent the dropping and deformation of the main parts of railway vehicles during high-speed running. Recently, research on scanning systems that detect the deformation state of main parts from video images taken with a high-performance camera has been actively pursued. In this study, we researched an analysis algorithm for a high-speed scanning system that uses a high-performance camera to monitor, in real time, the deformation and drop-out state of the main components of the running units.

Nearby Vehicle Detection in the Adjacent Lane using In-vehicle Front View Camera (차량용 전방 카메라를 이용한 근거리 옆 차선 차량 검출)

  • Baek, Yeul-Min;Lee, Gwang-Gook;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.996-1003 / 2012
  • We present a method for detecting nearby vehicles in the adjacent lane using an in-vehicle front-view camera. Nearby vehicles in adjacent lanes show various appearances according to their positions relative to the host vehicle. Therefore, most conventional methods use motion information to detect them. However, such methods can only detect overtaking vehicles that are faster than the host vehicle. To solve this problem, we use features of the regions where nearby vehicles can appear. Consequently, our method can detect not only overtaking vehicles but also stationary and same-speed vehicles in adjacent lanes. In our experiments, we validated the method under various weather and road conditions and in a real-time implementation.

A Study on the ACC Safety Evaluation Method Using Dual Cameras (듀얼카메라를 활용한 ACC 안전성 평가 방법에 관한 연구)

  • Kim, Bong-Ju;Lee, Seon-Bong
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.57-69 / 2022
  • Recently, as interest in self-driving cars has increased worldwide, research and development on Advanced Driver Assistance Systems (ADAS) is actively underway. Among these, the purpose of Adaptive Cruise Control (ACC) is to minimize the driver's fatigue by controlling the vehicle's longitudinal speed and relative distance. In this study, to examine ACC testing in a real environment, real-road tests were conducted based on the domestic-road test scenarios proposed in a preceding study, considering the ISO 15622 test method. The distance measurement method using the dual camera was verified by comparing its results with those of reference measurement equipment. Two results could be derived from the comparison. First, the relative distance after the ACC stabilized was compared: the minimum error rate was 0.251% in the first test of scenario 8, and the maximum error rate was 4.202% in the third test of scenario 9. Second, results at the same time instants were compared: the minimum error rate was 0.000% in the second test of scenario 10, and the maximum error rate was 9.945% in the second test of scenario 1. However, the average error rate across all scenarios was within 3%. The largest errors were attributed to the dual camera installed in the test vehicle: shaking caused by road-surface vibration and air resistance during driving, changes in ambient brightness, and the video focusing process corrupted the distance calculated to the preceding vehicle in the affected images. These results suggest that, in the development stage of ADAS functions such as ACC, dual cameras alone can reduce the cost burden of testing.
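
The two quantities at the heart of this comparison, a stereo (dual-camera) range estimate and its error rate against the reference equipment, follow standard formulas. The calibration values below are hypothetical, and the paper does not publish its exact computation; this is only a sketch of the underlying arithmetic:

```python
def stereo_distance_m(focal_px: float, baseline_m: float,
                      disparity_px: float) -> float:
    """Classic pinhole stereo range: Z = f * B / d.
    focal_px, baseline_m, disparity_px are hypothetical calibration values."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def error_rate_percent(measured_m: float, reference_m: float) -> float:
    """Error rate against the reference measurement equipment."""
    return abs(measured_m - reference_m) / reference_m * 100.0

z = stereo_distance_m(focal_px=1400.0, baseline_m=0.3, disparity_px=21.0)
print(round(z, 2))                            # 20.0 (metres)
print(round(error_rate_percent(z, 20.5), 3))  # 2.439 (% vs. reference)
```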

Convergence CCTV camera embedded with Deep Learning SW technology (딥러닝 SW 기술을 이용한 임베디드형 융합 CCTV 카메라)

  • Son, Kyong-Sik;Kim, Jong-Won;Lim, Jae-Hyun
    • Journal of the Korea Convergence Society / v.10 no.1 / pp.103-113 / 2019
  • A license plate recognition camera is a dedicated device designed to acquire images of a target vehicle for recognizing the letters and numbers on its license plate. It is mostly used as part of a system combined with a server and an image analysis module rather than on its own. However, building a system for vehicle license plate recognition is costly, because it requires a facility with a server for managing and analyzing the captured images and an image analysis module for extracting and recognizing the plate's numbers and characters. In this study, we developed an embedded convergence camera (edge-based) that extends the camera's role so that both license plate recognition and the security CCTV function are performed within the camera itself. This embedded convergence camera, equipped with a high-resolution 4K IP camera for clear image acquisition and fast data transmission, extracts the license plate area by applying YOLO, deep-learning software for multi-object recognition based on an open-source neural network algorithm, and then detects the numbers and characters on the plate. We verified the detection and recognition accuracy and confirmed that this camera can successfully perform both the CCTV security function and the vehicle license plate recognition function.

The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for a Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.2 / pp.164-171 / 2010
  • We have studied port-automation systems, which are demanded by the steep increase in the cost and complexity of freight handling. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera inherently suffers optical distortion and is sensitive to external light, weather, and shadows, but it is very cheap and flexible for building a port automation system, so we applied a CCD camera to the AGV for detecting and tracking the lane. In order to make the tracking error stable and exact, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented in software and are very easy to use in various autonomous systems. Because the computational load is light, the AGV utilizes the maximal performance of the CCD camera and the CPU can handle multiple tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and exact lane-tracking performance.
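
The VPSA idea, sampling virtual photo-sensors across the image instead of mounting physical ones, can be illustrated as a weighted-centroid error over one image row. The threshold and error definition below are our own assumptions, not the paper's exact formulation:

```python
def vpsa_error(row, threshold=128):
    """Lane-tracking error from a virtual photo-sensor array.

    `row` is a list of grayscale intensities sampled across one image
    row; a virtual sensor "fires" when it sees the bright lane mark
    (intensity > threshold). The error is the firing sensors' centroid
    offset from the array center, in sensor units (positive = lane is
    to the right). Illustrative sketch only.
    """
    fired = [i for i, v in enumerate(row) if v > threshold]
    if not fired:
        return None  # no lane seen: caller should reuse the last error
    center = (len(row) - 1) / 2.0
    return sum(fired) / len(fired) - center

# 9 virtual sensors; the bright lane mark sits right of center
row = [10, 10, 10, 10, 10, 200, 210, 10, 10]
assert vpsa_error(row) == 1.5   # centroid at 5.5, center at 4.0
```

Because the "sensors" are just pixel lookups, the per-frame cost is negligible, which matches the abstract's point about leaving CPU headroom for other tasks.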

Improved Object Recognition using Multi-view Camera for ADAS (ADAS용 다중화각 카메라를 이용한 객체 인식 향상)

  • Park, Dong-hun;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.573-579 / 2019
  • To achieve fully autonomous driving, perception of the surrounding environment must be superior to that of humans. The 60° narrow-angle and 120° wide-angle cameras primarily used in autonomous driving each have disadvantages depending on the viewing angle. This paper uses a multi-view object recognition system to overcome the respective disadvantages of wide- and narrow-angle cameras. In addition, the aspect ratios of the data acquired with the wide- and narrow-angle cameras were analyzed to modify the SSD (Single Shot Detector) algorithm, and the acquired data were used for training, achieving higher performance than with a monocular camera alone.
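
One plausible form of the aspect-ratio analysis mentioned above is to collect width/height ratios of ground-truth boxes per camera and use summary statistics to seed SSD's anchor ratios. This is a sketch under our own assumptions, not the authors' published procedure, and the box sizes are made up:

```python
from statistics import median

def anchor_aspect_ratios(boxes):
    """Summarize ground-truth box shapes from one camera's dataset.

    boxes: list of (width, height) box sizes.
    Returns (min, median, max) width/height ratios, which could seed
    SSD's per-layer anchor aspect ratios. Illustrative only.
    """
    ratios = sorted(w / h for w, h in boxes)
    return ratios[0], median(ratios), ratios[-1]

# Hypothetical boxes from a wide-angle camera
wide = [(120, 80), (90, 90), (200, 100), (60, 40)]
assert anchor_aspect_ratios(wide) == (1.0, 1.5, 2.0)
```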

Red Light Running Enforcement System Using Real Time Individual Vehicle Tracking

  • Lim, Dae-Woon;Jun, Joon-Suk;Park, Sung-Hoon
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2002.10a / pp.115.5-115 / 2002
  • In this paper, we introduce a system that detects all kinds of violations at a street intersection, such as red light running, speeding, stop-line violations, and lane violations, by tracking individual vehicles. Two cameras are used for detecting violations: an analog camera for real-time tracking and a digital camera for license plate reading. The system is connected to the traffic signal controller and monitors the red, arrow, yellow, and green phases of an approach. Two loops in the road are used to detect vehicle approach and speed. The system takes pictures of all vehicles passing the second loop and tracks them until they exit the intersection...
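
The core red-light-running decision described above, combining the monitored signal phase with per-vehicle tracking past the stop line, reduces to a small predicate. The field names are illustrative; the paper's system additionally uses the two road loops for approach and speed detection:

```python
def is_red_light_violation(phase: str, crossed_stop_line: bool,
                           entered_intersection: bool) -> bool:
    """Flag a red-light-running violation.

    A tracked vehicle violates when it crosses the stop line and
    proceeds into the intersection while its approach's signal phase
    is red. Field names are illustrative assumptions.
    """
    return phase == "red" and crossed_stop_line and entered_intersection

assert is_red_light_violation("red", True, True)
assert not is_red_light_violation("green", True, True)
```

In the described system this predicate would be evaluated per tracked vehicle, with the plate-reading camera triggered only when it returns true.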

A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • Lee, Jin-Woo;Lee, Young-Jin;Lee, Kwon-Soon
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2000.11a / pp.207-217 / 2000
  • In this paper, we describe an image processing algorithm that can recognize the road lane and the interrelation between an AGV and other vehicles. We conducted AGV driving tests with a color CCD camera mounted on top of the vehicle to acquire the digital signal. The work consists of two parts. One is an image preprocessing part that measures the condition of the lane and the vehicle; it extracts line information using an RGB-ratio cutting algorithm, edge detection, and the Hough transform. The other part obtains the situation of other vehicles using image processing and a viewport. First, the 2-D image information derived from the vision sensor is interpreted as 3-D information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions, namely the heading angle, the distance error, and the real positions of other vehicles, the reference steering angle can be calculated.
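
The final step above, turning a heading-angle error and a lateral-distance error into a reference steering angle, is commonly done with a Stanley-style control law. The paper does not specify its formula, so the gain and function below are purely illustrative:

```python
import math

def reference_steering_rad(heading_error_rad: float,
                           lateral_error_m: float,
                           speed_mps: float,
                           gain: float = 1.0) -> float:
    """Stanley-style steering: correct heading, then steer toward the lane.

    heading_error_rad: angle between vehicle heading and lane direction.
    lateral_error_m:   signed offset from the lane center.
    A generic control law used for illustration; not the paper's method.
    """
    # atan term steers harder at low speed; clamp speed to avoid blow-up
    return heading_error_rad + math.atan2(gain * lateral_error_m,
                                          max(speed_mps, 0.1))

# Centered on the lane with no heading error -> steer straight
assert reference_steering_rad(0.0, 0.0, 5.0) == 0.0
```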

Real-time Camera and Video Streaming Through Optimized Settings of Ethernet AVB in Vehicle Network System

  • An, Byoungman;Kim, Youngseop
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3025-3047 / 2021
  • This paper presents the latest Ethernet standardization for in-vehicle networks and future trends in automotive Ethernet technology. The proposed system provides design and optimization algorithms for automotive networking related to AVB (Audio Video Bridging) technology. We present a design of an in-vehicle network system as well as the optimization of AVB for automotive use. The proposed Reduced Latency of Machine to Machine (RLMM) approach plays an outstanding role in reducing latency among devices; in real-world experimental cases it reduced latency by around 41.2%. The setup optimized for the automotive network environment is expected to significantly reduce time in the development and design process. The results obtained for image transmission latency are trustworthy because average values were collected over a long period. Analyzing latency between multimedia devices within a limited time will be of considerable benefit to the industry. Furthermore, the proposed reliable camera and video streaming through optimized AVB device settings would strongly support real-time comprehension and analysis of images with AI (Artificial Intelligence) algorithms in autonomous driving.
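
A latency-reduction figure like the ~41.2% quoted above is conventionally the relative change between the average latencies of two long measurement series. The sample values below are made up; only the metric itself is shown:

```python
def latency_reduction_percent(baseline_ms, optimized_ms):
    """Average-latency reduction between two measurement series.

    baseline_ms / optimized_ms: per-frame latencies (milliseconds)
    collected before and after optimization. Sample data below are
    hypothetical, not the paper's measurements.
    """
    avg = lambda xs: sum(xs) / len(xs)
    return (avg(baseline_ms) - avg(optimized_ms)) / avg(baseline_ms) * 100.0

baseline = [10.0, 12.0, 11.0, 9.0]   # ms, hypothetical pre-optimization runs
optimized = [6.0, 7.0, 6.5, 5.5]     # ms, hypothetical RLMM runs
assert round(latency_reduction_percent(baseline, optimized), 1) == 40.5
```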