• Title/Summary/Keyword: vehicle-mounted camera

Lane Detection for Parking Violation Assessments

  • Kim, A-Ram;Rhee, Sang-Yong;Jang, Hyeon-Woong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.1 / pp.13-20 / 2016
  • In this study, we propose a method to regulate parking violations using computer vision technology. A still color image of the parked vehicle in question is obtained by a camera mounted on enforcement vehicles. The acquired image is preprocessed with a morphological algorithm and binarized. The vehicle's shadow is detected from the binarized image, and lanes are identified using the yellow parking lines drawn on the road. Whether the parking is illegal is determined from the conformity between the lanes and the vehicle's shadow.
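
For illustration, a minimal OpenCV-style sketch of the kind of pipeline this abstract describes (morphological opening, binarization for shadows, yellow-line extraction, and a simple overlap check). The thresholds, HSV bounds, and the conformity ratio are illustrative assumptions, not the authors' published parameters.

```python
import cv2

def parking_line_conformity(bgr_image):
    """Illustrative pipeline: binarize for shadows, extract yellow parking lines,
    and measure how much of the shadow overlaps the lane region."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Morphological opening to suppress small noise before thresholding
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)

    # Dark regions are treated as candidate vehicle shadows (threshold is an assumption)
    _, shadow_mask = cv2.threshold(opened, 60, 255, cv2.THRESH_BINARY_INV)

    # Yellow parking lines isolated in HSV space (hue/saturation bounds are assumptions)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    line_mask = cv2.inRange(hsv, (15, 80, 80), (35, 255, 255))

    # Simple conformity proxy: fraction of the shadow falling inside the lane region
    lane_region = cv2.dilate(line_mask, kernel, iterations=10)
    overlap = cv2.countNonZero(cv2.bitwise_and(shadow_mask, lane_region))
    shadow_area = max(cv2.countNonZero(shadow_mask), 1)
    return overlap / shadow_area  # a large ratio suggests the vehicle sits on the parking line
```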

Design of a Wheeled Blimp

  • Sungchul Kang;Mihee Nam;Park, Changwoo;Kim, Munsang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.30.5-30 / 2001
  • This paper describes a new design of a blimp with a wheeled vehicle part. The system can work both on the ground, using the wheeled vehicle, and in the air, using the floating capability of the blimp part. The passive wheeled mechanism in the vehicle part enables stable take-off and landing and greatly helps the system hold a stationary position on the floor. On the other hand, the floating capability enables the wheeled blimp to fly freely regardless of the ground condition or obstacles. The wheeled blimp can be used as an agent robot for tele-presence applications. Using multimedia devices such as a camera, speaker, LCD, and microphone mounted on the blimp surface, the system can gather the necessary information at the local site and communicate with people from a distance. As a typical tele-presence application, the wheeled blimp is currently being developed into a tele-guidance robot working in public indoor areas such as exhibition halls, department stores, hospitals, etc.


A Study on the Optimization Conditions for the Mounted Cameras on the Unmanned Aerial Vehicles(UAV) for Photogrammetry and Observations (무인비행장치용 측량 및 관측용 탑재 카메라의 최적화 조건 연구)

  • Hee-Woo Lee;Ho-Woong Shon;Tae-Hoon Kim
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_2 / pp.1063-1071 / 2023
  • Unmanned aerial vehicles (UAVs, drones) are becoming increasingly useful in a variety of fields. Advances in UAV and camera technology have made it possible to equip them with ultra-high-resolution sensors and to capture images at low altitudes, which has improved the reliability and classification accuracy of object identification on the ground. In this study, we identified the characteristics of various onboard sensors, analysed the image quality (discrimination resolution) of aerial photography results obtained with UAVs, and calculated the shooting conditions needed to obtain the discrimination resolution required for reading ground objects. The distinctive contribution of this study is the derivation of sensor-specific performance metrics (GRD/GSD), which show that as the GSD increases with altitude, the GRD value increases correspondingly.
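
For reference, a hedged sketch of the standard ground sample distance (GSD) relation this kind of analysis builds on, with GRD modelled as GSD scaled by an image-quality factor. The pixel pitch, focal length, altitude, and quality factor below are placeholder values, not the paper's sensor data.

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, altitude_m):
    """Pinhole relation: GSD (m/pixel) = pixel pitch * altitude / focal length."""
    return pixel_pitch_mm * altitude_m / focal_length_mm

def ground_resolved_distance(gsd_m, quality_factor=1.5):
    """GRD approximated as GSD times an image-quality factor (the factor is an assumption)."""
    return quality_factor * gsd_m

# Example: 3.3 um pixels, 8.8 mm lens, flown at 100 m -> GSD ~ 3.75 cm/pixel
gsd = ground_sample_distance(0.0033, 8.8, 100.0)
print(f"GSD = {gsd:.4f} m/px, GRD ~ {ground_resolved_distance(gsd):.4f} m")
```

Because GSD grows linearly with altitude, any GRD defined this way grows with altitude as well, which matches the trend reported in the abstract.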

A vision-based system for inspection of expansion joints in concrete pavement

  • Jung Hee Lee;Ibragimov Eldor;Heungbae Gil;Jong-Jae Lee
    • Smart Structures and Systems / v.32 no.5 / pp.309-318 / 2023
  • Appropriate maintenance of highway roads is critical for the safe operation of road networks and reduces maintenance costs. Multiple methods have been developed to investigate road surfaces for various types of cracks, potholes, and other damage. Like road surface damage, the condition of expansion joints in concrete pavement is important for avoiding unexpected hazardous situations. Thus, in this study, a new system is proposed for autonomous expansion joint monitoring using a vision-based system. The system consists of the following three key parts: (1) a camera-mounted vehicle, (2) indication marks on the expansion joints, and (3) a deep learning-based automatic evaluation algorithm. Paired marks indicating the expansion joints in the concrete pavement allow the joints to be detected automatically. An inspection vehicle is equipped with an action camera that acquires images of the expansion joints in the road. You Only Look Once (YOLO) automatically detects the expansion joints with indication marks, with a detection accuracy of 95%. The width of the detected expansion joint is calculated using an image processing algorithm. Based on the calculated width, the expansion joint is classified into one of two types: normal or dangerous. The obtained results demonstrate that the proposed system is very efficient in terms of speed and accuracy.
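
A minimal sketch of the width-to-class step described above: the paired indication marks give a pixel-to-millimetre scale, and the measured joint gap is compared against a threshold. The 300 mm mark spacing and 60 mm danger threshold are illustrative assumptions, not values from the paper.

```python
def classify_expansion_joint(left_mark_px, right_mark_px, joint_gap_px,
                             mark_spacing_mm=300.0, danger_threshold_mm=60.0):
    """Convert the measured joint gap from pixels to millimetres using the known
    spacing between the paired indication marks, then classify the joint."""
    mm_per_px = mark_spacing_mm / abs(right_mark_px - left_mark_px)
    joint_width_mm = joint_gap_px * mm_per_px
    status = "dangerous" if joint_width_mm > danger_threshold_mm else "normal"
    return joint_width_mm, status

# Example: marks 800 px apart, measured gap of 170 px -> ~63.8 mm -> "dangerous"
print(classify_expansion_joint(left_mark_px=410, right_mark_px=1210, joint_gap_px=170))
```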

A Study on the Improvement of Hydrogen Detection Inspection Method of Hydrogen Cylinder on Hydrogen Bus (수소버스 사용 내압용기 수소검출량 검사방법 개선을 위한 연구)

  • Kim, Hyunjun;Weo, Unseok;Jo, Hyunwoo;Lee, Hyeoncheol;Hwang, Taejun;Lee, Hosang;Ryu, Ikhui;Choi, Sookwang;Oh, Youngkyu;Park, Sungwook
    • Journal of Auto-vehicle Safety Association / v.13 no.1 / pp.51-56 / 2021
  • As hydrogen is classified as an eco-friendly fuel, vehicles using hydrogen fuel are being developed worldwide. Vehicle hydrogen fuel is stored in cylinders at 70 MPa, so there is a high risk of explosion. Therefore, it is important to inspect the hydrogen cylinders in used vehicles. This study was conducted to improve the inspection method for the cylinders currently mounted on used hydrogen buses. The inspection method is an image analysis method using a camera. A calculation algorithm was developed to quantitatively check the amount of hydrogen leakage by the image method. When a contact angle element was added to the calculation algorithm suggested by the GTR regulation and the result was compared with the experimental data of the GTR regulation, the algorithm showed a reliability of 94%, confirming good agreement.
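
The abstract mentions adding a contact-angle element to the GTR bubble-based leak calculation. As a purely geometric sketch, a soap bubble on the cylinder surface can be modelled as a spherical cap parameterised by its base radius and contact angle; treating the leak estimate this way is an assumption for illustration, not the paper's exact algorithm.

```python
import math

def bubble_volume_mm3(base_radius_mm, contact_angle_deg):
    """Volume of a bubble modelled as a spherical cap with the given base radius
    and contact angle (standard spherical-cap geometry)."""
    theta = math.radians(contact_angle_deg)
    h = base_radius_mm * (1.0 - math.cos(theta)) / math.sin(theta)  # cap height
    return math.pi * h * (3.0 * base_radius_mm**2 + h**2) / 6.0

# A hemispherical bubble (90 deg contact angle) with a 2 mm base radius
print(f"{bubble_volume_mm3(2.0, 90.0):.2f} mm^3")  # ~16.76 mm^3 (= 2/3 * pi * r^3)
```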

A Lane-Departure Identification Based on Linear Regression and Symmetry of Lane-Related Parameters (차선관련 파라미터의 대칭성과 선형회귀에 기반한 차선이탈 인식)

  • Yi Un-Kun;Lee Joon-Woong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.435-444 / 2005
  • This paper presents a lane-departure identification (LDI) algorithm for a vehicle traveling on a structured road. The algorithm makes up for the weak points of the former method based on the EDF[1] by introducing a Lane Boundary Pixel Extractor (LBPE), the well-known Hough transform, and linear regression. As a filter that extracts pixels expected to lie on lane boundaries, the LBPE plays an important role in enhancing the robustness of the LDI. Using the pixels from the LBPE, the Hough transform provides the lane-related parameters, composed of orientation and distance, which are used in the LDI. The proposed LDI is based on the fact that the lane-related parameters of the left and right lane boundaries are symmetrical as long as the optical axis of the camera mounted on the vehicle coincides with the center of the lane; as the axis deviates from the center of the lane, the symmetry is correspondingly lessened. In addition, the LDI exploits a linear regression of the lane-related parameters over a series of successive images, which plays the key role of determining the trend of the vehicle's traveling direction and minimizing the effect of noise. Apart from the two lane-related parameters, the proposed algorithm does not use other information such as lane width, curvature, time to lane crossing, or the offset between the center of the lane and the optical axis of the camera. The system performed successfully under various degrees of illumination and on various road types.
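
A minimal sketch of the two ideas the abstract combines: an asymmetry score from the Hough lane parameters (orientation, distance) of the left and right boundaries, and a linear regression over successive frames to separate a sustained drift from noise. The score weighting and the slope threshold are illustrative assumptions.

```python
import numpy as np

def asymmetry_score(left_params, right_params):
    """Departure cue from Hough lane parameters (angle in degrees, distance in pixels).
    With the camera axis centred in the lane, left/right parameters are roughly
    mirror-symmetric; the residual below grows as the vehicle drifts."""
    (ang_l, dist_l), (ang_r, dist_r) = left_params, right_params
    angle_asym = abs((180.0 - ang_r) - ang_l)   # deviation from mirrored orientation
    dist_asym = abs(dist_l - dist_r)            # deviation from equal offsets
    return angle_asym + 0.1 * dist_asym         # weights are illustrative assumptions

def drift_slope(scores):
    """Linear regression over successive frames: a positive slope indicates a
    sustained drift toward one lane boundary rather than random noise."""
    t = np.arange(len(scores))
    slope, _ = np.polyfit(t, np.asarray(scores, dtype=float), 1)
    return slope

history = [asymmetry_score((48.0, 210.0), (130.0, 205.0)),
           asymmetry_score((45.0, 190.0), (128.0, 226.0)),
           asymmetry_score((42.0, 170.0), (127.0, 248.0))]
print("lane departure suspected" if drift_slope(history) > 1.0 else "within lane")
```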

Design of Vehicle-mounted Loading and Unloading Equipment and Autonomous Control Method using Deep Learning Object Detection (차량 탑재형 상·하역 장비의 설계와 딥러닝 객체 인식을 이용한 자동제어 방법)

  • Soon-Kyo Lee;Sunmok Kim;Hyowon Woo;Suk Lee;Ki-Baek Lee
    • The Journal of Korea Robotics Society / v.19 no.1 / pp.79-91 / 2024
  • Large warehouses are building automation systems to increase efficiency. However, small warehouses, military bases, and local stores are unable to introduce automated logistics systems due to lack of space and budget, and they handle these tasks manually, failing to improve efficiency. To solve this problem, this study designed small loading and unloading equipment that can be mounted on transportation vehicles. The equipment can be controlled remotely and is controlled automatically from the point where pallets loaded with cargo become visible, using real-time video from an attached camera. Cargo recognition and control command generation for automatic control are achieved through a newly designed deep learning model. This model is based on the YOLOv3 structure and is optimized for the loading and unloading equipment and its mission environments. The trained model recognized 10 types of pallets with different shapes and colors with an average accuracy of 100% and estimated their state with an accuracy of 99.47%. In addition, control commands were created to insert the forks into pallets without failure in 14 scenarios assuming actual loading and unloading situations.
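
A hedged sketch of turning a detected pallet bounding box into a simple alignment command for the forks (centre the pallet in the image, then advance until it fills a target width). The image size, target width, and gains are illustrative assumptions, not the control law used in the paper.

```python
def fork_alignment_command(bbox, image_width=1280, target_width_px=900,
                           k_yaw=0.002, k_fwd=0.001):
    """Map a pallet bounding box (x_min, y_min, x_max, y_max) in pixels to a
    steer/advance command for the loading equipment."""
    x_min, _, x_max, _ = bbox
    centre_err = (x_min + x_max) / 2.0 - image_width / 2.0   # pixels off image centre
    width_err = target_width_px - (x_max - x_min)            # how far the pallet still is
    return {"yaw_rate": -k_yaw * centre_err,                 # steer toward the pallet centre
            "forward_speed": max(0.0, k_fwd * width_err)}    # stop once close enough

# Example: pallet detected left of centre and still small in the frame
print(fork_alignment_command((300, 400, 700, 760)))
```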

UAV-based Image Acquisition, Pre-processing, Transmission System Using Mobile Communication Networks (이동통신망을 활용한 무인비행장치 기반 이미지 획득, 전처리, 전송 시스템)

  • Park, Jong-Hong;Ahn, Il-Yeop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.594-596 / 2022
  • This paper presents a system for pre-processing high-definition images acquired by a camera mounted on an unmanned aerial vehicle (UAV) and transmitting them to a server over a mobile communication network. In existing UAV systems for image acquisition services, the acquired images are stored on the external storage device of the camera mounted on the UAV, and the images are checked by physically retrieving the storage device after the flight is completed. With this method, it is impossible to verify whether image acquisition or pre-processing has been performed properly before directly inspecting the image data on the external storage device. In addition, since the data is stored only on an external storage device, sharing it is cumbersome. To solve these problems, we propose a system that can remotely check images in real time. Furthermore, we propose a system and method capable of performing pre-processing, such as geo-tagging, and transmission over a mobile communication network, in addition to image acquisition through shooting on a UAV.
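
A minimal sketch of the geo-tag-and-transmit step, assuming a ground server that accepts a multipart HTTP POST over the mobile link; the endpoint URL, field names, and response shape are placeholders, not the paper's actual interface.

```python
import requests  # assumes the ground server accepts a simple multipart POST

def upload_geotagged_image(image_path, lat, lon, alt_m,
                           endpoint="https://example.com/uav/images"):  # placeholder URL
    """Send a captured frame together with its GPS fix over the mobile network,
    so operators can verify acquisition and pre-processing remotely in real time."""
    with open(image_path, "rb") as f:
        response = requests.post(
            endpoint,
            files={"image": f},
            data={"latitude": lat, "longitude": lon, "altitude_m": alt_m},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()  # assumed: server returns an ID for later retrieval
```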


Underwater Docking of an AUV Using a Visual Servo Controller (비쥬얼 서보 제어기를 이용한 자율무인잠수정의 도킹)

  • Lee, Pan-Mook;Jeon, Bong-Hwan;Lee, Chong-Moo
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference / 2002.10a / pp.142-148 / 2002
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments, oceanography, and deep-sea resources autonomously. Docking systems are required to increase the capability of AUVs, allowing them to recharge their batteries and transmit data in real time for specific underwater tasks, such as repeated jobs at the seabed. This paper presents a visual servo control system for an AUV docking into an underwater station, using a camera mounted at the nose center of the AUV. To build the visual servo control system, this paper derives an optical flow model of the camera, in which the projected motions on the image plane are described by the rotational and translational velocities of the AUV. The paper combines the optical flow equation of the camera with the AUV's equations of motion and derives a state equation for the visual servoing AUV. A discrete-time MIMO controller minimizing a cost function is proposed. The control inputs of the AUV are generated automatically from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law of the visual servoing AUV, simulations of docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
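
For reference, the standard image-plane velocity relation that optical-flow models of this kind build on, written in normalized camera coordinates; this is the generic form for a point feature, not the paper's specific state equation coupling it to the AUV dynamics.

```latex
% Image-plane velocity of a point feature (x, y) at depth Z, given the camera's
% translational velocity (v_x, v_y, v_z) and angular velocity (w_x, w_y, w_z)
\begin{aligned}
\dot{x} &= -\frac{v_x}{Z} + \frac{x\,v_z}{Z} + x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z \\
\dot{y} &= -\frac{v_y}{Z} + \frac{y\,v_z}{Z} + (1 + y^2)\,\omega_x - x y\,\omega_y - x\,\omega_z
\end{aligned}
```

Substituting the vehicle's body velocities for the camera velocities and discretizing such a relation is what yields a state equation that a discrete-time controller can act on.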


Speeding Detection and Time by Time Visualization based on Vehicle Trajectory Data

  • Onuean, Athita;Jung, Hanmin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.593-596 / 2018
  • The speed of vehicles remains a significant factor in the severity of accidents and the traffic accident rate in many parts of the world, including South Korea. The behavior in which drivers travel at speeds exceeding a posted safe threshold is known as 'speeding'. Over the past twenty years, the Korean National Police Agency (NPA) has become aware of an increased frequency of drivers who are speeding. Therefore, fixed-type ASE systems [1] have been installed on hazardous road sections of many highways. These systems monitor vehicle speeds using a camera. However, the use of ASE systems has changed driver behavior: drivers reduce speed or avoid the routes where the cameras are mounted, and it is not practical to install cameras at every possible location, so it is challenging to explore thoroughly where speeding occurs. In view of these problems, we designed and implemented a prototype visualization system in which point and color are used to show vehicle locations and associated over-speed information, combined into a comprehensive visualization application for vehicle driving. In this paper, we present an approach for detecting vehicles moving at speeds that exceed a threshold and visualizing the points where those violations occur on a map, using vehicle trajectory data collected in Daegu city, and we propose steps for exploring the data collected from those sensors. The resulting map has two layers: the first contains the dynamic vehicle trajectory data, and the second, underlying layer contains the static road network. This allows comparing vehicle speeds on each road with the known maximum safe speed of that road and presenting the results with a visualization tool. We also compared, based on vehicle trajectories, drivers who exceed the safe speed threshold on each road on weekdays and weekends. Finally, our study suggests times and locations where law enforcement should monitor with speed cameras and where traffic law enforcement should be stricter. We learned that people drive over the speed limit at midnight more than 1.9 times as often as during rush hour traffic at 8 o'clock in the morning, and 4.5 times as often as during traffic at 7 o'clock in the evening. Our study can benefit the government by helping select better locations for installing speed cameras, which would ultimately reduce police labor in traffic speed enforcement and has the potential to improve traffic safety in Daegu city.
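
A minimal sketch of the detection step: compute segment speeds from timestamped GPS fixes in a trajectory and flag those exceeding the road's posted limit. The trajectory format and the 60 km/h limit are illustrative assumptions, not the structure of the Daegu dataset.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeding_points(trajectory, limit_kmh=60.0):
    """Flag trajectory segments faster than the posted limit.
    trajectory: list of (timestamp_s, lat, lon) tuples."""
    flagged = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(trajectory, trajectory[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order fixes
        speed_kmh = haversine_m(la0, lo0, la1, lo1) / dt * 3.6
        if speed_kmh > limit_kmh:
            flagged.append((t1, la1, lo1, round(speed_kmh, 1)))
    return flagged  # points to plot on the dynamic trajectory layer
```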
