• Title/Summary/Keyword: camera image

Search Results: 4,917

Deep Learning Based Emergency Response Traffic Signal Control System

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.2 / pp.121-129 / 2023
  • In this paper, we develop a traffic signal control system for emergencies that can minimize loss of life and property by actively controlling traffic signals over a given road section. When an emergency-vehicle terminal transmits an emergency signal containing identification and GPS information, the system obtains surrounding images from cameras and analyzes them with a deep learning model to produce object information such as the location, type, and size of each object. After generating tracking information for these objects and detecting the state of the signal system, it switches the signals to emergency mode, identifies and tracks the emergency vehicle from the received GPS information, and transmits emergency control signals to the signal controllers along the vehicle's travel route. Because the emergency control signal is applied ahead of the vehicle according to the emergency signal, the emergency vehicle is not blocked by traffic, minimizing loss of life and property due to traffic obstruction.
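The handoff described above — receive an emergency signal, switch the signal system to emergency mode, and push preemption commands along the vehicle's route — can be sketched as a minimal state machine. All class, method, and intersection names below are illustrative assumptions, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class EmergencySignal:
    vehicle_id: str
    lat: float
    lon: float

class SignalSystem:
    """Minimal sketch of the emergency-mode switch described above."""
    def __init__(self):
        self.mode = "normal"
        self.tracked = None

    def on_emergency(self, sig: EmergencySignal):
        # Switch to emergency mode and start tracking the vehicle
        # identified by the terminal's GPS report.
        self.mode = "emergency"
        self.tracked = sig.vehicle_id

    def control_commands(self, route):
        # Emit green-preemption commands for the controllers along
        # the emergency vehicle's traveling route.
        if self.mode != "emergency":
            return []
        return [(intersection, "green") for intersection in route]

system = SignalSystem()
system.on_emergency(EmergencySignal("AMB-01", 37.5, 127.0))
print(system.control_commands(["J1", "J2"]))  # [('J1', 'green'), ('J2', 'green')]
```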

A Study of Tram-Pedestrian Collision Prediction Method Using YOLOv5 and Motion Vector (YOLOv5와 모션벡터를 활용한 트램-보행자 충돌 예측 방법 연구)

  • Kim, Young-Min;An, Hyeon-Uk;Jeon, Hee-gyun;Kim, Jin-Pyeong;Jang, Gyu-Jin;Hwang, Hyeon-Chyeol
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.561-568 / 2021
  • In recent years, autonomous driving has become a high-value technology attracting attention in both science and industry. For smooth self-driving, an object must be detected and its movement speed estimated accurately in real time. CNN-based deep learning algorithms combined with conventional dense optical flow are too computationally expensive to do this in real time. In this paper, using a single camera image, fast object detection is performed with the YOLOv5 deep learning algorithm, and the speed of each detected object is estimated quickly using a local dense optical flow, a modification of conventional dense optical flow restricted to the detected object region. Based on this algorithm, we present a system that predicts collision time and probability, and through this system we aim to contribute to the prevention of tram accidents.
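The collision-time prediction step can be sketched as follows. The detector and optical-flow outputs are stubbed with hypothetical values (a pedestrian range and closing speed), since the paper's trained models and tram footage are not available:

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Predicted time to collision; None if the object is not approaching."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

def collision_risk(ttc, threshold_s=3.0):
    """Simple risk flag: warn when the predicted TTC falls below a threshold."""
    return ttc is not None and ttc < threshold_s

# Hypothetical pipeline output: a pedestrian detected at 12 m by the
# detector, approaching at 4 m/s per the local dense optical flow.
ttc = time_to_collision(12.0, 4.0)
print(ttc)                  # 3.0 seconds
print(collision_risk(ttc))  # False (exactly at the 3 s threshold)
```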

A Study on Interior Simulation based on Real-Room without using AR Platforms (AR 플랫폼을 사용하지 않는 실제 방 기반 인테리어 시뮬레이션 연구)

  • Choi, Gyoo-Seok;Kim, Joon-Geon;Lim, Chang-Muk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.111-120 / 2022
  • Before purchasing furniture, it is essential to confirm that it matches the other structures in the room. In the untact (contactless) marketing environment brought on by the COVID-19 crisis, this has become an even more important factor. Accordingly, with the advent of AR open sources such as ARCore and ARKit, methods of measuring length using AR (Augmented Reality) have emerged for furniture-arrangement interior simulation. Because these existing AR methods generate a depth map from a flat camera image and involve complex three-dimensional calculations, they show limitations in tasks that require accurate room dimensions from a smartphone. In this paper, we propose a method to accurately measure the size of a room using only the accelerometer and gyroscope sensors built into smartphones, without using ARCore or ARKit. In addition, as an application example of the presented technique, a method for applying a pre-designed interior to each room is presented.
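The sensor-only measurement idea can be illustrated with a minimal sketch: walking from one wall to the opposite wall while double-integrating the accelerometer reading along the walking axis yields the room length. The sampling rate and acceleration profile below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

def distance_from_accel(accel, dt):
    """Double-integrate acceleration (m/s^2) sampled every dt seconds
    to get total displacement, assuming the walk starts at rest."""
    vel = np.cumsum(accel) * dt   # first integration: velocity
    pos = np.cumsum(vel) * dt     # second integration: position
    return pos[-1]

# Hypothetical walk at 100 Hz sampling: accelerate at 0.5 m/s^2 for 1 s,
# coast for 4 s, decelerate at -0.5 m/s^2 for 1 s -> 2.5 m traveled.
dt = 0.01
accel = np.concatenate([np.full(100, 0.5), np.zeros(400), np.full(100, -0.5)])
print(round(distance_from_accel(accel, dt), 2))  # 2.5
```

In practice gyroscope data would be used to rotate the raw readings into the walking axis and to reject tilt; the sketch assumes that alignment has already been done.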

Application of Deep Learning-based Object Detection and Distance Estimation Algorithms for Driving to Urban Area (도심로 주행을 위한 딥러닝 기반 객체 검출 및 거리 추정 알고리즘 적용)

  • Seo, Juyeong;Park, Manbok
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.3 / pp.83-95 / 2022
  • This paper proposes a system that performs object detection and distance estimation for application to autonomous vehicles. Object detection is performed by a network that adjusts its split grid to the input image ratio, based on the widely used deep learning model YOLOv4, and is trained on a custom dataset. The distance to each detected object is estimated using its bounding box and a homography. Experiments show that the proposed method improves overall detection performance at a processing speed close to real time. Compared with the original YOLOv4, the total mAP of the proposed method increased by 4.03%. Recognition accuracy improved for objects that frequently appear in urban driving, such as pedestrians, vehicles, construction sites, and PE drums. The processing speed is approximately 55 FPS, and the average distance estimation error was 5.25 m in the X coordinate and 0.97 m in the Y coordinate.
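The bounding-box-plus-homography idea can be sketched as follows: the bottom-center pixel of a detection box is assumed to touch the road plane, so mapping that pixel through a calibrated image-to-ground homography gives the object's ground coordinates. The 3x3 matrix `H` below is a hypothetical calibration for illustration, not the paper's:

```python
import numpy as np

def ground_position(bbox, H):
    """Project the bottom-center of a bounding box (x1, y1, x2, y2)
    through homography H to ground-plane (X, Y) coordinates in meters."""
    x1, y1, x2, y2 = bbox
    u, v = (x1 + x2) / 2.0, y2            # bottom-center pixel on the road
    X, Y, w = H @ np.array([u, v, 1.0])   # homogeneous mapping
    return X / w, Y / w                   # perspective divide

# Hypothetical homography: a pure scale where 100 px maps to 1 m.
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])
print(ground_position((90, 40, 110, 200), H))  # (1.0, 2.0)
```

A real calibration would be estimated from known ground points (e.g. lane markings), giving a full projective matrix rather than this diagonal toy example.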

Freeway Bus-Only Lane Enforcement System Using Infrared Image Processing Technique (적외선 영상검지 기술을 활용한 고속도로 버스전용차로 단속시스템 개발)

  • Jang, Jinhwan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.67-77 / 2022
  • An automatic freeway bus-only lane enforcement system was developed and assessed in a real-world environment. Observation of a bus-only lane on the Youngdong freeway, South Korea, revealed that approximately 99% of vehicles violated the high-occupancy vehicle (HOV) lane regulation. However, the current enforcement by the police not only exhibits a low enforcement rate but also induces unnecessary safety and delay concerns. Since vehicles carrying six or more passengers are permitted to enter freeway bus-only lanes, identifying the number of passengers in a vehicle is a core technology for a freeway bus-only lane enforcement system. To that end, infrared cameras and the You Only Look Once (YOLOv5) deep learning algorithm were utilized. The performance of the developed system was assessed in two environments: a controlled test bed and a real-world freeway. The errors under the test-bed and real-world environments were 7% and 8%, respectively, indicating satisfactory outcomes. The developed system would contribute to efficient freeway bus-only lane operations and eliminate the safety and delay concerns caused by the current manual enforcement procedures.

Smart Radar System for Life Pattern Recognition (생활패턴 인지가 가능한 스마트 레이더 시스템)

  • Sang-Joong Jung
    • Journal of the Institute of Convergence Signal Processing / v.23 no.2 / pp.91-96 / 2022
  • With current camera-based technology, sensor-based recognition of basic life patterns is inconvenient for obtaining accurate data; commercial products likewise struggle to collect accurate data and cannot take into account the motive, cause, or psychological effect of behavior. In this paper, radar technology for life pattern recognition, which measures the distance, speed, and angle to an object by transmitting a waveform designed to detect nearby people or objects in daily life and processing the reflected received signal, is applied to address issues such as privacy protection in existing image-based services. To implement the proposed system, an RF chipset based on the TI IWR1642 chip was controlled for 60 GHz millimeter-wave FMCW transmission and reception, a module for distance/speed/angle detection was developed, and the accompanying signal processing software was implemented. By extracting personalized life patterns through quantitative meta-analysis of living information, the system is expected to enable analysis of individual life patterns, including self-management and behavior sequences, in security and safe-guard applications.
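In an FMCW radar of this kind, range follows from the beat frequency of the received chirp: R = c·f_b / (2S), where S is the chirp slope. A minimal numpy sketch of the range FFT, with a synthetic beat signal and hypothetical chirp parameters standing in for the IWR1642 output:

```python
import numpy as np

C = 3e8        # speed of light, m/s
SLOPE = 30e12  # chirp slope, Hz/s (hypothetical 60 GHz FMCW setting)
FS = 5e6       # ADC sampling rate, Hz
N = 1024       # samples per chirp

def estimate_range(beat_signal):
    """Estimate target range from the peak bin of the range FFT."""
    spectrum = np.abs(np.fft.rfft(beat_signal))
    spectrum[0] = 0.0                        # suppress the DC bin
    f_beat = np.argmax(spectrum) * FS / N    # beat frequency, Hz
    return C * f_beat / (2 * SLOPE)          # R = c * f_b / (2 S)

# Synthetic target at 5 m -> beat frequency f_b = 2*S*R/c = 1 MHz.
t = np.arange(N) / FS
beat = np.cos(2 * np.pi * 1e6 * t)
print(round(estimate_range(beat), 2))  # 5.0
```

Velocity and angle would come from a second FFT across chirps (Doppler) and across receive antennas, respectively; only the range dimension is sketched here.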

A Simple Model of Shrinkage Cracking Development for Kaolinite (수축 균열 발달 과정을 위한 단순 모델)

  • Min, Tuk-Ki;Nhat, Vo Dai
    • Journal of the Korean Geotechnical Society / v.23 no.9 / pp.29-37 / 2007
  • Laboratory experiments were conducted on kaolinite to investigate the development of shrinkage cracking and to propose a simple model. An image analysis method incorporating a control point selection (CPS) technique is used to process and analyze images of soil cracking captured with a digital camera. The distributions of crack length increment and crack area increment evolve as a three-step process, and these steps are regarded as stages of soil cracking: the primary crack, secondary crack, and shrinkage crack stages. For crack area, the primary and secondary stages end at normalized gravimetric water contents (NGWC) of 0.92 and 0.70, respectively, for all specimen thicknesses. For crack length, the primary stage also ends at an NGWC of 0.92, while the secondary stage ends at NGWCs of 0.79, 0.82, and 0.85 for sample thicknesses of 0.5, 1.0, and 2.0 cm, respectively. Based on the experimental results, the distributions of crack length increment and crack area increment appear to vary linearly with decreasing NGWC. Therefore, the development of shrinkage cracking is modeled as a simple combination of three linear expressions.
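A three-segment model of this form can be sketched as a continuous piecewise-linear function of NGWC. The breakpoints (0.92 and 0.70 for crack area) come from the abstract, but the slopes below are hypothetical placeholders, since the fitted coefficients are not given:

```python
def crack_area_increment(ngwc, slopes=(0.1, 0.5, 0.2), breaks=(0.92, 0.70)):
    """Piecewise-linear crack development in normalized gravimetric water
    content (NGWC): primary, secondary, and shrinkage stages. Slopes are
    hypothetical; breakpoints follow the reported 0.92 / 0.70 for crack area.
    Cracking accumulates as NGWC decreases from 1.0 (fully wet)."""
    b1, b2 = breaks
    s1, s2, s3 = slopes
    if ngwc >= b1:  # primary stage
        return s1 * (1.0 - ngwc)
    if ngwc >= b2:  # secondary stage (continuous at b1)
        return s1 * (1.0 - b1) + s2 * (b1 - ngwc)
    # shrinkage stage (continuous at b2)
    return s1 * (1.0 - b1) + s2 * (b1 - b2) + s3 * (b2 - ngwc)
```

Each segment is continuous with the previous one at its breakpoint, so the modeled crack area grows monotonically as the specimen dries.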

Scene Change Detection Algorithm for Video Abstract on Specific Movie (특수 영상에서 비디오 요약을 위한 장면 전환 검출 알고리즘)

  • Chung, Myoung-Beom;Kim, Jae-Kyung;Ko, Il-Ju;Jang, Dae-Sik
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.65-74 / 2009
  • Scene change detection is a preprocessing step for indexing and searching video information in video search systems, and is a very important technology for overall performance. Existing scene change detection methods use a single feature, such as pixel value difference or histogram difference, or mix single features that complement each other. However, the accuracy of these approaches is very poor for specific video such as infrared camera or night-shot footage. Therefore, this paper proposes a method that combines a color histogram with the KLT algorithm for scene change detection in such specific video. To verify its usefulness, we ran experiments using the color histogram alone and the KLT algorithm combined with the color histogram. As a result, the evaluation index of the proposed method improved by about 11.4% on the specific video.
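The color-histogram branch of such a detector can be sketched as follows (the KLT feature-tracking term is omitted, and the threshold is a hypothetical choice, not the paper's tuned value):

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized per-channel color histogram of an HxWx3 uint8 frame."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def is_scene_change(prev, curr, threshold=0.5):
    """Flag a cut when the L1 histogram distance exceeds the threshold."""
    d = np.abs(color_histogram(prev) - color_histogram(curr)).sum()
    return bool(d > threshold)

dark = np.zeros((48, 64, 3), dtype=np.uint8)        # e.g. a night shot
bright = np.full((48, 64, 3), 220, dtype=np.uint8)  # abrupt cut to a bright scene
print(is_scene_change(dark, dark))    # False
print(is_scene_change(dark, bright))  # True
```

For infrared or night footage the histogram concentrates in a few dark bins, which is exactly why the paper augments it with KLT feature tracking rather than relying on color alone.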

A Study on Gesture Interface through User Experience (사용자 경험을 통한 제스처 인터페이스에 관한 연구)

  • Yoon, Ki Tae;Cho, Eel Hea;Lee, Jooyoup
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.839-849 / 2017
  • Recently, the role of the kitchen has evolved from a space for mere survival to a space that reflects present-day life and culture. Along with these changes, the use of IoT technology is spreading, and new smart devices for the kitchen are being developed and deployed. The user experience of these smart devices is also becoming important. For natural interaction between a user and a computer, better interactions can be expected based on context awareness. This paper examines a Natural User Interface (NUI) that does not require touching the device, based on the user interface (UI) of a smart device used in the kitchen. In this method, image processing technology is used to recognize the user's hand gestures with the camera attached to the device, and the recognized hand shapes are applied to the interface. The gestures used in this study are proposed according to the user's context and situation, and five kinds of gestures are classified and used in the interface.

Research of the Delivery Autonomy and Vision-based Landing Algorithm for Last-Mile Service using a UAV (무인기를 이용한 Last-Mile 서비스를 위한 배송 자동화 및 영상기반 착륙 알고리즘 연구)

  • Hanseob Lee;Hoon Jung
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.160-167 / 2023
  • This study focuses on the development of a last-mile delivery service that uses unmanned vehicles to deliver goods directly to the end consumer: drones perform autonomous delivery missions, and an image-based precision landing algorithm enables handoff to a robot at an intermediate facility. As the logistics market grows rapidly, parcel volumes increase exponentially each year; however, because delivery fees are low, the workload of delivery personnel is increasing and the quality of delivery service is declining. To address this issue, the research team studied a last-mile delivery service using unmanned vehicles and, in this paper, researched the technologies necessary for drone-based goods transportation. The flight scenario begins with the drone carrying the goods from a pickup location to the rooftop of the building containing the final delivery destination. A handoff facility sits on the rooftop, and the drone must land accurately on a marker on the roof. The mission is complete once the goods are delivered and the drone returns to its original location. The research team developed a mission planning algorithm to perform this scenario automatically and constructed an algorithm that recognizes the marker through a camera sensor and achieves a precision landing. The performance of the developed system was verified through multiple trial operations at ETRI.
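The precision-landing step can be sketched as a proportional controller on the marker's pixel offset from the image center. The camera model, pixel-to-radian scale, and gain below are hypothetical assumptions, since the paper's controller details are not given in the abstract:

```python
def landing_correction(marker_px, image_size, altitude_m,
                       px_to_rad=0.002, gain=0.5):
    """Convert the marker's pixel offset from the image center into a
    lateral velocity command (m/s). px_to_rad is a hypothetical
    small-angle camera scale (radians per pixel)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx_px, dy_px = marker_px[0] - cx, marker_px[1] - cy
    # Small-angle approximation: metric offset on the rooftop plane.
    dx_m = dx_px * px_to_rad * altitude_m
    dy_m = dy_px * px_to_rad * altitude_m
    return -gain * dx_m, -gain * dy_m  # command the drone toward the marker

# Marker detected 100 px right of center while hovering at 10 m altitude:
print(landing_correction((420, 240), (640, 480), 10.0))
```

As the drone descends, `altitude_m` shrinks and the same pixel offset maps to a smaller metric error, so the correction naturally tightens near touchdown.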