• Title/Summary/Keyword: recognition-rate

Search Results: 2,809

A Study on Road Traffic Volume Survey Using Vehicle Specification DB (자동차 제원 DB를 활용한 도로교통량 조사방안 연구)

  • Ji min Kim;Dong seob Oh
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.2
    • /
    • pp.93-104
    • /
    • 2023
  • Currently, permanent road traffic volume surveys under the Road Act are conducted using intrusive Automatic Vehicle Classification (AVC) equipment to classify vehicles into 12 categories. However, intrusive AVC equipment inevitably experiences friction with vehicles, and physical damage to sensors from road cracks, plastic deformation, and road construction lowers the operation rate. As a result, accuracy and reliability in actual operation deteriorate, and maintenance costs increase. With the recent development of ITS technology, research to replace intrusive AVC equipment is being conducted; however, classifying vehicles into 12 categories has required multiple devices or self-built DB operations. Therefore, this study sought to devise a method for classifying vehicles into 12 categories using vehicle specification information from the Vehicle Management Information System (VMIS), which is collected and managed in accordance with the Motor Vehicle Management Act. In the future, this approach is expected to support the upgrading and diversification of road traffic statistics based on vehicle specifications, such as the introduction of a road traffic survey system using Automatic Number Plate Recognition (ANPR) and the classification of eco-friendly vehicles.
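The core idea above — joining an ANPR plate read against a specification DB to assign a survey category — can be sketched as follows. This is a minimal, hypothetical illustration: the plate numbers, field names, and category rules are invented stand-ins, not the actual VMIS schema or the study's 12-category rules.

```python
# Hypothetical stand-in for the vehicle specification DB (plate -> specs).
SPEC_DB = {
    "12GA3456": {"type": "passenger", "axles": 2, "gvw_tons": 1.5},
    "84NO7890": {"type": "truck", "axles": 3, "gvw_tons": 12.0},
}

def classify(plate: str) -> str:
    """Map a recognized plate to a (simplified) traffic-survey category."""
    spec = SPEC_DB.get(plate)
    if spec is None:
        return "unknown"          # plate not registered, or ANPR misread
    if spec["type"] == "passenger":
        return "category 1"       # e.g. passenger cars
    if spec["axles"] >= 3:
        return "category 8+"      # multi-axle trucks
    return "category 5"           # light trucks

print(classify("12GA3456"))  # category 1
print(classify("84NO7890"))  # category 8+
```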

Deep Learning-based Object Detection of Panels Door Open in Underground Utility Tunnel (딥러닝 기반 지하공동구 제어반 문열림 인식)

  • Gyunghwan Kim;Jieun Kim;Woosug Jung
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.3
    • /
    • pp.665-672
    • /
    • 2023
  • Purpose: An underground utility tunnel is a facility that jointly houses urban infrastructure such as electricity, water, and gas, and it suffers from condensation problems due to a lack of airflow. This paper aims to prevent electrical leakage fires caused by condensation by detecting whether the control panel doors in the underground utility tunnel are open, using a deep learning model. Method: YOLO, a deep learning object recognition model, was trained to recognize the opening and closing of control panel doors using video data captured by a robot patrolling the underground utility tunnel. Image augmentation was used to improve the recognition rate. Result: Among the augmentation techniques, we compared the performance of the YOLO model trained with mosaic augmentation against the model trained without it and found that the mosaic technique performed better. The mAP across all classes was 0.994, a high result. Conclusion: The model was able to detect the control panel even when the lights were off or other objects were present in the underground utility tunnel. This enables effective management of the tunnel and helps prevent disasters.
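The mosaic augmentation mentioned above stitches four training images into one, so each sample exposes the model to objects at varied scales and contexts. A minimal sketch of the stitching step, with toy 2×2 grids of pixel values standing in for images (real pipelines operate on full images and also remap the bounding boxes):

```python
def mosaic(imgs):
    """Stitch four equally sized HxW images into one 2Hx2W mosaic."""
    a, b, c, d = imgs
    top = [ra + rb for ra, rb in zip(a, b)]     # a | b
    bottom = [rc + rd for rc, rd in zip(c, d)]  # c | d
    return top + bottom

def img(v):
    """Uniform 2x2 test image filled with value v."""
    return [[v, v], [v, v]]

m = mosaic([img(1), img(2), img(3), img(4)])
print(m)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```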

A Study on Performance Improvement of ConTracer Using Taguchi Method (다구찌법을 이용한 컨테이너화물 안전수송장치 ConTracer의 성능향상에 관한 연구)

  • Choi, Hyung-Rim;Kim, Jae-Joong;Kang, Moo-Hong;Shon, Jung-Rock;Shin, Joong-Jo;Lee, Ho-In;Kim, Gwang-Pil;Kim, Chae-Soo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.14 no.2
    • /
    • pp.23-31
    • /
    • 2009
  • Since the 9/11 terrorist attacks against the USA, a new paradigm of "supply chain security" has been established, and much research on supply chain security is being conducted by foreign companies and research institutes. Domestically, however, the term "supply chain security" is not yet familiar, the security paradigm has not been adopted in logistics, and little research has been done on the subject. Recently, with the development of "ConTracer," a supply chain security technology to be used as equipment for safe container cargo transportation based on RFID technology, related research has begun to gain momentum. The key issues in developing equipment for container transportation safety are achieving both a high recognition rate and a sufficient recognition distance. To this end, this study tested the ConTracer (433 MHz and 2.4 GHz types) using the Taguchi method. According to our test results, for the 433 MHz type it is slightly more effective for the reader to face the front-right side, while for the 2.4 GHz type the reader direction makes no difference in sensitivity. The tests also confirmed that, as expected, the antenna is best installed on the outside for both types.
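The Taguchi method compares factor settings by a signal-to-noise (S/N) ratio; for a response like recognition rate, where bigger is better, the larger-the-better form applies. A minimal sketch, with illustrative sample readings rather than the paper's actual measurements:

```python
import math

def sn_larger_is_better(ys):
    """S/N = -10 * log10(mean(1 / y^2)); a higher S/N means a more robust setting."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# Two hypothetical reader-orientation settings, each measured three times:
front_right = [0.95, 0.93, 0.96]   # recognition rates
front_only  = [0.88, 0.91, 0.85]

print(sn_larger_is_better(front_right) > sn_larger_is_better(front_only))  # True
```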

A Study on the Accuracy Comparison of Object Detection Algorithms for 360° Camera Images for BIM Model Utilization (BIM 모델 활용을 위한 360° 카메라 이미지의 객체 탐지 알고리즘 정확성 비교 연구)

  • Hyun-Chul Joo;Ju-Hyeong Lee;Jong-Won Lim;Jae-Hee Lee;Leen-Seok Kang
    • Land and Housing Review
    • /
    • v.14 no.3
    • /
    • pp.145-155
    • /
    • 2023
  • Recently, with the widespread adoption of Building Information Modeling (BIM) technology in the construction industry, various object detection algorithms have been used to verify discrepancies between 3D models and actual construction elements. Since the characteristics of objects vary with the type of construction facility, such as buildings, bridges, and tunnels, appropriate object detection methods need to be employed. Object detection also requires initial object images, which can be acquired by various means such as drones and smartphones. This study uses a 360° camera optimized for imaging tunnel interiors to capture initial images of the tunnel structures of railway and road facilities. Various object detection methodologies, including the YOLO, SSD, and R-CNN algorithms, were applied to detect actual objects in the captured images. The Faster R-CNN algorithm achieved a higher recognition rate and mAP value than the SSD and YOLO v5 algorithms, and the small difference between its minimum and maximum recognition rates indicated consistent detection ability. Considering the increasing adoption of BIM in current railway and road construction projects, this research highlights the potential of 360° cameras and object detection methodologies for tunnel facility sections, aiming to expand their application in maintenance.
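Detector comparisons like the one above rest on intersection-over-union (IoU): a detection counts as correct when its box overlaps the ground truth above a threshold, and mAP aggregates precision over classes and thresholds. A minimal IoU sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```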

Design of CNN-based Braille Conversion and Voice Output Device for the Blind (시각장애인을 위한 CNN 기반의 점자 변환 및 음성 출력 장치 설계)

  • Seung-Bin Park;Bong-Hyun Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.3
    • /
    • pp.87-92
    • /
    • 2023
  • As times change, information becomes more diverse, as do the methods of obtaining it. About 80% of the information acquired in daily life comes through the visual sense, yet visually impaired people have limited ability to interpret visual materials. This is why Braille, a writing system for the blind, was developed. However, the Braille literacy rate among the blind is only 5%, and as demand from blind people for diverse platforms and materials grows, products for the blind are being developed and produced. One example is Braille books, which appear to have more disadvantages than advantages; unlike for non-disabled people, access to information remains very difficult. In this paper, we designed a CNN-based Braille conversion and voice output device to make it easier for visually impaired people to obtain information than with conventional methods. The device aims to improve quality of life by converting books, text images, or handwritten images that are not available in Braille into Braille through camera recognition, with a function that can also convert them into voice according to the needs of the blind.
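After the camera and CNN recognize the text, the remaining step is mapping characters to Braille cells. A minimal sketch using the Unicode Braille Patterns block (U+2800 plus a dot bitmask, dot 1 = 0x01 through dot 6 = 0x20); only a few letters are mapped here, and a real converter would cover full Grade 1/2 contraction rules:

```python
# Partial Grade 1 Braille dot assignments (letters a-e only, for illustration).
DOTS = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5)}

def to_braille(text):
    """Convert mapped characters to Unicode Braille cells."""
    cells = []
    for ch in text.lower():
        mask = sum(1 << (d - 1) for d in DOTS[ch])  # dots -> bitmask
        cells.append(chr(0x2800 + mask))
    return "".join(cells)

print(to_braille("bad"))  # ⠃⠁⠙
```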

A Study on the Real-time Recognition Methodology for IoT-based Traffic Accidents (IoT 기반 교통사고 실시간 인지방법론 연구)

  • Oh, Sung Hoon;Jeon, Young Jun;Kwon, Young Woo;Jeong, Seok Chan
    • The Journal of Bigdata
    • /
    • v.7 no.1
    • /
    • pp.15-27
    • /
    • 2022
  • In the past five years, the fatality rate of single-vehicle accidents has been 4.7 times higher than that of all accidents, so a system that can detect and respond to single-vehicle accidents immediately is needed. The IoT (Internet of Things)-based real-time traffic accident recognition system proposed in this study works as follows. An IoT sensor that detects impacts and vehicle ingress is attached to the guardrail; when an impact occurs, images of the accident site are analyzed with artificial intelligence technology and transmitted to a rescue organization, enabling quick rescue operations that minimize damage. We implemented an IoT sensor module that recognizes vehicles entering the monitoring area and detects impacts on the guardrail, and an AI-based object detection module trained on vehicle image data. In addition, a monitoring and operation module that manages sensor information and image data in an integrated manner was implemented. For validation, we confirmed that all target values were met by measuring the shock detection transmission speed, the object detection accuracy for vehicles and people, and the sensor failure detection accuracy. In the future, we plan to apply the system to actual roads to verify its validity with real data and to commercialize it. This system will contribute to improving road safety.

A Study on the Impact of AI Edge Computing Technology on Reducing Traffic Accidents at Non-signalized Intersections on Residential Road (이면도로 비신호교차로에서 AI 기반 엣지컴퓨팅 기술이 교통사고 감소에 미치는 영향에 관한 연구)

  • Young-Gyu Jang;Gyeong-Seok Kim;Hye-Weon Kim;Won-Ho Cho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.2
    • /
    • pp.79-88
    • /
    • 2024
  • We used actual field data to analyze, from a traffic engineering perspective, how AI and edge computing technologies affect the reduction of traffic accidents. By providing object information from 20 m behind via AI object recognition, the driver secures a response time of about 3.6 seconds, and with edge technology, information is displayed within 0.5 to 0.8 seconds, giving the driver time to respond to intersection situations. In addition, the analysis showed that stopping before entering the intersection is possible when speed is controlled to 11-12 km/h at the 10 m point of the intersection approach and 20 km/h at the 20 m point. As a result, traffic accidents can be reduced when the high object recognition rate of AI technology, the real-time information provided by edge technology, and appropriate speed management at intersection approaches are combined.
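A back-of-envelope check of the timing figures above: a vehicle approaching at 20 km/h covers the 20 m detection range in 3.6 s, and subtracting the worst-case 0.8 s edge display latency still leaves the driver roughly 2.8 s to react.

```python
def time_to_cover(distance_m, speed_kmh):
    """Seconds needed to cover a distance at a given speed (km/h -> m/s via 3.6)."""
    return distance_m / (speed_kmh / 3.6)

t = time_to_cover(20, 20)
print(t)                  # 3.6 seconds over the 20 m detection range
print(round(t - 0.8, 1))  # 2.8 s left after worst-case edge display latency
```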

Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Seungyong Kim;Incheol Hwang;Dongsik Kim;Jungjae Shin;Seunggap Yong
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.199-205
    • /
    • 2024
  • Purpose: This study aims to improve the recognition rate of Auto People Counting (APC) so that the number of remaining evacuees in disaster-vulnerable facilities, such as nursing homes, can be accurately identified and provided to firefighting and other response agencies in the event of a disaster. Method: A baseline was established using CNN (Convolutional Neural Network) models to improve the algorithm for recognizing images of people entering and leaving through cameras installed in actual disaster-vulnerable facilities operating APC systems. Various algorithms were analyzed, the top seven candidates were selected, and transfer learning models were used to identify the algorithm with the best performance. Result: The experiments confirmed the precision and recall of the DenseNet201 and ResNet152V2 models, which exhibited the best performance in terms of time and accuracy. Both models achieved 100% accuracy on all labels, with the DenseNet201 model showing superior performance overall. Conclusion: The optimal algorithm applicable to APC was selected from among various artificial intelligence algorithms. Further research on algorithm analysis and training is required to accurately identify people entering and leaving disaster-vulnerable facilities in various disaster situations, such as emergencies.
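The comparison above is stated in precision and recall; both follow directly from the confusion counts. A minimal sketch with illustrative numbers (not the paper's results):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 98 correctly counted entries, 2 false alarms, 1 missed person:
p, r = precision_recall(tp=98, fp=2, fn=1)
print(round(p, 3), round(r, 3))  # 0.98 0.99
```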

Research on Deep Learning-Based Methods for Determining Negligence through Traffic Accident Video Analysis (교통사고 영상 분석을 통한 과실 판단을 위한 딥러닝 기반 방법 연구)

  • Seo-Young Lee;Yeon-Hwi You;Hyo-Gyeong Park;Byeong-Ju Park;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.4
    • /
    • pp.559-565
    • /
    • 2024
  • Research on autonomous vehicles is being actively conducted. As autonomous vehicles emerge, there will be a transitional period in which traditional and autonomous vehicles coexist, potentially leading to a higher accident rate. Currently, when a traffic accident occurs, the fault ratio is determined according to criteria set by the General Insurance Association of Korea, but investigating the type of accident takes substantial time. Fault ratio disputes are also increasing, with requests for reconsideration even after the fault ratio has been determined. To reduce these temporal and material costs, we propose a deep learning model that automatically determines fault ratios. In this study, we aimed to determine fault ratios from accident videos using an image classification model based on ResNet-18 and video action recognition using TSN (temporal segment networks). If commercialized, this model could significantly reduce the time required to determine fault ratios. Moreover, it provides an objective metric that can be offered to the parties involved, potentially alleviating fault ratio disputes.
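TSN-style action recognition samples video sparsely: the frame sequence is split into K equal segments and one snippet is drawn from each, so the whole accident clip is covered at low cost. A minimal sketch of that sampling step, with frame indices standing in for frames (segment count and clip length are illustrative):

```python
import random

def tsn_sample(num_frames, k, rng=random):
    """Pick one random frame index from each of k equal segments of the clip."""
    seg = num_frames / k
    return [int(i * seg + rng.random() * seg) for i in range(k)]

random.seed(0)
print(tsn_sample(90, 3))  # one frame index from each third of a 90-frame clip
```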

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require a long time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a deep learning-based method for detecting accompanying status using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope readings. Accompanying status is defined as a subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompaniment and conversation. First, a data preprocessing method was introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timing of data collected from different sensors.
Normalization was performed for each x, y, and z axis of the sensor data, and sequence data were generated with the sliding window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the model weights were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained with the adaptive moment estimation (Adam) optimization algorithm, with a mini-batch size of 128. We applied dropout to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that enable trained models, tailored to the training data, to be transferred to evaluation data that follows a different distribution.
A model capable of robust recognition performance against changes in data not considered at training time is expected to result.
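The preprocessing steps described in this abstract — per-axis normalization followed by sliding-window sequence generation — can be sketched as below. Window length and stride are illustrative, not the paper's settings:

```python
def normalize(axis_vals):
    """Scale one axis of sensor readings to zero mean and unit variance."""
    n = len(axis_vals)
    mean = sum(axis_vals) / n
    var = sum((v - mean) ** 2 for v in axis_vals) / n
    std = var ** 0.5 or 1.0            # guard against a constant axis
    return [(v - mean) / std for v in axis_vals]

def sliding_windows(seq, length, stride):
    """Cut a sample sequence into overlapping fixed-length windows."""
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, stride)]

x_axis = normalize([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(sliding_windows(x_axis, length=4, stride=1))  # 3 overlapping sequences
```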