• Title/Summary/Keyword: YOLOv10


AB9: A neural processor for inference acceleration

  • Cho, Yong Cheol Peter;Chung, Jaehoon;Yang, Jeongmin;Lyuh, Chun-Gi;Kim, HyunMi;Kim, Chan;Ham, Je-seok;Choi, Minseok;Shin, Kyoungseon;Han, Jinho;Kwon, Youngsu
    • ETRI Journal / v.42 no.4 / pp.491-504 / 2020
  • We present AB9, a neural processor for inference acceleration. AB9 consists of a systolic tensor core (STC) neural network accelerator designed to accelerate artificial intelligence applications by exploiting the data reuse and parallelism characteristics inherent in neural networks while providing fast access to large on-chip memory. Complementing the hardware is an intuitive and user-friendly development environment that includes a simulator and an implementation flow, providing a high degree of programmability with a short development time. Along with a 40-TFLOP STC that includes 32k arithmetic units and over 36 MB of on-chip SRAM, our baseline implementation of AB9 consists of a 1-GHz quad-core setup with various other industry-standard peripheral intellectual properties. The acceleration performance and power efficiency were evaluated using YOLOv2, and the results show that AB9 has superior performance and power efficiency to those of a general-purpose graphics processing unit implementation. AB9 has been taped out in the TSMC 28-nm process with a chip size of 17 × 23 ㎟. Delivery is expected later this year.
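The throughput claim above can be put in rough perspective with a back-of-the-envelope calculation. This is only an illustrative sketch: the per-frame cost of YOLOv2 (about 63 GFLOPs at 416 × 416 input) and the utilization factor are assumptions, not figures from the paper.

```python
# Back-of-the-envelope estimate of YOLOv2 throughput on a 40-TFLOP accelerator.
# The per-frame FLOP count and utilization factor below are assumptions.
peak_tflops = 40.0                 # quoted STC peak
yolov2_gflops_per_frame = 63.0     # assumed cost of YOLOv2 at 416x416 input
utilization = 0.5                  # assumed fraction of peak actually sustained

# 1 TFLOP = 1000 GFLOPs, so frames/s = sustained GFLOPS / GFLOPs per frame.
fps = peak_tflops * 1e3 * utilization / yolov2_gflops_per_frame
print(f"~{fps:.0f} frames/s at {utilization:.0%} of peak")
```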

Object detection and distance measurement system with sensor fusion (센서 융합을 통한 물체 거리 측정 및 인식 시스템)

  • Lee, Tae-Min;Kim, Jung-Hwan;Lim, Joonhong
    • Journal of IKEEE / v.24 no.1 / pp.232-237 / 2020
  • In this paper, we propose an efficient sensor fusion method for object recognition and distance measurement in autonomous vehicles. Typical sensors used in autonomous vehicles are radar, lidar, and cameras. Among these, the lidar sensor is used to create a map around the vehicle; however, it performs poorly in bad weather and is expensive. In this paper, to compensate for these shortcomings, distance is measured with a radar sensor, which is relatively inexpensive and unaffected by snow, rain, and fog, and this is fused with a camera sensor, which has an excellent object recognition rate, to measure object distance. The fused video is transmitted to a smartphone in real time through an IP server and can be used for an autonomous driving assistance system that determines the current vehicle situation from inside and outside.
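As an illustration of the camera-radar fusion idea described above, the sketch below projects radar points into the image and attaches a range to each detection box. The calibration matrix, coordinate frames, and detection format are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch of camera-radar fusion for per-object distance, assuming
# calibrated intrinsics K and radar points already expressed in the camera
# coordinate frame (both assumptions, not the paper's actual setup).
import numpy as np

def project_radar_points(points_cam, K):
    """Project 3D radar points (N, 3) in camera coordinates to pixel coordinates."""
    uvw = (K @ points_cam.T).T          # (N, 3) homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (N, 2) pixels

def attach_distances(boxes, points_cam, K):
    """For each detection box (x1, y1, x2, y2), take the median range of the
    radar points whose projection falls inside the box."""
    uv = project_radar_points(points_cam, K)
    ranges = np.linalg.norm(points_cam, axis=1)
    distances = []
    for x1, y1, x2, y2 in boxes:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        distances.append(float(np.median(ranges[inside])) if inside.any() else None)
    return distances
```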

Identifying Process Capability Index for Electricity Distribution System through Thermal Image Analysis (열화상 이미지 분석을 통한 배전 설비 공정능력지수 감지 시스템 개발)

  • Lee, Hyung-Geun;Hong, Yong-Min;Kang, Sung-Woo
    • Journal of Korean Society for Quality Management / v.49 no.3 / pp.327-340 / 2021
  • Purpose: The purpose of this study is to propose a system that predicts whether an electricity distribution system is abnormal by analyzing the temperature of the deteriorated system. Traditional abnormality diagnosis of electricity distribution systems has mainly been limited to post-inspection. This research presents a remote monitoring system that efficiently detects thermal images of deteriorated electricity distribution systems, thereby providing safe and efficient abnormality diagnosis to electricians. Methods: In this study, an object detection algorithm (YOLOv5) is applied to 16,866 thermal images of electricity distribution systems provided by KEPCO (Korea Electric Power Corporation). The system images extracted by the algorithm are classified as abnormal or normal according to the limit temperature. Classification models (Random Forest, Support Vector Machine, and XGBoost) are applied to explore 463,053 temperature data points. The process capability index is employed to indicate the quality of the electricity distribution system. Results: This research performs a case study with transformers representing the electricity distribution systems. The case study shows accuracy 100%, precision 100%, recall 100%, and F1-score 100%. It also yields the process capability index of the transformers in the following states: steady state 99.47%, caution state 0.16%, and risk state 0.37%. Conclusion: The sum of the caution and risk states is 0.53%, which is higher than the actual failure rate, and most transformer abnormalities can be detected through this monitoring system.
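For reference, the process capability index used here follows the standard definition; the sketch below computes a one-sided index against an upper limit temperature. The limit value and the state thresholds are illustrative assumptions, since the paper's exact cut-offs are not given in the abstract.

```python
# Minimal sketch of a one-sided process capability index for transformer
# temperatures, assuming only an upper limit temperature applies.
import numpy as np

def cpk_upper(temps, usl):
    """Cpu = (USL - mean) / (3 * std) for a process with an upper spec limit only."""
    temps = np.asarray(temps, dtype=float)
    return (usl - temps.mean()) / (3 * temps.std(ddof=1))

def classify_state(cpk, caution=1.0, risk=0.67):
    """Illustrative thresholds only; the paper's steady/caution/risk cut-offs
    are not stated in the abstract."""
    if cpk >= caution:
        return "steady"
    return "caution" if cpk >= risk else "risk"

# Example with a hypothetical 90 degC limit temperature.
print(classify_state(cpk_upper([62.1, 64.5, 61.8, 63.0, 65.2], usl=90.0)))
```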

A Study on Establishment Method of Smart Factory Dataset for Artificial Intelligence (인공지능형 스마트공장 데이터셋 구축 방법에 관한 연구)

  • Park, Youn-Soo;Lee, Sang-Deok;Choi, Jeong-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.203-208 / 2021
  • At manufacturing sites, workers have input materials into the manufacturing process and left input records according to work instructions, but product LOT tracking has not been possible due to many omissions. Recently, systems that automatically record material input using RFID tags have been introduced. In particular, the initial automatic recognition rate was good, at 97 percent, by automatically generating input information through analysis of RACK (TAG) IDs and RACK input times, but the automatic recognition rate continues to decrease due to multi-material RACKs, TAG loss, and new product input issues. Improving the automatic recognition rate and real-time monitoring through the establishment of artificial-intelligence-oriented smart factory datasets is expected to contribute to increasing speed and yield (normal product ratio) in the overall production process.
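The RACK (TAG) ID and input-time analysis mentioned above can be illustrated with a minimal matching routine. Field names, data structures, and the matching window below are hypothetical; they only sketch the general idea of pairing TAG reads with work orders.

```python
# Hypothetical sketch: generate input records by matching RACK (TAG) reads to
# work orders whose scheduled input time lies within a time window.
from datetime import timedelta

def match_inputs(tag_reads, work_orders, window=timedelta(minutes=5)):
    """tag_reads: iterable of (rack_id, read_time); work_orders: list of dicts
    with 'order_id' and 'input_time' (datetime). Returns generated input records."""
    records = []
    for rack_id, read_time in tag_reads:
        candidates = [
            (abs(read_time - wo["input_time"]), wo)
            for wo in work_orders
            if abs(read_time - wo["input_time"]) <= window
        ]
        if candidates:
            _, best = min(candidates, key=lambda c: c[0])  # closest in time
            records.append({"rack_id": rack_id,
                            "order_id": best["order_id"],
                            "input_time": read_time})
    return records
```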

Field Applicability Study of Hull Crack Detection Based on Artificial Intelligence (인공지능 기반 선체 균열 탐지 현장 적용성 연구)

  • Song, Sang-ho;Lee, Gap-heon;Han, Ki-min;Jang, Hwa-sup
    • Journal of the Society of Naval Architects of Korea / v.59 no.4 / pp.192-199 / 2022
  • With the advent of autonomous ships, not only operating with a minimum crew or without a crew, but also securing the safety of ships to prevent marine accidents, is emerging as a very important issue. On-site inspection of the hull is mainly performed by the inspector's visual examination, and video information is recorded using a small camera if necessary. However, due to the shortage of inspection personnel, time and space constraints, and the pandemic situation, introducing an automated inspection system using artificial intelligence and remote inspection is becoming more important. Furthermore, research on hardware and software that enables the automated inspection system to operate normally even under the harsh environmental conditions of a ship is absolutely necessary. For automated inspection systems, it is important to review artificial intelligence technologies and equipment that can detect and classify a variety of hull failures. To address this, the hull failures must first be classified; based on various guidelines and expert opinions, we divided them into six types (Crack, Corrosion, Pitting, Deformation, Indent, Others). Object detection technology was applied to cracks, YOLOv5 was selected as an artificial intelligence model suitable for survey, and a common hull crack dataset was used for training. Based on the performance results, this study aims to present the possibility of applying artificial intelligence in the field by determining and testing the equipment required for survey.
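For context, the sketch below shows a typical YOLOv5 inference call of the kind such a crack survey could use. The weights file, image path, and class name are placeholders, not the authors' actual model or data.

```python
# Minimal inference sketch with YOLOv5 loaded via torch.hub; the custom
# weights file and image path are placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="hull_crack_best.pt")
model.conf = 0.25  # confidence threshold

results = model("hull_section.jpg")      # run detection on one survey image
detections = results.pandas().xyxy[0]    # xmin, ymin, xmax, ymax, confidence, class, name
print(detections[detections["name"] == "crack"])  # 'crack' class name is assumed
```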

Lunar Crater Detection using Deep-Learning (딥러닝을 이용한 달 크레이터 탐지)

  • Seo, Haingja;Kim, Dongyoung;Park, Sang-Min;Choi, Myungjin
    • Journal of Space Technology and Applications / v.1 no.1 / pp.49-63 / 2021
  • The exploration of the solar system is carried out through various payloads, and accordingly, many research results are emerging. We tried to apply deep learning as a method of studying the bodies of the solar system. Unlike Earth observation satellite data, solar system data differ greatly between celestial bodies, probes, and the payloads of each probe. Therefore, it may be difficult to apply one deep-learning model to such varied data, but we expect that it will be able to reduce human errors or compensate for missing parts. We have implemented a model that detects craters on the lunar surface. The model was created using Lunar Reconnaissance Orbiter Camera (LROC) images and the provided shapefile as input values, and applied to lunar surface images. Although the result was not satisfactory, the model will be applied, after image pre-processing and model modification, to images of the permanently shadowed regions of the Moon that will ultimately be acquired by ShadowCam. In addition, by attempting to apply it to Ceres and Mercury, whose surfaces are similar to the lunar surface, we intend to suggest that deep learning is another method for the study of the solar system.
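One practical step implied above is turning crater polygons from the shapefile into detection labels. The sketch below converts polygon bounds into YOLO-format boxes, assuming the geometries are already in the image's pixel coordinates; the actual LROC georeferencing step is omitted.

```python
# Sketch: convert crater polygons from a shapefile into YOLO-format labels,
# assuming geometries are already in the image's pixel coordinate system.
import geopandas as gpd

def craters_to_yolo(shapefile_path, img_w, img_h, class_id=0):
    gdf = gpd.read_file(shapefile_path)
    labels = []
    for geom in gdf.geometry:
        minx, miny, maxx, maxy = geom.bounds     # axis-aligned bounding box
        cx = (minx + maxx) / 2 / img_w           # normalized center x
        cy = (miny + maxy) / 2 / img_h           # normalized center y
        w = (maxx - minx) / img_w
        h = (maxy - miny) / img_h
        labels.append(f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return labels
```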

A Study on the Dataset Construction and Model Application for Detecting Surgical Gauze in C-Arm Imaging Using Artificial Intelligence (인공지능을 활용한 C-Arm에서 수술용 거즈 검출을 위한 데이터셋 구축 및 검출모델 적용에 관한 연구)

  • Kim, Jin Yeop;Hwang, Ho Seong;Lee, Joo Byung;Choi, Yong Jin;Lee, Kang Seok;Kim, Ho Chul
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.290-297 / 2022
  • During surgery, surgical instruments are sometimes left behind in the body by accident. Most of these are surgical gauze, so radiopaque gauze (X-ray gauze) is used to prevent accidents in which gauze is left in the body. This gauze is divided into wire and pad types. To confirm whether gauze remains in the body, it must be detected through a radiologist's reading of images taken with a mobile X-ray device. However, most operating rooms are not equipped with a mobile X-ray device but with C-Arm equipment, which produces images of poorer quality than mobile X-ray equipment, and reading them takes time. In this study, C-Arm equipment is used to acquire gauze images for detection, a dataset is built, and a detection model is selected using artificial intelligence to compensate for the relatively low image quality and to assist the reading of radiology specialists. mAP@50 and detection time are used as indicators for performance evaluation. The results show that the two-class gauze detection dataset is more accurate; the YOLOv5 model achieves an mAP@50 of 93.4% with a detection time of 11.7 ms.
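The mAP@50 figure quoted above rests on the IoU ≥ 0.5 matching rule; the sketch below shows that rule for a single predicted gauze box. Full mAP additionally averages precision over recall levels, which is omitted here.

```python
# Sketch of the IoU >= 0.5 matching rule underlying mAP@50: a predicted box
# counts as a true positive if it overlaps a ground-truth box with IoU >= 0.5.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) in pixel coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def is_true_positive(pred_box, gt_boxes, thresh=0.5):
    return any(iou(pred_box, gt) >= thresh for gt in gt_boxes)
```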

Object detection within the region of interest based on gaze estimation (응시점 추정 기반 관심 영역 내 객체 탐지)

  • Seok-Ho Han;Hoon-Seok Jang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.3 / pp.117-122 / 2023
  • Gaze estimation, which automatically recognizes where a user is currently staring, and object detection based on the estimated gaze point can be a more accurate and efficient way to understand human visual behavior. In this paper, we propose a method to detect objects within a region of interest around the gaze point. Specifically, after estimating the 3D gaze point, a region of interest based on the estimated gaze point is created so that object detection occurs only within that region. In our experiments, we compared the performance of general object detection and the proposed object detection based on the region of interest, and found that the processing times per frame were 1.4 ms and 1.1 ms, respectively, indicating that the proposed method is faster in terms of processing speed.
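The speed gain reported above comes from running detection only on a crop around the gaze point. The sketch below illustrates that idea; the ROI size is arbitrary and `detector` stands in for any detection model, since the gaze-estimation step itself is assumed to be given.

```python
# Sketch: restrict detection to a region of interest around an estimated gaze
# point, then map the resulting boxes back to full-frame coordinates.
def detect_in_roi(frame, gaze_xy, detector, roi_size=300):
    """frame: HxWx3 image array; gaze_xy: (x, y) gaze point in pixels;
    detector: callable returning boxes (x1, y1, x2, y2) in crop coordinates."""
    h, w = frame.shape[:2]
    gx, gy = gaze_xy
    x1 = max(0, int(gx - roi_size // 2))
    y1 = max(0, int(gy - roi_size // 2))
    x2 = min(w, x1 + roi_size)
    y2 = min(h, y1 + roi_size)
    crop = frame[y1:y2, x1:x2]
    boxes = detector(crop)                      # boxes relative to the crop
    # shift boxes back into full-frame coordinates
    return [(bx1 + x1, by1 + y1, bx2 + x1, by2 + y1) for bx1, by1, bx2, by2 in boxes]
```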

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes the automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were initially tracked with a multi-object tracking algorithm, which combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of the head of each cow were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and rumination time and number of chews were calculated. The rumination recognition algorithm was used to analyze the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. Rumination identification and the calculation of rumination information are performed automatically by computer with no manual intervention. This provides a new contactless rumination identification method for multiple cattle and technical support for smart pastures.
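The frame-difference step described above can be illustrated with a simple chew counter over a tracked head crop. The motion threshold is an illustrative value, not the paper's calibrated parameter.

```python
# Sketch of the frame-difference idea for counting chews on one cow's tracked
# head crop: a chew is counted at each upward crossing of a motion threshold.
import cv2
import numpy as np

def chew_count(head_crops, motion_thresh=8.0):
    """head_crops: list of equally sized grayscale images of one cow's head."""
    diffs = [
        float(np.mean(cv2.absdiff(head_crops[i - 1], head_crops[i])))
        for i in range(1, len(head_crops))
    ]
    chews, above = 0, False
    for d in diffs:
        if d > motion_thresh and not above:
            chews += 1          # one chew per upward threshold crossing
        above = d > motion_thresh
    return chews, diffs
```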

Artificial Intelligence Based LOS Determination for the Cyclists-Pedestrians Mixed Road Using Mobile Mapping System (인공지능 기반 MMS를 활용한 자전거보행자겸용도로 서비스 수준 산정)

  • Tae-Young Lee;Myung-Sik Do
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.3 / pp.62-72 / 2023
  • Recently, the importance of monitoring and management measures for bicycle road-related facilities has been increasing. However, research on the monitoring and evaluation of users' safety and convenience in walking spaces, including bicycle paths, is insufficient. In this study, we construct health monitoring data for cyclist-pedestrian mixed roads using a mobile mapping system and propose a plan to calculate the level of service of the mixed roads from the perspective of pedestrians and cyclists using artificial intelligence based object detection techniques. The monitoring and level-of-service calculation method for cyclist-pedestrian mixed roads proposed in this study is expected to be used as basic information for planning and management, such as maintenance and reconstruction of walking spaces, in preparation for the future increase in electric bicycles and personal mobility.
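As a rough illustration of how object detection counts could feed a level-of-service calculation, the sketch below converts per-frame pedestrian and cyclist counts into an average density for a monitored segment. The segment dimensions are placeholders, and the mapping from density to an LOS grade (normally a table lookup) is intentionally left out, since the paper's criteria are not given in the abstract.

```python
# Illustrative only: average pedestrian + cyclist density for one monitored
# mixed-road segment; the segment length and effective width are placeholders.
def density(counts_per_frame, segment_length_m, effective_width_m):
    """Average number of detected users per square metre of walkway."""
    avg_count = sum(counts_per_frame) / len(counts_per_frame)
    return avg_count / (segment_length_m * effective_width_m)

# Example: 30 frames of counts on a hypothetical 20 m x 3 m segment.
print(density([4, 5, 6] * 10, segment_length_m=20, effective_width_m=3))
```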