• Title/Summary/Keyword: Real-time classification


Hierarchical Neural Network for Real-time Medicine-bottle Classification (실시간 약통 분류를 위한 계층적 신경회로망)

  • Kim, Jung-Joon;Kim, Tae-Hun;Ryu, Gang-Soo;Lee, Dae-Sik;Lee, Jong-Hak;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.3 / pp.226-231 / 2013
  • The matching algorithm for automatic drug packaging must determine whether a canister has been refilled with exactly the right medicine. In this paper, we propose a hierarchical neural network with upper and lower layers that can classify many types of medicine bottles in real time, preventing accidental medication errors. A few low-dimensional feature vectors are extracted from the label images carrying medicine-bottle information. The extracted feature vectors are used to train the lower-layer MLP (Multi-Layer Perceptron); the output of the trained MLP's middle layer is then used as the input for training the upper-layer MLP. In tests on images of 100 different medicine bottles rotated up to 30 degrees to the left and right, the proposed hierarchical neural network shows good classification performance and real-time operation.
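The layered structure described above can be sketched as two small MLPs wired in sequence, with the lower network's hidden activations feeding the upper network. This is a minimal forward-pass illustration only; the layer sizes (8 features, 16 hidden units, 100 bottle classes) are hypothetical, not the paper's.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP forward pass; returns (hidden, output)."""
    h = np.tanh(x @ W1 + b1)          # hidden-layer activations
    y = np.tanh(h @ W2 + b2)          # output activations
    return h, y

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))           # hypothetical 8-D label feature vector

# Lower-layer MLP: 8 features -> 16 hidden -> 10 coarse outputs
W1a, b1a = rng.normal(size=(8, 16)), np.zeros(16)
W2a, b2a = rng.normal(size=(16, 10)), np.zeros(10)
h_lower, _ = mlp_forward(x, W1a, b1a, W2a, b2a)

# Upper-layer MLP takes the lower network's hidden output as its input
W1b, b1b = rng.normal(size=(16, 32)), np.zeros(32)
W2b, b2b = rng.normal(size=(32, 100)), np.zeros(100)  # 100 bottle classes
_, y_upper = mlp_forward(h_lower, W1b, b1b, W2b, b2b)
```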

A Study on The Classification of Target-objects with The Deep-learning Model in The Vision-images (딥러닝 모델을 이용한 비전이미지 내의 대상체 분류에 관한 연구)

  • Cho, Youngjoon;Kim, Jongwon
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.20-25 / 2021
  • A target-object classification method was implemented using a deep-learning-based detection model on real-time images. The detection model relied on extensive data collection and machine learning to classify similar target objects. A recognition model was implemented by changing the processing structure of the detection model and combining it with a newly developed vision-processing module. To classify the target objects, identity and similarity measures were defined and applied to the detection model. Industrial use of the recognition model was also considered by verifying its effectiveness on real-time images of an actual soccer game. The detection model and the newly constructed recognition model were compared and verified on real-time images, and further research was conducted to optimize the recognition model for a real-time environment.
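One way to read the identity/similarity step is as matching a detection's feature vector against known reference targets. The sketch below uses cosine similarity and an arbitrary threshold; both the metric and the reference IDs are illustrative assumptions, not the paper's definitions.

```python
import math

def classify_detection(box_features, references, sim_threshold=0.8):
    """Assign a detected object the identity of the most similar
    reference target, or 'unknown' below the threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best_id, best_sim = "unknown", sim_threshold
    for ref_id, ref_vec in references.items():
        s = cosine(box_features, ref_vec)
        if s > best_sim:
            best_id, best_sim = ref_id, s
    return best_id

# Hypothetical per-player reference features from a soccer broadcast
refs = {"player_7": [0.9, 0.1, 0.2], "player_10": [0.1, 0.9, 0.3]}
ident = classify_detection([0.85, 0.15, 0.25], refs)
```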

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classifying and tracking vulnerable road users such as pedestrians, bicycles, and motorcycles is essential for autonomous driving in complex urban environments. A real-time image object detector, YOLO, and an object tracker built on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinate frame of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered by Euclidean distance and the clusters are associated using GNN. In addition, the cluster states, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated in real-vehicle tests in an urban environment.
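The first step, projecting a pixel into the subject vehicle's local frame with a homography, amounts to a single homogeneous matrix multiply and normalization. The matrix values below are made up for illustration; in practice the homography comes from camera-to-ground calibration.

```python
import numpy as np

def pixel_to_local(H, u, v):
    """Project a pixel (u, v) into the subject vehicle's local frame
    using a 3x3 homography matrix H (assumed pre-calibrated)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]               # normalize homogeneous coordinates

# Hypothetical homography for illustration only
H = np.array([[0.02,  0.0,  -6.4],
              [0.0,  -0.05, 24.0],
              [0.0,   0.001, 1.0]])
x, y = pixel_to_local(H, 320, 240)    # bounding-box footpoint in pixels
```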

Development of a Deep Learning Algorithm for Small Object Detection in Real-Time (실시간 기반 매우 작은 객체 탐지를 위한 딥러닝 알고리즘 개발)

  • Wooseong Yeo;Meeyoung Park
    • Journal of the Korean Society of Industry Convergence / v.27 no.4_2 / pp.1001-1007 / 2024
  • Deep learning algorithms for real-time object detection play a crucial role in applications such as autonomous driving, traffic monitoring, health care, and water quality monitoring. The size of small objects in particular significantly impacts the accuracy of detection models, and data containing small objects can lead to underfitting. This study therefore developed a deep learning model capable of quickly and accurately detecting small objects. The RE-SOD (Residual block based Small Object Detector) developed in this research enhances detection performance for small objects through RGB-separation preprocessing and residual blocks. The model achieved an accuracy of 1.0 in image classification and an mAP50-95 of 0.944 in object detection, and its performance was validated by comparison with real-time detection models such as YOLOv5, YOLOv7, and YOLOv8.
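The two ingredients named in the abstract can be sketched at a toy scale: RGB separation splits an image into per-channel planes, and a residual block adds a transformed signal back onto its input via a skip connection. A single dense layer stands in for the convolutional layers here; shapes and weights are illustrative, not RE-SOD's.

```python
import numpy as np

def rgb_separate(img):
    """Split an HxWx3 image into three single-channel planes —
    the RGB-separation preprocessing step described in the abstract."""
    return img[..., 0], img[..., 1], img[..., 2]

def residual_block(x, weight):
    """Minimal residual block on a feature vector: ReLU(Wx) + x.
    The skip connection preserves fine (small-object) detail."""
    fx = np.maximum(0.0, x @ weight)
    return fx + x

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(4, 4, 3))
r, g, b = rgb_separate(img)

x = rng.normal(size=(8,))
w = rng.normal(size=(8, 8))
y = residual_block(x, w)
```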

Obstacle Classification Method using Multi Feature Comparison Based on Single 2D LiDAR (단일 2차원 라이다 기반의 다중 특징 비교를 이용한 장애물 분류 기법)

  • Lee, Moohyun;Hur, Soojung;Park, Yongwan
    • Journal of Institute of Control, Robotics and Systems / v.22 no.4 / pp.253-265 / 2016
  • We propose an obstacle classification method using multiple decision factors and decision sections based on a single 2D LiDAR. Existing single-2D-LiDAR methods offer two specific advantages, accuracy and low computation time, but they struggle to classify obstacle type, which prevents accurate path planning. A method classifying obstacle type from width data was proposed to overcome this, but width alone is not sufficient for accurate classification. The algorithm proposed in this paper classifies obstacle type by comparing decision factors against decision sections, both determined from the width, the standard deviation of distance, the average normalized intensity, and the standard deviation of normalized intensity. Experiments with a real autonomous vehicle in a real environment showed reduced computation time compared with the existing 2D LiDAR-based method, demonstrating the feasibility of obstacle-type classification with a single 2D LiDAR.
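The decision-factor/decision-section comparison can be read as checking whether each measured feature falls inside a per-class interval and voting. The interval values and class names below are purely hypothetical stand-ins for the paper's learned sections.

```python
def classify_obstacle(features, decision_sections):
    """Compare each decision factor against per-class decision sections
    (intervals); return the class matching the most factors."""
    best, best_score = "unknown", 0
    for cls, sections in decision_sections.items():
        score = sum(lo <= features[name] <= hi
                    for name, (lo, hi) in sections.items())
        if score > best_score:
            best, best_score = cls, score
    return best

# Hypothetical decision sections (units: m for width, normalized intensity)
sections = {
    "pedestrian": {"width": (0.3, 0.9), "dist_std": (0.02, 0.15),
                   "intensity_mean": (0.2, 0.6), "intensity_std": (0.0, 0.2)},
    "vehicle":    {"width": (1.5, 2.5), "dist_std": (0.0, 0.05),
                   "intensity_mean": (0.4, 0.9), "intensity_std": (0.0, 0.1)},
}
obs = {"width": 0.6, "dist_std": 0.08,
       "intensity_mean": 0.4, "intensity_std": 0.1}
label = classify_obstacle(obs, sections)
```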

Night-time Vehicle Detection Method Using Convolutional Neural Network (합성곱 신경망 기반 야간 차량 검출 방법)

  • Park, Woong-Kyu;Choi, Yeongyu;KIM, Hyun-Koo;Choi, Gyu-Sang;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.2 / pp.113-120 / 2017
  • In this paper, we present a night-time vehicle detection method using CNN (Convolutional Neural Network) classification. Camera-based night-time vehicle detection plays an important role in various advanced driver assistance systems (ADAS), such as automatic head-lamp control. The method consists mainly of thresholding, labeling, and classification steps; the classification step is implemented with an existing CIFAR-10 CNN model. Through simulations on real road video, we show that CNN classification is a good alternative for night-time vehicle detection.
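The first two steps of the pipeline, thresholding and labeling, can be sketched in a few lines: binarize the image so bright lamp blobs survive, then group connected bright pixels into candidate regions that would be handed to the CNN. The flood-fill labeling below is a plain stand-in, and the threshold value is arbitrary.

```python
def threshold(img, t):
    """Binarize a grayscale image: bright blobs (lamps) become 1."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def label_components(binary):
    """4-connected component labeling via flood fill; each labeled blob
    would be cropped and passed to the CNN classification step."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                next_label += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] and not labels[y][x]:
                        labels[y][x] = next_label
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, next_label

img = [[0, 200, 0,   0],
       [0, 210, 0, 180],
       [0,   0, 0, 190]]
labels, n = label_components(threshold(img, 128))  # two lamp candidates
```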

Real-Time Automated Cardiac Health Monitoring by Combination of Active Learning and Adaptive Feature Selection

  • Bashir, Mohamed Ezzeldin A.;Shon, Ho Sun;Lee, Dong Gyu;Kim, Hyeongsoo;Ryu, Keun Ho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.99-118 / 2013
  • Electrocardiograms (ECGs) are widely used by clinicians to identify the functional status of the heart, so there is considerable interest in automated systems for real-time arrhythmia monitoring. However, intra- and inter-patient variability as well as the computational limits of real-time monitoring pose significant challenges for practical implementations. The former requires that the classification model be adjusted continuously; the latter requires reducing the number and types of ECG features, and thus the computational burden, needed to classify different arrhythmias. We propose the use of adaptive learning to automatically train the classifier on up-to-date ECG data, and adaptive feature selection to define unique feature subsets pertinent to different types of arrhythmia. Experimental results show that this hybrid technique outperforms conventional approaches and is therefore a promising new intelligent diagnostic tool.
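The per-arrhythmia feature-subset idea can be illustrated as a lookup that keeps only the features relevant to the suspected class before classification, shrinking the real-time workload. Feature names and subset contents here are illustrative assumptions, not the paper's selections.

```python
def select_features(beat_features, arrhythmia_type, subsets):
    """Keep only the feature subset pertinent to one arrhythmia type,
    reducing the per-beat computational burden."""
    return {k: beat_features[k] for k in subsets[arrhythmia_type]}

# Hypothetical per-class subsets (e.g. PVC = premature ventricular contraction)
subsets = {
    "PVC": ["qrs_width", "rr_interval"],
    "APC": ["rr_interval", "p_wave_amp"],
}
beat = {"qrs_width": 0.14, "rr_interval": 0.62,
        "p_wave_amp": 0.1, "t_wave_amp": 0.3}
pvc_feats = select_features(beat, "PVC", subsets)
```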

Real-Time Object Segmentation in Image Sequences (연속 영상 기반 실시간 객체 분할)

  • Kang, Eui-Seon;Yoo, Seung-Hun
    • The KIPS Transactions:PartB / v.18B no.4 / pp.173-180 / 2011
  • This paper shows an approach for real-time object segmentation on a GPU (Graphics Processing Unit) using CUDA (Compute Unified Device Architecture). Many applications, such as monitoring systems, motion analysis, and object tracking, require real-time processing, and CPU-only object segmentation is too slow to meet that requirement. NVIDIA's CUDA platform enables general-purpose parallel computation beyond the traditional limits of graphics hardware. We use adaptive Gaussian mixture background modeling in the object-extraction step and CCL (Connected Component Labeling) for classification. GPU and CPU implementations were compared on a 2.4 GHz Core 2 Quad processor; the GPU version achieved a speedup of 3x-4x over the CPU version.
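The object-extraction step can be sketched with a running-average background model, a single-Gaussian simplification of the adaptive Gaussian mixture the paper uses, plus a deviation threshold for the foreground mask. The learning rate and threshold below are arbitrary illustrative values.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model (single-Gaussian simplification
    of the adaptive Gaussian mixture model)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, t=30):
    """Pixels deviating from the background model become foreground;
    the resulting mask would then be fed to CCL for classification."""
    return [[1 if abs(f - b) > t else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

bg = [[10.0, 10.0], [10.0, 10.0]]     # learned background (2x2 toy frame)
frame = [[12.0, 200.0], [10.0, 11.0]] # new frame with one bright object
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

On the GPU, both functions are embarrassingly parallel per pixel, which is why the paper's CUDA port yields a 3x-4x speedup.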

A Machine Learning-based Real-time Monitoring System for Classification of Elephant Flows on KOREN

  • Akbar, Waleed;Rivera, Javier J.D.;Ahmed, Khan T.;Muhammad, Afaq;Song, Wang-Cheol
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2801-2815 / 2022
  • With the advent of the Software Defined Network (SDN) architecture, many organizations are shifting towards this paradigm. SDN brings more control, higher scalability, and greater elasticity, spontaneously changing the network configuration according to dynamic network requirements within constrained environments. Operating this type of network efficiently therefore requires a monitoring system that covers both physical and virtual entities. In this manuscript, we propose a real-time monitoring system for data collection and visualization built from Prometheus, node exporter, and Grafana. A node exporter is configured on the physical devices to collect resource-utilization logs for physical and virtual entities; a real-time Prometheus database collects and stores the data from all the exporters; and Grafana is attached to Prometheus to visualize the current network status and device provisioning. The monitoring system is deployed on the physical infrastructure of the KOREN topology. Data collected by the system is pre-processed and restructured into a dataset, and machine learning techniques are applied to the formatted dataset to identify elephant flows. A Random Forest is trained on our generated labeled dataset, and the classification model's performance is verified using accuracy metrics.
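Building the labeled dataset the Random Forest trains on typically means tagging each flow record as an elephant or a mouse; a common heuristic is a byte-count threshold. The threshold value and flow fields below are illustrative assumptions, not the paper's labeling rule.

```python
def label_flows(flows, byte_threshold=10_000_000):
    """Label each flow record as elephant (1) or mouse (0) by total
    bytes transferred — one way to build a labeled training set."""
    return [dict(f, label=int(f["bytes"] >= byte_threshold)) for f in flows]

# Hypothetical flow records pre-processed from node-exporter metrics
flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 52_000_000, "pkts": 40_000},
    {"src": "10.0.0.3", "dst": "10.0.0.4", "bytes": 4_200, "pkts": 12},
]
labeled = label_flows(flows)
```

A Random Forest (e.g. scikit-learn's `RandomForestClassifier`) would then be fit on feature columns such as bytes and packet counts against these labels.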

Integrated GUI Environment of Parallel Fuzzy Inference System for Pattern Classification of Remote Sensing Images

  • Lee, Seong-Hoon;Lee, Sang-Gu;Son, Ki-Sung;Kim, Jong-Hyuk;Lee, Byung-Kwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.2 / pp.133-138 / 2002
  • In this paper, we propose an integrated GUI environment for a parallel fuzzy inference system for pattern classification of remote sensing data. Since 4 fuzzy variables in the condition part and 104 fuzzy rules are used, a real-time, parallel approach is required. For fast fuzzy computation, we use the scan-line conversion algorithm to convert the lines of each fuzzy linguistic term to the closest integer pixels. We designed 4 fuzzy processor units operating in parallel on an FPGA. The GUI environment covers PCI transmission, image data pre-processing, integer pixel mapping, and fuzzy membership tuning. This system can be used in pattern classification systems requiring rapid inference in real time.
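The core of such a fuzzy inference step is evaluating each rule's firing strength from the membership degrees of its condition variables. The sketch below uses triangular membership functions and min-AND combination over the 4 condition variables; the term parameters are illustrative, not the paper's tuned memberships.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rule(inputs, terms):
    """Min-AND firing strength of one fuzzy rule over 4 condition
    variables; in hardware, the 4 memberships are computed in parallel."""
    return min(tri(x, *t) for x, t in zip(inputs, terms))

terms = [(0.0, 0.5, 1.0)] * 4          # one linguistic term per variable
strength = fire_rule([0.5, 0.25, 0.75, 0.5], terms)
```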