• Title/Summary/Keyword: Convolutional Network


The Study on Effect of sEMG Sampling Frequency on Learning Performance in CNN based Finger Number Recognition (CNN 기반 한국 숫자지화 인식 응용에서 표면근전도 샘플링 주파수가 학습 성능에 미치는 영향에 관한 연구)

  • Gerelbat BatGerel;Chun-Ki Kwon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.51-56 / 2023
  • This study investigates the effect of sEMG sampling frequency on CNN learning performance in a Korean finger number recognition application. A higher sEMG sampling frequency produces a larger input data size and lengthens the CNN's training time, which makes real-time system implementation more difficult and more costly; there may therefore be an appropriate sampling frequency for collecting sEMG signals. To this end, this work chose five sampling frequencies (1,024 Hz, 512 Hz, 256 Hz, 128 Hz, and 64 Hz) and investigated CNN learning performance with sEMG data acquired at each frequency. The comparative study shows that the CNN recognized Korean finger numbers one to five with 100% accuracy at every sampling frequency, and that the CNN trained on sEMG signals collected at 256 Hz required the shortest learning time to reach the epoch at which the Korean finger number gestures were recognized with 100% accuracy.
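
The abstract does not spell out the network or the downsampling step, so the following is a minimal sketch of the idea under stated assumptions: a single-channel sEMG window is decimated from 1,024 Hz to a lower rate with SciPy and classified by a small, hypothetical 1D CNN in PyTorch. The architecture, window length, and channel count are illustrative, not the paper's.

```python
# Hypothetical sketch: downsample an sEMG window and classify it with a small 1D CNN.
# The architecture, window length, and channel count are assumptions, not the paper's.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import decimate

def downsample(emg: np.ndarray, base_hz: int = 1024, target_hz: int = 256) -> np.ndarray:
    """Reduce the sampling rate of a 1-D sEMG window by an integer factor."""
    factor = base_hz // target_hz
    return decimate(emg, factor) if factor > 1 else emg

class TinyEMGCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                  # x: (batch, 1, samples)
        return self.classifier(self.features(x).flatten(1))

# One second of synthetic sEMG at 1,024 Hz, downsampled to 256 Hz before inference.
window = downsample(np.random.randn(1024), base_hz=1024, target_hz=256)
logits = TinyEMGCNN()(torch.tensor(window, dtype=torch.float32).view(1, 1, -1))
print(logits.shape)                        # torch.Size([1, 5]): one score per finger number
```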

Diagnostic Performance of a New Convolutional Neural Network Algorithm for Detecting Developmental Dysplasia of the Hip on Anteroposterior Radiographs

  • Hyoung Suk Park;Kiwan Jeon;Yeon Jin Cho;Se Woo Kim;Seul Bi Lee;Gayoung Choi;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon;Woo Sun Kim;Young Jin Ryu;Jae-Yeon Hwang
    • Korean Journal of Radiology / v.22 no.4 / pp.612-623 / 2021
  • Objective: To evaluate the diagnostic performance of a deep learning algorithm for the automated detection of developmental dysplasia of the hip (DDH) on anteroposterior (AP) radiographs. Materials and Methods: Of 2601 hip AP radiographs, 5076 cropped unilateral hip joint images were used to construct a dataset that was further divided into training (80%), validation (10%), and test (10%) sets. Three radiologists were asked to label the hip images as normal or DDH. To investigate the diagnostic performance of the deep learning algorithm, we generated receiver operating characteristic (ROC) and precision-recall curve (PRC) plots, calculated the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and compared them with the performance of radiologists with different levels of experience. Results: The area under the ROC plot generated by the deep learning algorithm and radiologists was 0.988 and 0.988-0.919, respectively. The area under the PRC plot generated by the deep learning algorithm and radiologists was 0.973 and 0.618-0.958, respectively. The sensitivity, specificity, PPV, and NPV of the proposed deep learning algorithm were 98.0%, 98.1%, 84.5%, and 99.8%, respectively. There was no significant difference in the diagnosis of DDH between the algorithm and the radiologist with experience in pediatric radiology (p = 0.180). However, the proposed model showed higher sensitivity, specificity, and PPV compared to the radiologist without experience in pediatric radiology (p < 0.001). Conclusion: The proposed deep learning algorithm provided an accurate diagnosis of DDH on hip radiographs, which was comparable to the diagnosis by an experienced radiologist.
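
The reported metrics (areas under the ROC and PRC plots, sensitivity, specificity, PPV, NPV) can be reproduced from predicted scores and binary labels; the sketch below is a generic scikit-learn illustration with dummy data, not the authors' evaluation code.

```python
# Generic sketch of the reported evaluation metrics (ROC AUC, PR AUC, sensitivity,
# specificity, PPV, NPV); the labels and scores here are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])            # 1 = DDH, 0 = normal (dummy labels)
y_score = np.array([0.1, 0.3, 0.9, 0.8, 0.2, 0.7, 0.4, 0.05])
y_pred = (y_score >= 0.5).astype(int)                   # threshold the predicted scores

auc_roc = roc_auc_score(y_true, y_score)                # area under the ROC curve
auc_pr = average_precision_score(y_true, y_score)       # area under the PR curve
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                                    # positive predictive value
npv = tn / (tn + fn)                                    # negative predictive value
print(auc_roc, auc_pr, sensitivity, specificity, ppv, npv)
```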

Clinical Validation of a Deep Learning-Based Hybrid (Greulich-Pyle and Modified Tanner-Whitehouse) Method for Bone Age Assessment

  • Kyu-Chong Lee;Kee-Hyoung Lee;Chang Ho Kang;Kyung-Sik Ahn;Lindsey Yoojin Chung;Jae-Joon Lee;Suk Joo Hong;Baek Hyun Kim;Euddeum Shim
    • Korean Journal of Radiology / v.22 no.12 / pp.2017-2025 / 2021
  • Objective: To evaluate the accuracy and clinical efficacy of a hybrid Greulich-Pyle (GP) and modified Tanner-Whitehouse (TW) artificial intelligence (AI) model for bone age assessment. Materials and Methods: A deep learning-based model was trained on an open dataset of multiple ethnicities. A total of 102 hand radiographs (51 male and 51 female; mean age ± standard deviation = 10.95 ± 2.37 years) from a single institution were selected for external validation. Three human experts performed bone age assessments based on the GP atlas to develop a reference standard. Two study radiologists performed bone age assessments with and without AI model assistance in two separate sessions, for which the reading time was recorded. The performance of the AI software was assessed by comparing the mean absolute difference between the AI-calculated bone age and the reference standard. The reading time was compared between reading with and without AI using a paired t test. Furthermore, the reliability between the two study radiologists' bone age assessments was assessed using intraclass correlation coefficients (ICCs), and the results were compared between reading with and without AI. Results: The bone ages assessed by the experts and the AI model were not significantly different (11.39 ± 2.74 years and 11.35 ± 2.76 years, respectively, p = 0.31). The mean absolute difference was 0.39 years (95% confidence interval, 0.33-0.45 years) between the automated AI assessment and the reference standard. The mean reading time of the two study radiologists was reduced from 54.29 to 35.37 seconds with AI model assistance (p < 0.001). The ICC of the two study radiologists slightly increased with AI model assistance (from 0.945 to 0.990). Conclusion: The proposed AI model was accurate for assessing bone age. Furthermore, this model appeared to enhance the clinical efficacy by reducing the reading time and improving the inter-observer reliability.
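
As a rough illustration of the statistics used above (mean absolute difference with a 95% confidence interval, and a paired t test on reading times), here is a short SciPy sketch with placeholder values; it is not the study's analysis script.

```python
# Sketch of the statistical comparisons described above, using placeholder values:
# mean absolute difference of bone age vs. the reference standard, and a paired
# t test on reading times with and without AI assistance. Not the authors' code.
import numpy as np
from scipy import stats

ai_age = np.array([10.2, 12.5, 9.8, 11.1])            # dummy AI bone-age estimates (years)
ref_age = np.array([10.0, 12.1, 10.3, 11.4])          # dummy reference-standard ages
abs_diff = np.abs(ai_age - ref_age)
mad = abs_diff.mean()
ci = stats.t.interval(0.95, len(abs_diff) - 1,
                      loc=mad, scale=stats.sem(abs_diff))   # 95% CI of the mean

time_without_ai = np.array([55.0, 60.2, 48.9, 53.1])        # dummy reading times (s)
time_with_ai = np.array([36.1, 40.0, 31.5, 33.9])
t_stat, p_value = stats.ttest_rel(time_without_ai, time_with_ai)
print(mad, ci, p_value)
```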

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning (연속학습을 활용한 경량 온-디바이스 AI 기반 실시간 기계 결함 진단 시스템 설계 및 구현)

  • Youngjun Kim;Taewan Kim;Suhyun Kim;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies have not considered the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system includes a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the performance comparison between continual learning algorithms, we found that the replay-based algorithms outperformed the regularization-based algorithms and that the experience replay (ER) algorithm achieved the best diagnostic accuracy. We further tuned the number and length of data samples used for the memory buffer of the ER algorithm to maximize its performance and confirmed that the ER algorithm performs better when a longer data length is used. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B device.
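
The experience replay (ER) idea described above amounts to keeping a small memory buffer of past samples and mixing them into each new training batch. The sketch below illustrates one common variant (a reservoir-sampled buffer); buffer capacity, batch composition, and data types are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of experience replay (ER) for continual fault diagnosis: a small
# memory buffer of past (signal, label) samples is mixed into each new training batch.
import random
import torch

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []          # list of (signal_tensor, label) pairs
        self.seen = 0

    def add(self, sample):
        """Reservoir sampling keeps a uniform subset of everything seen so far."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = sample

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

def er_batch(new_batch, buffer: ReplayBuffer, replay_k: int = 8):
    """Concatenate the incoming batch with samples replayed from earlier tasks."""
    combined = list(new_batch) + buffer.sample(replay_k)
    signals = torch.stack([x for x, _ in combined])
    labels = torch.tensor([y for _, y in combined])
    return signals, labels

# Usage sketch: buffer = ReplayBuffer(capacity=256); after each step, call
# buffer.add(sample) for every new sample, and train on er_batch(new_batch, buffer).
```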

Object Pose Estimation and Motion Planning for Service Automation System (서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획)

  • Youngwoo Kwon;Dongyoung Lee;Hosun Kang;Jiwook Choi;Inho Lee
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being utilized and researched in various fields. The application of these robots varies depending on the characteristics of the grippers attached to the end of the collaborative robots. To grasp a variety of objects, a gripper with a high degree of freedom is required. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and create grasping points. The grasping points are grasped by the multi-degree-of-freedom gripper, and experiments are conducted to recognize barcodes, a key task in service automation. To recognize objects, we used a CNN (convolutional neural network)-based algorithm together with a point cloud to estimate each object's 6D pose. Using the recognized object's 6D pose information, we create grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and recording the success and failure of barcode recognition to demonstrate the effectiveness of the proposed system.
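
As a small illustration of how an estimated 6D pose can be turned into a grasp point, the NumPy sketch below composes a homogeneous transform with an object-frame grasp offset; all poses, offsets, and frame names are placeholders, and the paper's actual grasp-point generation is not reproduced here.

```python
# Illustrative sketch: turning an estimated 6D object pose (rotation + translation)
# into a grasp point in the robot base frame. All values are placeholders.
import numpy as np

def pose_to_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Dummy pose of the object in the camera frame and a grasp offset in the object frame.
R_obj = np.eye(3)
t_obj = np.array([0.10, -0.05, 0.40])                 # metres
T_cam_obj = pose_to_matrix(R_obj, t_obj)
grasp_in_obj = np.array([0.0, 0.0, 0.02, 1.0])        # 2 cm above the object origin

T_base_cam = np.eye(4)                                # camera extrinsics (assumed identity)
grasp_in_base = T_base_cam @ T_cam_obj @ grasp_in_obj
print(grasp_in_base[:3])                              # grasp point in the base frame
```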

A comparison of ATR-FTIR and Raman spectroscopy for the non-destructive examination of terpenoids in medicinal plants essential oils

  • Rahul Joshi;Sushma Kholiya;Himanshu Pandey;Ritu Joshi;Omia Emmanuel;Ameeta Tewari;Taehyun Kim;Byoung-Kwan Cho
    • Korean Journal of Agricultural Science / v.50 no.4 / pp.675-696 / 2023
  • Terpenoids, also referred to as terpenes, are a large family of naturally occurring chemical compounds present in the essential oils extracted from medicinal plants. In this study, a non-destructive methodology was created by combining ATR-FT-IR (attenuated total reflectance-Fourier transform infrared) and Raman spectroscopy for terpenoid assessment in medicinal plant essential oils from ten different geographical locations. Partial least squares regression (PLSR) and support vector regression (SVR) were used as machine learning methodologies, and a deep learning-based model, a one-dimensional convolutional neural network (1D CNN), was also developed for model comparison. With a correlation coefficient (R2) of 0.999 and the lowest RMSEP (root mean squared error of prediction) of 0.006% for the prediction datasets, the SVR model created for FT-IR spectral data outperformed both the PLSR and 1D CNN models. On the other hand, for the classification of essential oils derived from plants collected from various geographical regions, the SVM (support vector machine) classification model created for Raman spectroscopic data obtained an overall classification accuracy of 0.997%, which was superior to that of the FT-IR data (0.986%). Based on these results, we propose that FT-IR spectroscopy, when coupled with the SVR model, has significant potential for the non-destructive identification of terpenoids in essential oils compared with destructive chemical analysis methods.
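
A minimal scikit-learn sketch of the kind of chemometric comparison described above (PLSR vs. SVR, scored by RMSEP and R2 on a held-out set) follows; the spectra are synthetic, and the preprocessing and hyperparameters are assumptions, not the study's settings.

```python
# Hedged sketch of a PLSR vs. SVR comparison on spectral data with RMSEP and R2
# on a held-out set. Spectra and target values are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))                          # 100 spectra x 500 wavenumbers
y = X[:, 10] * 0.5 + rng.normal(scale=0.1, size=100)     # synthetic terpenoid content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("PLSR", PLSRegression(n_components=5)), ("SVR", SVR(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    y_pred = np.ravel(model.predict(X_te))
    rmsep = np.sqrt(mean_squared_error(y_te, y_pred))    # root mean squared error of prediction
    print(name, "RMSEP:", round(rmsep, 4), "R2:", round(r2_score(y_te, y_pred), 4))
```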

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers / v.66 no.4 / pp.27-39 / 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study aims to explore an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis and the images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model has achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
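
The RGB color-distribution step can be illustrated with a few lines of OpenCV, as sketched below; the file name is a placeholder, and this is not the authors' exact pipeline.

```python
# Minimal sketch of an RGB color-distribution analysis with OpenCV: compute per-channel
# mean intensities and their ratios for a soil image. The file name is a placeholder.
import cv2
import numpy as np

image_bgr = cv2.imread("soil_sample.jpg")               # OpenCV loads images as BGR
if image_bgr is None:
    raise FileNotFoundError("soil_sample.jpg not found")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

channel_means = image_rgb.reshape(-1, 3).mean(axis=0)   # mean R, G, B intensities
ratios = channel_means / channel_means.sum()            # color distribution ratios
print(dict(zip(["R", "G", "B"], np.round(ratios, 3))))
```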

Implementation of Secondhand Clothing Trading System with Deep Learning-Based Virtual Fitting Functionality (딥러닝 기반 가상 피팅 기능을 갖는 중고 의류 거래 시스템 구현)

  • Inhwan Jung;Kitae Hwang;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.17-22 / 2024
  • This paper introduces the implementation of a secondhand clothing trading system equipped with virtual fitting functionality based on deep learning. The proposed system provides users with the ability to visually try on secondhand clothing items online and assess their fit. To achieve this, it utilizes the Convolutional Neural Network (CNN) algorithm to create virtual representations of users considering their body shape and the design of the clothing. This enables buyers to pre-assess the fit of clothing items online before actually wearing them, thereby aiding in their purchase decisions. Additionally, sellers can present accurate clothing sizes and fits through the system, enhancing customer satisfaction. This paper delves into the CNN model's training process, system implementation, user feedback, and validates the effectiveness of the proposed system through experimental results.

Classification of Aβ State From Brain Amyloid PET Images Using Machine Learning Algorithm

  • Chanda Simfukwe;Reeree Lee;Young Chul Youn;Alzheimer’s Disease and Related Dementias in Zambia (ADDIZ) Group
    • Dementia and Neurocognitive Disorders / v.22 no.2 / pp.61-68 / 2023
  • Background and Purpose: Analyzing brain amyloid positron emission tomography (PET) images to assess the occurrence of β-amyloid (Aβ) deposition in Alzheimer's patients requires much time and effort from physicians, and interpretations may vary between readers. For these reasons, a machine learning model was developed using a convolutional neural network (CNN) to provide an objective classification of Aβ-positive and Aβ-negative status from brain amyloid PET images. Methods: A total of 7,344 PET images of 144 subjects were used in this study. 18F-florbetaben PET was administered to all participants, and the criterion for differentiating the Aβ-positive and Aβ-negative states was the brain amyloid plaque load (BAPL) score, which depended on the visual assessment of PET images by the physicians. We applied the CNN algorithm trained in batches of 51 PET images per subject directory from 2 classes, Aβ positive and Aβ negative, based on the BAPL scores. Results: The average binary classification performance metrics of the model were evaluated after 40 epochs over three trials on the test datasets. The model accuracy for classifying Aβ positivity and Aβ negativity was 95.00 ± 0.02 on the test dataset. The sensitivity and specificity were 96.00 ± 0.02 and 94.00 ± 0.02, respectively, with an area under the curve of 87.00 ± 0.03. Conclusions: Based on this study, the designed CNN model has the potential to be used clinically to screen amyloid PET images.
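
As a hedged sketch of directory-based binary classification of PET slices (Aβ positive vs. Aβ negative), the PyTorch snippet below uses an ImageFolder layout, the batch size of 51 mentioned above, and a small CNN; the paths and architecture are illustrative assumptions rather than the study's implementation.

```python
# Hedged sketch of directory-based binary classification (Aβ positive vs. negative)
# with a small CNN. Paths and architecture are placeholders, not the study's model.
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([transforms.Grayscale(), transforms.Resize((128, 128)),
                                transforms.ToTensor()])
# Assumed layout: pet_images/ab_positive/*.png and pet_images/ab_negative/*.png
dataset = datasets.ImageFolder("pet_images", transform=transform)
loader = DataLoader(dataset, batch_size=51, shuffle=True)   # 51 slices per batch

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(32 * 16, 2),                    # 2 classes: Aβ+ / Aβ-
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for images, labels in loader:                               # one epoch of training
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```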

Assessment of a Deep Learning Algorithm for the Detection of Rib Fractures on Whole-Body Trauma Computed Tomography

  • Thomas Weikert;Luca Andre Noordtzij;Jens Bremerich;Bram Stieltjes;Victor Parmar;Joshy Cyriac;Gregor Sommer;Alexander Walter Sauter
    • Korean Journal of Radiology / v.21 no.7 / pp.891-899 / 2020
  • Objective: To assess the diagnostic performance of a deep learning-based algorithm for automated detection of acute and chronic rib fractures on whole-body trauma CT. Materials and Methods: We retrospectively identified all whole-body trauma CT scans referred from the emergency department of our hospital from January to December 2018 (n = 511). Scans were categorized as positive (n = 159) or negative (n = 352) for rib fractures according to the clinically approved written CT reports, which served as the index test. The bone kernel series (1.5-mm slice thickness) served as an input for a detection prototype algorithm trained to detect both acute and chronic rib fractures based on a deep convolutional neural network. It had previously been trained on an independent sample from eight other institutions (n = 11455). Results: All CTs except one were successfully processed (510/511). The algorithm achieved a sensitivity of 87.4% and specificity of 91.5% on a per-examination level [per CT scan: rib fracture(s): yes/no]. There were 0.16 false-positives per examination (= 81/510). On a per-finding level, there were 587 true-positive findings (sensitivity: 65.7%) and 307 false-negatives. Furthermore, 97 true rib fractures were detected that were not mentioned in the written CT reports. A major factor associated with correct detection was displacement. Conclusion: We found good performance of a deep learning-based prototype algorithm detecting rib fractures on trauma CT on a per-examination level at a low rate of false-positives per case. A potential area for clinical application is its use as a screening tool to avoid false-negative radiology reports.
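
The per-finding sensitivity and the per-examination false-positive rate quoted above follow directly from the reported counts; a quick arithmetic check:

```python
# Recomputing the reported rib-fracture detection figures from the counts in the abstract.
true_positives = 587            # true-positive findings
false_negatives = 307           # missed findings
false_positive_findings = 81
examinations = 510              # successfully processed CT scans

per_finding_sensitivity = true_positives / (true_positives + false_negatives)
fp_per_exam = false_positive_findings / examinations
print(round(per_finding_sensitivity, 3))   # ~0.657, i.e. 65.7%
print(round(fp_per_exam, 2))               # ~0.16 false-positives per examination
```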