• Title/Summary/Keyword: Deep Learning Tool

Search Results: 132

Assessment of a Deep Learning Algorithm for the Detection of Rib Fractures on Whole-Body Trauma Computed Tomography

  • Thomas Weikert;Luca Andre Noordtzij;Jens Bremerich;Bram Stieltjes;Victor Parmar;Joshy Cyriac;Gregor Sommer;Alexander Walter Sauter
    • Korean Journal of Radiology / v.21 no.7 / pp.891-899 / 2020
  • Objective: To assess the diagnostic performance of a deep learning-based algorithm for automated detection of acute and chronic rib fractures on whole-body trauma CT. Materials and Methods: We retrospectively identified all whole-body trauma CT scans referred from the emergency department of our hospital from January to December 2018 (n = 511). Scans were categorized as positive (n = 159) or negative (n = 352) for rib fractures according to the clinically approved written CT reports, which served as the index test. The bone kernel series (1.5-mm slice thickness) served as an input for a detection prototype algorithm trained to detect both acute and chronic rib fractures based on a deep convolutional neural network. It had previously been trained on an independent sample from eight other institutions (n = 11455). Results: All CTs except one were successfully processed (510/511). The algorithm achieved a sensitivity of 87.4% and specificity of 91.5% on a per-examination level [per CT scan: rib fracture(s): yes/no]. There were 0.16 false-positives per examination (= 81/510). On a per-finding level, there were 587 true-positive findings (sensitivity: 65.7%) and 307 false-negatives. Furthermore, 97 true rib fractures were detected that were not mentioned in the written CT reports. A major factor associated with correct detection was displacement. Conclusion: We found good performance of a deep learning-based prototype algorithm detecting rib fractures on trauma CT on a per-examination level at a low rate of false-positives per case. A potential area for clinical application is its use as a screening tool to avoid false-negative radiology reports.
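
A quick arithmetic sketch of the per-examination metrics quoted above; only the false-positive tally (81 findings over 510 processed scans) is stated explicitly in the abstract, so the function arguments here are generic placeholders rather than the study's confusion-matrix counts.

```python
# Per-examination detection metrics as used in the abstract above.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of fracture-positive examinations flagged by the algorithm."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of fracture-free examinations correctly left unflagged."""
    return tn / (tn + fp)

# False-positive findings per examination, as reported: 81 / 510
print(f"false-positives per examination: {81 / 510:.2f}")  # -> 0.16
```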

Authorship Attribution of Web Texts with Korean Language Applying Deep Learning Method (딥러닝을 활용한 웹 텍스트 저자의 남녀 구분 및 연령 판별 : SNS 사용자를 중심으로)

  • Park, Chan Yub;Jang, In Ho;Lee, Zoon Ky
    • Journal of Information Technology Services / v.15 no.3 / pp.147-155 / 2016
  • With the rapid development of technology, web text is growing explosively and is attracting attention in many fields as a substitute for surveys. Facebook reaches up to 113 million users per month, and Twitter is used by various institutions and companies as a behavioral analysis tool. However, most research has focused on the meaning of the text itself, and studies of who created the text are lacking. This research therefore classifies author sex and age using 20,187 Facebook users' posts in which the writer's sex and age are known. It applies convolutional neural networks (CNNs), a class of deep learning algorithms that has recently come into the spotlight for image classification, to web text analysis. The results show 92% accuracy, confirming the approach's potential as a text classifier. The method also minimizes Korean morpheme analysis and performs authorship attribution directly on Korean web text. Based on these features, the approach can support applications such as web-text management for practitioners and analysis of non-grammatical text for researchers. This study thus proposes a new method for web text analysis.
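
The classifier described above is a convolutional neural network applied to short web texts. The sketch below is a minimal Keras version of such a model; the vocabulary size, sequence length, filter settings, and the binary output (e.g., writer sex) are illustrative assumptions, not parameters reported in the paper.

```python
# Minimal sketch of a CNN text classifier in the spirit of the paper above.
# Vocabulary size, sequence length, and filter settings are assumptions.
import tensorflow as tf

VOCAB_SIZE = 20000  # assumed number of word/morpheme tokens
SEQ_LEN = 200       # assumed post length after padding

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., writer sex
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```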

A Novel Whale Optimized TGV-FCMS Segmentation with Modified LSTM Classification for Endometrium Cancer Prediction

  • T. Satya Kiranmai;P.V.Lakshmi
    • International Journal of Computer Science & Network Security / v.23 no.5 / pp.53-64 / 2023
  • Early detection of endometrial carcinoma in the uterus is essential for effective treatment. Endometrial carcinoma is the most serious form of endometrium cancer, since it is considerably more likely to spread to other parts of the body if not detected and treated early. Non-invasive medical computer vision, also known as medical image processing, is becoming increasingly important in the clinical diagnosis of various diseases. Such techniques provide a tool for automatic image processing, allowing an accurate and timely assessment of the lesion. One of the most difficult aspects of developing an effective automatic classification system is the absence of large datasets. Using image processing and deep learning, this article presents an artificial endometrium cancer diagnosis system. The steps in this study include gathering dermoscopy images from a database, preprocessing, segmentation using hybrid Fuzzy C-Means (FCM), and optimization of the weights using the Whale Optimization Algorithm (WOA). After the magnetic resonance images have been segmented, the characteristics of the damaged endometrium cells are retrieved using a feature extraction approach. The extracted features are classified using deep learning-based Long Short-Term Memory (LSTM) and bidirectional LSTM classifiers. On the publicly available dataset, the proposed classifiers achieve a classification accuracy of 97% and a segmentation accuracy of 93%.
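
A minimal sketch of the final classification stage described above (LSTM and bidirectional LSTM over extracted features), written with Keras; the feature-sequence shape and the binary cancer/normal output are illustrative assumptions, and the FCM/WOA segmentation stages are not reproduced here.

```python
# Minimal sketch of an LSTM / bidirectional-LSTM classifier over extracted
# feature sequences. Input shape and the two-class output are assumptions.
import tensorflow as tf

N_STEPS, N_FEATURES = 64, 32  # assumed shape of the extracted feature sequence

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_STEPS, N_FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cancerous vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```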

A Study on the Development of a Tool to Support Classification of Strategic Items Using Deep Learning (딥러닝을 활용한 전략물자 판정 지원도구 개발에 대한 연구)

  • Cho, Jae-Young;Yoon, Ji-Won
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.967-973 / 2020
  • As the implementation of export controls spreads, the importance of classifying strategic items is increasing. However, Korean export companies that are new to export controls often do not understand the concept of strategic items, and classification is difficult because of the various criteria for controlling them. In this paper, we propose a method that lowers the barrier to entry for users who are new to export controls or who are carrying out strategic-item classification, making the classification process easier to approach. If users can obtain a classification result simply by providing a manual or a catalog for the item in question, the method and procedure for classifying strategic items become more convenient and accessible. To achieve this, the study utilizes deep learning, which is widely studied for image recognition and classification, together with OCR (optical character recognition) technology. Through the research and development of this support tool, we provide companies with information that is helpful for the classification of strategic items.
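
The tool combines OCR of item manuals and catalogs with a deep learning classifier. The sketch below shows that pipeline shape only; pytesseract is used as a stand-in OCR engine (the paper does not name its OCR component), and classify_strategic_item is a hypothetical wrapper for the trained model.

```python
# Minimal sketch of an OCR-then-classify pipeline: read text from a scanned
# catalog or manual page, then pass it to a classifier. pytesseract is a
# stand-in OCR engine; classify_strategic_item is a hypothetical placeholder.
from PIL import Image
import pytesseract

def extract_catalog_text(image_path: str) -> str:
    """OCR a catalog/manual page into plain text (Korean + English assumed)."""
    return pytesseract.image_to_string(Image.open(image_path), lang="kor+eng")

def classify_strategic_item(text: str) -> str:
    """Placeholder for the deep learning classifier over the OCR'd description."""
    raise NotImplementedError

if __name__ == "__main__":
    text = extract_catalog_text("catalog_page.png")  # hypothetical input file
    print(classify_strategic_item(text))
```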

A Novel, Deep Learning-Based, Automatic Photometric Analysis Software for Breast Aesthetic Scoring

  • Joseph Kyu-hyung Park;Seungchul Baek;Chan Yeong Heo;Jae Hoon Jeong;Yujin Myung
    • Archives of Plastic Surgery / v.51 no.1 / pp.30-35 / 2024
  • Background Breast aesthetics evaluation often relies on subjective assessments, leading to the need for objective, automated tools. We developed the Seoul Breast Esthetic Scoring Tool (S-BEST), a photometric analysis software that utilizes a DenseNet-264 deep learning model to automatically evaluate breast landmarks and asymmetry indices. Methods S-BEST was trained on a dataset of frontal breast photographs annotated with 30 specific landmarks, divided into an 80-20 training-validation split. The software requires the distances of sternal notch to nipple or nipple-to-nipple as input and performs image preprocessing steps, including ratio correction and 8-bit normalization. Breast asymmetry indices and centimeter-based measurements are provided as the output. The accuracy of S-BEST was validated using a paired t-test and Bland-Altman plots, comparing its measurements to those obtained from physical examinations of 100 females diagnosed with breast cancer. Results S-BEST demonstrated high accuracy in automatic landmark localization, with most distances showing no statistically significant difference compared with physical measurements. However, the nipple to inframammary fold distance showed a significant bias, with a coefficient of determination ranging from 0.3787 to 0.4234 for the left and right sides, respectively. Conclusion S-BEST provides a fast, reliable, and automated approach for breast aesthetic evaluation based on 2D frontal photographs. While limited by its inability to capture volumetric attributes or multiple viewpoints, it serves as an accessible tool for both clinical and research applications.
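
For readers who want the statistics made concrete, here is a small sketch of the validation analysis named above (paired t-test plus Bland-Altman bias and limits of agreement). The arrays are synthetic placeholders, not S-BEST study data.

```python
# Paired t-test and Bland-Altman statistics comparing software-derived and
# physically measured distances. The data below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
physical = rng.normal(20.0, 2.0, size=100)             # placeholder distances (cm)
software = physical + rng.normal(0.1, 0.5, size=100)   # placeholder automated output

t_stat, p_value = stats.ttest_rel(software, physical)  # systematic difference?

diff = software - physical
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                          # 95% limits of agreement
print(f"t={t_stat:.2f}, p={p_value:.3f}, bias={bias:.2f} cm, "
      f"LoA=({bias - loa:.2f}, {bias + loa:.2f})")
```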

A Technical Analysis on Deep Learning based Image and Video Compression (딥 러닝 기반의 이미지와 비디오 압축 기술 분석)

  • Cho, Seunghyun;Kim, Younhee;Lim, Woong;Kim, Hui Yong;Choi, Jin Soo
    • Journal of Broadcast Engineering / v.23 no.3 / pp.383-394 / 2018
  • In this paper, we investigate deep learning-based image and video compression techniques that have recently been actively studied. Deep learning-based image compression feeds the image to be compressed into a deep neural network, extracts a latent vector either recurrently or all at once, and encodes it. To increase compression efficiency, the network is trained so that the encoded latent vector can be expressed with fewer bits while the quality of the reconstructed image is enhanced. These techniques can produce images of superior quality compared with conventional image compression, especially at low bit rates. Deep learning-based video compression, on the other hand, aims to improve the performance of the coding tools employed in existing video codecs rather than directly processing the video to be compressed. The deep neural network technologies introduced in this paper either replace the in-loop filter of the latest video codecs or serve as an additional post-processing filter, improving compression efficiency by improving the quality of the reconstructed image. Likewise, deep neural network techniques applied to intra prediction and encoding are used together with existing intra prediction tools to improve compression efficiency by increasing prediction accuracy or by adding a new intra coding process.
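
The image-compression approach summarized above (encode an image into a compact latent vector, then reconstruct from that vector) can be sketched as a small convolutional autoencoder. The architecture and latent size below are illustrative; real learned codecs add quantization, entropy modeling, and rate-distortion training, which are omitted here.

```python
# Minimal convolutional autoencoder sketch: the encoder produces a compact
# latent vector (the part that would be entropy-coded), the decoder
# reconstructs the image. Sizes are illustrative assumptions.
import tensorflow as tf

LATENT_DIM = 64  # assumed size of the transmitted latent representation

encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(LATENT_DIM),  # latent vector to be coded with few bits
])

decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(16 * 16 * 64, activation="relu"),
    tf.keras.layers.Reshape((16, 16, 64)),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid"),
])

autoencoder = tf.keras.Model(encoder.input, decoder(encoder.output))
autoencoder.compile(optimizer="adam", loss="mse")  # trained for reconstruction quality
```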

Dynamic Resource Adjustment Operator Based on Autoscaling for Improving Distributed Training Job Performance on Kubernetes (쿠버네티스에서 분산 학습 작업 성능 향상을 위한 오토스케일링 기반 동적 자원 조정 오퍼레이터)

  • Jeong, Jinwon;Yu, Heonchang
    • KIPS Transactions on Computer and Communication Systems / v.11 no.7 / pp.205-216 / 2022
  • One of the many tools used for distributed deep learning training is Kubeflow, which runs on Kubernetes, a container orchestration tool. TensorFlow jobs can be managed using the existing operator provided by Kubeflow. However, for distributed training jobs based on the parameter server architecture, the scheduling policy used by the existing operator does not consider the task affinity of the distributed training job and does not provide the ability to dynamically allocate or release resources. This can lead to long job completion times and low resource utilization. Therefore, in this paper we propose a new operator that efficiently schedules distributed deep learning training jobs to minimize job completion time and increase resource utilization. We implemented the new operator by modifying the existing one and conducted experiments to evaluate its performance. The results showed that our scheduling policy reduced average job completion time by up to 84% and increased average CPU utilization by up to 92%.
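
The paper's operator itself is not shown here, but the underlying mechanism it builds on can be illustrated: dynamically adjusting the worker replica count of a Kubeflow TFJob custom resource through the Kubernetes API. The job name and namespace below are placeholders, and the affinity-aware scheduling and autoscaling policy described in the abstract are not reproduced.

```python
# Sketch of dynamic resource adjustment for a parameter-server training job:
# patch spec.tfReplicaSpecs.Worker.replicas of a Kubeflow TFJob. Name and
# namespace are placeholders; the paper's scheduling logic is not reproduced.
from kubernetes import client, config

def scale_tfjob_workers(name: str, namespace: str, replicas: int) -> None:
    """Patch the Worker replica count of a TFJob custom resource."""
    config.load_kube_config()
    api = client.CustomObjectsApi()
    patch = {"spec": {"tfReplicaSpecs": {"Worker": {"replicas": replicas}}}}
    api.patch_namespaced_custom_object(
        group="kubeflow.org", version="v1", plural="tfjobs",
        namespace=namespace, name=name, body=patch,
    )

# scale_tfjob_workers("mnist-dist-training", "kubeflow", replicas=4)  # example call
```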

Evaluation of deep learning and convolutional neural network algorithms for mandibular fracture detection using radiographic images: A systematic review and meta-analysis

  • Mahmood Dashti;Sahar Ghaedsharaf;Shohreh Ghasemi;Niusha Zare;Elena-Florentina Constantin;Amir Fahimipour;Neda Tajbakhsh;Niloofar Ghadimi
    • Imaging Science in Dentistry / v.54 no.3 / pp.232-239 / 2024
  • Purpose: The use of artificial intelligence (AI) and deep learning algorithms in dentistry, especially for processing radiographic images, has markedly increased. However, detailed information remains limited regarding the accuracy of these algorithms in detecting mandibular fractures. Materials and Methods: This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specific keywords were generated regarding the accuracy of AI algorithms in detecting mandibular fractures on radiographic images. Then, the PubMed/Medline, Scopus, Embase, and Web of Science databases were searched. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool was employed to evaluate potential bias in the selected studies. A pooled analysis of the relevant parameters was conducted using STATA version 17 (StataCorp, College Station, TX, USA), utilizing the metandi command. Results: Of the 49 studies reviewed, 5 met the inclusion criteria. All of the selected studies utilized convolutional neural network algorithms, albeit with varying backbone structures, and all evaluated panoramic radiography images. The pooled analysis yielded a sensitivity of 0.971 (95% confidence interval [CI]: 0.881-0.949), a specificity of 0.813 (95% CI: 0.797-0.824), and a diagnostic odds ratio of 7.109 (95% CI: 5.27-8.913). Conclusion: This review suggests that deep learning algorithms show potential for detecting mandibular fractures on panoramic radiography images. However, their effectiveness is currently limited by the small size and narrow scope of available datasets. Further research with larger and more diverse datasets is crucial to verify the accuracy of these tools in practical dental settings.
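
For context on the summary measure, here is a short definitional sketch of the diagnostic odds ratio (DOR). Note that in a bivariate meta-analysis such as Stata's metandi the pooled DOR is estimated jointly with its own confidence interval, so it is not simply recomputed from the pooled sensitivity and specificity quoted above; the values in the example are arbitrary.

```python
# Definition of the diagnostic odds ratio (DOR): the odds of a positive test
# in diseased cases divided by the odds of a positive test in healthy cases.
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec)."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# Arbitrary illustrative values (not the study's pooled estimates):
print(diagnostic_odds_ratio(0.80, 0.90))  # -> 36.0
```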

Deep Learning Braille Block Recognition Method for Embedded Devices (임베디드 기기를 위한 딥러닝 점자블록 인식 방법)

  • Hee-jin Kim;Jae-hyuk Yoon;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems / v.28 no.4 / pp.1-9 / 2023
  • In this paper, we propose a method for recognizing braille blocks in real time on embedded devices through deep learning. First, a deep learning model for braille block recognition is trained on a high-performance computer, and a model lightweighting tool is then applied so that the model can run on an embedded device. To recognize the walking information conveyed by braille blocks, an algorithm determines the path using the distance to the braille blocks in the image. After braille blocks, bollards, and crosswalks are detected by a YOLOv8 model in video captured by the embedded device, the walking information is recognized through the braille block path discrimination algorithm. We apply the model lightweighting tool to YOLOv8 so that braille blocks can be detected in real time: the precision of the YOLOv8 model weights is lowered from 32 bits to 8 bits, and the model is optimized with the TensorRT optimization engine. Comparing the lightweight model obtained with the proposed method against the original model, the path recognition accuracy is 99.05%, almost the same as the original model, while the recognition time is reduced by 59%, processing about 15 frames per second.
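
The lightweighting step described above (FP32 weights quantized to INT8 and optimized with TensorRT) can be sketched with the Ultralytics export API, though the paper's exact tooling may differ. The weight file, image size, and calibration dataset below are placeholders.

```python
# Sketch of exporting a trained YOLOv8 model to an INT8 TensorRT engine,
# the kind of 32-bit -> 8-bit lightweighting the abstract describes.
# File names and the calibration dataset config are placeholders.
from ultralytics import YOLO

model = YOLO("braille_block_yolov8n.pt")  # hypothetical trained weights
model.export(
    format="engine",            # build a TensorRT engine
    int8=True,                  # quantize from FP32 to INT8
    imgsz=640,                  # assumed inference resolution
    data="braille_calib.yaml",  # calibration dataset config (placeholder)
)

# The resulting .engine file can then be loaded for real-time inference:
# trt_model = YOLO("braille_block_yolov8n.engine")
# results = trt_model("frame.jpg")
```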

A Novel Fundus Image Reading Tool for Efficient Generation of a Multi-dimensional Categorical Image Database for Machine Learning Algorithm Training

  • Park, Sang Jun;Shin, Joo Young;Kim, Sangkeun;Son, Jaemin;Jung, Kyu-Hwan;Park, Kyu Hyung
    • Journal of Korean Medical Science / v.33 no.43 / pp.239.1-239.12 / 2018
  • Background: We describe a novel multi-step retinal fundus image reading system for providing high-quality, large-scale data for machine learning algorithms, and assess grader variability in the large-scale dataset generated with this system. Methods: A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders, and agreement among graders was evaluated for each decision. Results: In total, 234,242 readings of 79,458 images were collected from 55 licensed ophthalmologists over 6 months. 34,364 images were graded as abnormal by at least one rater. Of these, all three raters agreed on abnormality for 46.6% of images, while 69.9% were rated as abnormal by two or more raters. The agreement rate of at least two raters on a given finding was 26.7%-65.2%, and the complete agreement rate of all three raters was 5.7%-43.3%. For diagnoses, agreement of at least two raters was 35.6%-65.6%, and the complete agreement rate was 11.0%-40.0%. Agreement on findings and diagnoses was higher when restricted to images with prior complete agreement on abnormality. Retinal/glaucoma specialists showed higher agreement on findings and diagnoses within their corresponding subspecialties. Conclusion: This novel reading tool for retinal fundus images generated a large-scale dataset with a high level of information, which can be utilized in the future development of machine learning-based algorithms for automated identification of abnormal conditions and clinical decision support systems. These results emphasize the importance of addressing grader variability in algorithm development.
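
The agreement figures above are simple tallies over three ratings per image. A minimal sketch of that counting, with a placeholder ratings table rather than the study data:

```python
# Count, per image rated by three graders, how many were called abnormal by
# at least one, at least two, or all three graders. Ratings are placeholders.
from collections import Counter

# image_id -> three boolean "abnormal" ratings (placeholder data)
ratings = {
    "img_001": [True, True, False],
    "img_002": [True, True, True],
    "img_003": [False, False, False],
}

tally = Counter()
for votes in ratings.values():
    n_abnormal = sum(votes)
    if n_abnormal >= 1:
        tally["at_least_one"] += 1
    if n_abnormal >= 2:
        tally["at_least_two"] += 1
    if n_abnormal == 3:
        tally["all_three"] += 1

print(dict(tally))  # {'at_least_one': 2, 'at_least_two': 2, 'all_three': 1}
```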