• Title/Summary/Keyword: artificial intelligence algorithm

An Approach Using LSTM Model to Forecasting Customer Congestion Based on Indoor Human Tracking (실내 사람 위치 추적 기반 LSTM 모델을 이용한 고객 혼잡 예측 연구)

  • Hee-ju Chae;Kyeong-heon Kwak;Da-yeon Lee;Eunkyung Kim
    • Journal of the Korea Society for Simulation / v.32 no.3 / pp.43-53 / 2023
  • This study focuses on accurately gauging the number of visitors and their real-time locations in commercial spaces. In a real cafe, using security cameras, we developed a system that offers live updates on available seating and predicts future congestion levels. YOLO, a real-time object detection and tracking algorithm, monitors the number of visitors and their locations in real time, and this information is used to update the cafe's indoor map so that users can easily identify available seating. We also developed a model that predicts the cafe's congestion in real time. The model, which learns visitor counts and movement patterns over diverse time intervals, is based on Long Short-Term Memory (LSTM) to address the vanishing-gradient problem and on a Sequence-to-Sequence (Seq2Seq) architecture to process data with temporal relationships. The system can improve cafe management efficiency and customer satisfaction by delivering reliable congestion predictions to users. The research demonstrates the effectiveness and utility of indoor location tracking implemented through security cameras and proposes potential applications in other commercial spaces.
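
The following sketch illustrates, for context, how such an LSTM encoder-decoder (Seq2Seq) forecaster can be wired up in PyTorch. The hidden size, input window, and forecast horizon are illustrative assumptions, not the paper's settings.

```python
# Minimal LSTM encoder-decoder (Seq2Seq) for congestion forecasting.
# Input: past visitor counts per time step; output: predicted counts
# for future steps. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, past):                  # past: (batch, T_in, n_features)
        _, (h, c) = self.encoder(past)        # summarize history into state
        step = past[:, -1:, :]                # seed decoder with last observation
        outputs = []
        for _ in range(self.horizon):         # autoregressive decoding
            out, (h, c) = self.decoder(step, (h, c))
            step = self.head(out)             # predict next visitor count
            outputs.append(step)
        return torch.cat(outputs, dim=1)      # (batch, horizon, n_features)

model = Seq2SeqForecaster()
past = torch.randn(8, 48, 1)                  # e.g., 48 past time steps
pred = model(past)                            # -> (8, 12, 1) future congestion
```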

Deep learning algorithms for identifying 79 dental implant types (79종의 임플란트 식별을 위한 딥러닝 알고리즘)

  • Hyun-Jun, Kong;Jin-Yong, Yoo;Sang-Ho, Eom;Jun-Hyeok, Lee
    • Journal of Dental Rehabilitation and Applied Science / v.38 no.4 / pp.196-203 / 2022
  • Purpose: This study aimed to evaluate the accuracy and clinical usability of a deep learning-based identification model for 79 dental implant types. Materials and Methods: A total of 45,396 implant fixture images were collected from panoramic radiographs of patients who received implant treatment from 2001 to 2020 at 30 dental clinics. The collected images covered 79 implant types from 18 manufacturers. EfficientNet and Meta Pseudo Labels algorithms were used. For EfficientNet, EfficientNet-B0 and EfficientNet-B4 were used as submodels. For Meta Pseudo Labels, two models were applied according to the widen factor. Top-1 accuracy was measured for EfficientNet, and top-1 and top-5 accuracy were measured for Meta Pseudo Labels. Results: EfficientNet-B0 and EfficientNet-B4 showed a top-1 accuracy of 89.4%. Meta Pseudo Labels 1 showed a top-1 accuracy of 87.96%, and Meta Pseudo Labels 2, with an increased widen factor, showed 88.35%. In top-5 accuracy, Meta Pseudo Labels 1 scored 97.90%, 0.11 percentage points higher than the 97.79% of Meta Pseudo Labels 2. Conclusion: All four deep learning algorithms used for implant identification in this study showed accuracy close to 90%. To increase the clinical applicability of deep learning for implant identification, it will be necessary to collect a larger amount of data and to develop a fine-tuned algorithm for implant identification.
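
The reported top-1 and top-5 scores follow their standard definitions. A small sketch of how they are computed from classifier logits, using random stand-in scores rather than the study's models:

```python
# Top-k accuracy: fraction of samples whose true label is among the
# k highest-scoring classes. Logits here are random placeholders.
import torch

def topk_accuracy(logits, labels, k):
    topk = logits.topk(k, dim=1).indices            # (N, k) predicted classes
    hits = (topk == labels.unsqueeze(1)).any(dim=1) # true label in top-k?
    return hits.float().mean().item()

logits = torch.randn(1000, 79)        # scores for 79 implant types
labels = torch.randint(0, 79, (1000,))
print("top-1:", topk_accuracy(logits, labels, 1))
print("top-5:", topk_accuracy(logits, labels, 5))
```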

Deep Learning based Estimation of Depth to Bearing Layer from In-situ Data (딥러닝 기반 국내 지반의 지지층 깊이 예측)

  • Jang, Young-Eun;Jung, Jaeho;Han, Jin-Tae;Yu, Yonggyun
    • Journal of the Korean Geotechnical Society / v.38 no.3 / pp.35-42 / 2022
  • The N-value from the Standard Penetration Test (SPT), one of the representative in-situ tests, is an important index that provides basic geological information and the depth of the bearing layer for the design of geotechnical structures. For reasons of time and cost-effectiveness, tests can be carried out at only a limited number of sampling points. However, variability and uncertainty exist in soil layers, so it is difficult to grasp the characteristics of an entire site from limited test results. Thus, spatial interpolation techniques such as Kriging and inverse distance weighting (IDW) have been used to predict values at unknown points from existing data. Recently, to increase the accuracy of interpolation results, studies combining geotechnics with deep learning have been conducted. In this study, based on SPT results from about 22,000 boreholes, a comparative study was conducted to predict the depth of the bearing layer using deep learning methods and IDW. The average error of the bearing-layer predictions was 3.01 m for IDW, and 3.22 m and 2.46 m for the fully connected network and PointNet, respectively; the standard deviations were 3.99 m for IDW, and 3.95 m and 3.54 m for the fully connected network and PointNet. As a result, the PointNet deep learning algorithm showed improved results compared with both IDW and the fully connected network.
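
For reference, the IDW baseline has a compact closed form: the predicted depth at an unknown point is a distance-weighted average of known borehole depths, with weights 1/d^p. A minimal sketch, with illustrative coordinates and the common choice p = 2:

```python
# Inverse distance weighting (IDW) interpolation of bearing-layer depth.
# Borehole locations, depths, and the power p are illustrative values.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    # pairwise distances between query points and boreholes
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # closer boreholes weigh more
    return (w @ z_known) / w.sum(axis=1)  # weighted average per query point

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # borehole locations (m)
depth = np.array([12.5, 9.8, 15.2])                      # bearing-layer depth (m)
query = np.array([[50.0, 50.0]])
print(idw(xy, depth, query))   # interpolated depth at the query point
```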

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences / v.15 no.4 / pp.71-80 / 2023
  • This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, it aimed to develop a deep learning-based identification model using the convolutional neural network (CNN) algorithm for Korean speakers who stutter. To this end, speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud speech-to-text (STT), and labels such as 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned to them. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency. In the case of prolongation, however, only five instances were found, so this type was excluded from the classifier model. Results showed that the accuracy of the CNN classifier was 0.96, and the F1-scores for classification performance were as follows: 'fluent' 1.00, 'blockage' 0.67, and 'repetition' 0.74. Although the effectiveness of the automatic classifier was validated using CNNs to detect stuttered disfluencies, its performance was inadequate, especially for the blockage and prolongation types. Consequently, establishing a large speech database organized by type of stuttered disfluency was identified as a necessary foundation for improving classification performance.
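
A minimal sketch of the MFCC-plus-CNN pipeline described above, assuming a hypothetical audio segment and an illustrative tiny network; the paper's actual architecture and settings are not given in this abstract:

```python
# Extract MFCCs from a phrase-level segment, then classify it into
# disfluency types with a small CNN. File name, MFCC settings, and
# the network are illustrative assumptions.
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load("segment.wav", sr=16000)        # hypothetical clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)

classes = ["fluent", "blockage", "repetition"]       # prolongation excluded
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),           # handles variable length
    nn.Linear(32, len(classes)),
)

x = torch.tensor(mfcc, dtype=torch.float32)[None, None]  # (1, 1, 13, frames)
pred = classes[cnn(x).argmax(dim=1).item()]
print(pred)
```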

Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan;Jie Lin;Haixia Li;Jun Xu;Tianjing Zhang;Hao Chen;Henry C. Woodruff;Guangyao Wu;Siqi Zhang;Yikai Xu;Philippe Lambin
    • Korean Journal of Radiology / v.22 no.6 / pp.983-993 / 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield unit [HU]) than that of the hybrid (66.3 ± 10.5 HU, p < 0.001) and a similar noise level to model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by the model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv with a mean 93.9% reduction compared to standard-dose CT. Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
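
The denoising gains are quantified with PSNR and SSIM. The snippet below shows how these two metrics are typically computed with scikit-image, with random arrays standing in for the standard-dose and denoised ULDCT slices:

```python
# Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM)
# between a denoised slice and its reference. Arrays are stand-ins.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(512, 512).astype(np.float32)  # standard-dose slice
denoised = np.clip(reference + 0.05 * np.random.randn(512, 512),
                   0, 1).astype(np.float32)              # toy "denoised" ULDCT

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR: {psnr:.1f} dB, SSIM: {ssim:.3f}")
```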

Development of Deep Recognition of Similarity in Show Garden Design Based on Deep Learning (딥러닝을 활용한 전시 정원 디자인 유사성 인지 모형 연구)

  • Cho, Woo-Yun;Kwon, Jin-Wook
    • Journal of the Korean Institute of Landscape Architecture / v.52 no.2 / pp.96-109 / 2024
  • The purpose of this study is to propose a method for evaluating the similarity of show gardens using deep learning models, specifically VGG-16 and ResNet50. A model for judging the similarity of show gardens based on the VGG-16 and ResNet50 models was developed and named DRG (Deep Recognition of similarity in show Garden design). An algorithm using global average pooling (GAP) and the Pearson correlation coefficient was employed to construct the model, and the accuracy of similarity was analyzed by comparing the number of similar images retrieved at the 1st (Top1), 3rd (Top3), and 5th (Top5) ranks with the original images. The image data used for the DRG model consisted of 278 works from the Festival International des Jardins de Chaumont-sur-Loire, 27 works from the Seoul International Garden Show, and 17 works from the Korea Garden Show. Image analysis was conducted using the DRG model for both the same group and different groups, resulting in guidelines for assessing show garden similarity. First, for overall image similarity analysis, applying data augmentation techniques based on the ResNet50 model was best suited. Second, for image analysis focusing on internal structure and outer form, it was effective to apply a fixed-size filter (16 cm × 16 cm) to generate images emphasizing form and then compare similarity using the VGG-16 model. An image size of 448 × 448 pixels and the original image in full color were suggested as the optimal settings. Based on these findings, a quantitative method for assessing show gardens is proposed, which is expected to contribute to the continuous development of garden culture through interdisciplinary research.
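
A minimal sketch of the GAP-plus-Pearson similarity scoring using torchvision's VGG-16, assuming standard ImageNet preprocessing and hypothetical image files; the study's exact pipeline may differ:

```python
# Pool VGG-16 feature maps with global average pooling (GAP), then
# score similarity between two garden images with Pearson correlation.
import torch
import numpy as np
from torchvision import models, transforms
from PIL import Image

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
prep = transforms.Compose([
    transforms.Resize((448, 448)),               # size suggested in the study
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def gap_feature(path):
    x = prep(Image.open(path).convert("RGB"))[None]
    with torch.no_grad():
        fmap = vgg(x)                            # (1, 512, H', W') feature maps
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()  # GAP -> 512-dim vector

a = gap_feature("garden_a.jpg")                  # hypothetical image files
b = gap_feature("garden_b.jpg")
similarity = np.corrcoef(a, b)[0, 1]             # Pearson correlation
print(f"similarity: {similarity:.3f}")
```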

A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services / v.25 no.1 / pp.29-37 / 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is advantageous for mixed-mode design. The binarization and thinning steps, which require heavy computation, are designed at the register-transfer level (RTL) and replaced with optimized hardware blocks through logic synthesis. The designed binarization and thinning block was synthesized into a logic circuit using a standard 180 nm CMOS library, and its operation was verified through simulation. For comparison with software-based performance, the binarization and thinning operations were also analyzed using sample images at 640 × 360 resolution in a 32-bit FPGA embedded system environment. Verification confirmed that the mixed-mode design improves processing speed in the binarization and thinning stages by 93.8% compared with the previous software-only implementation. The proposed mixed-mode system for object recognition is expected to monitor patient movements efficiently even in edge computing environments where artificial intelligence networks are not deployed.
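
As a software reference for the two stages moved into hardware, the snippet below binarizes a frame difference and thins the result; OpenCV and scikit-image stand in for the RTL blocks, and the frame files and threshold value are illustrative assumptions:

```python
# Binarization stage: threshold the absolute frame difference.
# Thinning stage: reduce the binary motion mask to a one-pixel skeleton.
import cv2
import numpy as np
from skimage.morphology import skeletonize

prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr, prev)                    # motion = frame difference
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # binarization

thin = skeletonize(binary > 0)                    # thinning (bool outline mask)
cv2.imwrite("outline.png", (thin * 255).astype(np.uint8))
```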

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology / v.24 no.4 / pp.294-304 / 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT scans of the abdomen obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare segmentation performance, in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume, before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006 to 0.964 vs. standardized, 0.990 to 0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation on CT images reconstructed with various methods. Deep learning-based CT image conversion may also improve the generalizability of the segmentation network.
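
Both evaluation metrics have standard closed forms. A short sketch, with toy arrays standing in for real masks and volume measurements:

```python
# Dice similarity coefficient (DSC) on segmentation masks and Lin's
# concordance correlation coefficient (CCC) on volumes.
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

pred = np.random.rand(64, 64, 64) > 0.5    # predicted liver mask (toy)
truth = np.random.rand(64, 64, 64) > 0.5   # ground-truth mask (toy)
print("DSC:", dice(pred, truth))

vol_pred = np.array([1180.0, 950.0, 1320.0])   # segmented volumes (mL)
vol_true = np.array([1205.0, 940.0, 1300.0])   # ground-truth volumes (mL)
print("CCC:", ccc(vol_pred, vol_true))
```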

Feasibility of a Clinical-Radiomics Model to Predict the Outcomes of Acute Ischemic Stroke

  • Yiran Zhou;Di Wu;Su Yan;Yan Xie;Shun Zhang;Wenzhi Lv;Yuanyuan Qin;Yufei Liu;Chengxia Liu;Jun Lu;Jia Li;Hongquan Zhu;Weiyin Vivian Liu;Huan Liu;Guiling Zhang;Wenzhen Zhu
    • Korean Journal of Radiology / v.23 no.8 / pp.811-820 / 2022
  • Objective: To develop a model incorporating radiomic features and clinical factors to accurately predict acute ischemic stroke (AIS) outcomes. Materials and Methods: Data from 522 AIS patients (382 male [73.2%]; mean age ± standard deviation, 58.9 ± 11.5 years) were randomly divided into the training (n = 311) and validation (n = 211) cohorts. According to the modified Rankin Scale (mRS) at 6 months after hospital discharge, prognosis was dichotomized into good (mRS ≤ 2) and poor (mRS > 2). A total of 1,310 radiomics features were extracted from diffusion-weighted imaging and apparent diffusion coefficient maps. The minimum redundancy maximum relevance algorithm and the least absolute shrinkage and selection operator (LASSO) logistic regression method were implemented to select features and establish a radiomics model. Univariable and multivariable logistic regression analyses were performed to identify clinical factors and construct a clinical model. Finally, a multivariable logistic regression analysis incorporating the independent clinical factors and the radiomics score was implemented to establish the combined prediction model using a backward step-down selection procedure, and a clinical-radiomics nomogram was developed. The models were evaluated using calibration, receiver operating characteristic (ROC), and decision curve analyses. Results: Age, sex, stroke history, diabetes, baseline mRS, baseline National Institutes of Health Stroke Scale score, and radiomics score were independent predictors of AIS outcomes. The area under the ROC curve of the clinical-radiomics model was 0.868 (95% confidence interval, 0.825-0.910) in the training cohort and 0.890 (0.844-0.936) in the validation cohort, significantly larger than that of the clinical or radiomics models alone. The clinical-radiomics nomogram was well calibrated (p > 0.05), and decision curve analysis indicated its clinical usefulness. Conclusion: The clinical-radiomics model outperformed the individual clinical and radiomics models and achieved satisfactory performance in predicting AIS outcomes.
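
A minimal sketch of the LASSO logistic regression step, which performs feature selection and outcome scoring in one model; the feature matrix below is random, standing in for the 1,310 radiomics features:

```python
# L1-penalized (LASSO) logistic regression: nonzero coefficients select
# features, predicted probabilities give the radiomics score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(522, 1310))          # radiomics features (stand-in)
y = rng.integers(0, 2, size=522)          # 0 = good (mRS <= 2), 1 = poor

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.4, random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_tr, y_tr)

selected = np.flatnonzero(lasso.coef_)    # features with nonzero weights
auc = roc_auc_score(y_va, lasso.predict_proba(X_va)[:, 1])
print(f"{selected.size} features kept, validation AUC {auc:.3f}")
```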

Research on Training and Implementation of Deep Learning Models for Web Page Analysis (웹페이지 분석을 위한 딥러닝 모델 학습과 구현에 관한 연구)

  • Jung Hwan Kim;Jae Won Cho;Jin San Kim;Han Jin Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.517-524 / 2024
  • This study aims to train and implement a deep learning model for the fusion of website creation and artificial intelligence in the era of the AI revolution that followed the launch of the ChatGPT service. The deep learning model was trained on 3,000 collected web page images, processed according to a classification system of components and layouts. The process was divided into three stages. First, prior research on AI models was reviewed to select the most appropriate algorithm for the intended model. Second, suitable web page and paragraph images were collected, categorized, and processed. Third, the deep learning model was trained, and a serving interface was integrated to verify the model's actual outputs. The implemented model detects multiple paragraphs on a web page, analyzes the number of lines, elements, and features in each paragraph, and derives meaningful data based on the classification system. This process is expected to evolve toward more precise analysis of web pages. Furthermore, the development of precise analysis techniques is anticipated to lay the groundwork for research into AI's capability to automatically generate complete web pages.
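
The abstract does not name the detection algorithm used. As one plausible sketch only, a torchvision Faster R-CNN with its head replaced for a single hypothetical "paragraph" class (in practice the model would be fine-tuned on the labeled screenshots before use):

```python
# Generic object-detection setup for finding paragraph regions in a
# web page screenshot. The class count, screenshot, and score cutoff
# are illustrative assumptions, not the paper's configuration.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + paragraph
model.eval()

screenshot = torch.rand(3, 800, 1280)        # stand-in for a page screenshot
with torch.no_grad():
    detections = model([screenshot])[0]      # dict of boxes, labels, scores
for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.5:                          # keep confident paragraph boxes
        print(box.tolist(), float(score))
```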