• Title/Summary/Keyword: training sets

Search results: 509

Research Trends and Problems on Cultivation Practice of Daesoonjinrihoe (대순진리회 수행 연구의 경향과 과제)

  • Cha, Seon-keun
    • Journal of the Daesoon Academy of Sciences / v.24_1 / pp.315-349 / 2014
  • This paper analyzes the directions of research on the Cultivation Practice of Daesoonjinrihoe, which has been at a standstill, diagnoses its problems, and discusses directions for future study. It enables scholars interested in Daesoon Thought to grasp the breadth of the Cultivation Practice of Daesoonjinrihoe and to see what level of research is now required, and it offers a stepping-stone for considering how, and from what perspectives, the wider field of Daesoon Thought studies might be approached. The problems in previous research on Cultivation Practice are summarized as follows. First, with few exceptions, the problem recognition, research targets, style, method, and content have not diverged from the frame set by Jang Byeong-Gil in Daesoon Religion and Thought (Daesoon Jonggyo Sasang, 1989); overlapping studies have proliferated without developing problem awareness, and this research climate has gradually become entrenched. Second, many studies betray the researcher's own confessional stance. Third, most previous studies neglect to define the scope of their research. Fourth, more thorough insight is needed when defining concepts. Lastly, studies analyzing symbols and attempting signification analysis are relatively few, and those that exist contain many errors. To solve these problems, this paper suggests, first, developing theories that support Cultivation Practice through research on the theory of mind-nature (心性), the theory of mind-qi (心氣), the theory of pain, religious ethics, views of God/gods, and psychology. Second, the symbols and meanings of the elements of Cultivation Practice need to be analyzed more elaborately and in greater depth: drawing on recent trends in historical studies, Cultivation Practice should be engrafted onto cultural phenomena, the thick layers of meaning beneath its surface analyzed, alternative interpretations attempted from various perspectives such as gender, and true meanings extracted from the everyday events that occur in actual cultivation practice. Third, the terms and concepts of Cultivation Practice should be grounded in its underlying principles. Fourth, comparative research on the cultivation practices of different religious traditions is needed, using the methodology of comparative religion. Lastly, the historical aspects of Cultivation Practice, such as the transition of mantras and the processes carried out through the proprieties of prayer and training, should be collected and classified. Such work is important because it clarifies how the originality and characteristics of the Cultivation Practice of Daesoonjinrihoe have changed over time.

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.508-518 / 2023
  • This paper presents a new approach for automatically mapping discontinuities in a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea is to identify discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with a deep-learning semantic segmentation model, U-Net. The proposed model integrates several input features, including the projected RGB image, the depth map image, and images based on local surface properties, i.e., normal vector and curvature images, to segment areas of discontinuity effectively. The segmentation results are then projected back onto the 3D model using the depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities in 3D space. Segmentation performance is evaluated by comparing the segmented results with their corresponding ground truths, demonstrating high accuracy with an intersection-over-union of approximately 0.8. Although training data remain limited, the method shows promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
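To make the multi-channel input concrete, below is a minimal sketch of how projected image features can be stacked and fed to a U-Net-style segmentation model. The 8-channel layout (RGB, depth, normals, curvature) and the use of the segmentation_models_pytorch library are assumptions for illustration; the paper does not specify its implementation details.

```python
# Hypothetical sketch: stacking projected features for U-Net discontinuity segmentation.
import torch
import segmentation_models_pytorch as smp

# Assumed channel layout: RGB (3) + depth (1) + normal vectors (3) + curvature (1) = 8.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=8, classes=1)
model.eval()

rgb = torch.rand(1, 3, 512, 512)        # projected RGB image of the tunnel face
depth = torch.rand(1, 1, 512, 512)      # depth map from the 3D model projection
normals = torch.rand(1, 3, 512, 512)    # per-pixel surface normal components
curvature = torch.rand(1, 1, 512, 512)  # local curvature image

x = torch.cat([rgb, depth, normals, curvature], dim=1)  # (1, 8, 512, 512)
with torch.no_grad():
    logits = model(x)                   # (1, 1, 512, 512) per-pixel logits
mask = torch.sigmoid(logits) > 0.5      # binary discontinuity mask
```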

Machine Learning-Based Prediction of COVID-19 Severity and Progression to Critical Illness Using CT Imaging and Clinical Data

  • Subhanik Purkayastha;Yanhe Xiao;Zhicheng Jiao;Rujapa Thepumnoeysuk;Kasey Halsey;Jing Wu;Thi My Linh Tran;Ben Hsieh;Ji Whae Choi;Dongcui Wang;Martin Vallieres;Robin Wang;Scott Collins;Xue Feng;Michael Feldman;Paul J. Zhang;Michael Atalay;Ronnie Sebro;Li Yang;Yong Fan;Wei-hua Liao;Harrison X. Bai
    • Korean Journal of Radiology / v.22 no.7 / pp.1213-1224 / 2021
  • Objective: To develop a machine learning (ML) pipeline based on radiomics to predict Coronavirus Disease 2019 (COVID-19) severity and future deterioration to critical illness using CT and clinical variables. Materials and Methods: Clinical data were collected from 981 patients from a multi-institutional international cohort with real-time polymerase chain reaction-confirmed COVID-19. Radiomics features were extracted from the patients' chest CTs. The cohort was randomly divided into training, validation, and test sets in a 7:1:2 ratio. An ML pipeline consisting of a severity-prediction model and a time-to-event model for progression to critical illness was trained on the radiomics features and clinical variables. The receiver operating characteristic area under the curve (ROC-AUC), concordance index (C-index), and time-dependent ROC-AUC were calculated to determine model performance, which was compared with consensus CT severity scores obtained by radiologists' visual interpretation. Results: Among the 981 patients with confirmed COVID-19, 274 developed critical illness. Radiomics features combined with clinical variables gave the best performance for predicting disease severity, with a highest test ROC-AUC of 0.76, versus 0.70 for visual CT severity scores with clinical variables (p = 0.023). The progression-prediction model achieved a test C-index of 0.868 when based on the combination of CT radiomics and clinical variables, compared with 0.767 for CT radiomics features alone (p < 0.001), 0.847 for clinical variables alone (p = 0.110), and 0.860 for the combination of visual CT severity scores and clinical variables (p = 0.549). The model based on the combination of CT radiomics and clinical variables also achieved time-dependent ROC-AUCs of 0.897, 0.933, and 0.927 for predicting progression risk at 3, 5, and 7 days, respectively. Conclusion: CT radiomics features combined with clinical variables predicted COVID-19 severity and progression to critical illness with fairly high accuracy.
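The 7:1:2 patient-level split described above can be reproduced in two stages with scikit-learn; the sketch below uses placeholder patient IDs and an arbitrary random seed, since the paper's splitting code is not given.

```python
# Illustrative 7:1:2 train/validation/test split at the patient level.
import numpy as np
from sklearn.model_selection import train_test_split

patient_ids = np.arange(981)  # the cohort of 981 patients

# Carve out the 20% test set first, then take 1/8 of the remainder for validation
# (0.8 * 1/8 = 0.1 of the whole cohort), leaving 70% for training.
trainval_ids, test_ids = train_test_split(patient_ids, test_size=0.2, random_state=42)
train_ids, val_ids = train_test_split(trainval_ids, test_size=1 / 8, random_state=42)

print(len(train_ids), len(val_ids), len(test_ids))  # roughly 686 / 98 / 197
```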

Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: A multicenter study

  • Sung-Hoon Han;Jisup Lim;Jun-Sik Kim;Jin-Hyoung Cho;Mihee Hong;Minji Kim;Su-Jung Kim;Yoon-Ji Kim;Young Ho Kim;Sung-Hoon Lim;Sang Jin Sung;Kyung-Hwa Kang;Seung-Hak Baek;Sung-Kwon Choi;Namkug Kim
    • The Korean Journal of Orthodontics / v.54 no.1 / pp.48-58 / 2024
  • Objective: To quantify the effects of midline-related landmark identification on midline deviation measurements in posteroanterior (PA) cephalograms using a cascaded convolutional neural network (CNN). Methods: A total of 2,903 PA cephalogram images obtained from 9 university hospitals were divided into training, internal validation, and test sets (n = 2,150, 376, and 377, respectively). As the gold standard, 2 orthodontic professors marked the bilateral landmarks, including the frontozygomatic suture point and latero-orbitale (LO), and the midline landmarks, including the crista galli, anterior nasal spine (ANS), upper dental midpoint (UDM), lower dental midpoint (LDM), and menton (Me). For the test, Examiner-1 and Examiner-2 (3-year and 1-year orthodontic residents) and the cascaded-CNN models marked the landmarks. After computing the point-to-point errors of landmark identification, the successful detection rate (SDR) and the distance and direction of midline landmark deviation from the midsagittal line (ANS-mid, UDM-mid, LDM-mid, and Me-mid) were measured, and statistical analysis was performed. Results: The cascaded-CNN algorithm showed a clinically acceptable level of point-to-point error (1.26 mm, vs. 1.57 mm for Examiner-1 and 1.75 mm for Examiner-2). The average SDR within the 2 mm range was 83.2%, with high accuracy at the LO (right, 96.9%; left, 97.1%) and the UDM (96.9%). The absolute measurement errors were less than 1 mm for ANS-mid, UDM-mid, and LDM-mid compared with the gold standard. Conclusions: The cascaded-CNN model may be considered an effective tool for the auto-identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients, regardless of variations in the image acquisition method.
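For readers unfamiliar with the two evaluation metrics, the sketch below shows one plausible way to compute the point-to-point error and the SDR within a 2 mm threshold; the array shapes, pixel spacing, and coordinates are illustrative, not taken from the study.

```python
# Hedged sketch: point-to-point error and successful detection rate (SDR).
import numpy as np

def point_errors_and_sdr(pred, gt, pixel_spacing_mm=0.1, threshold_mm=2.0):
    """pred, gt: (N, 2) landmark coordinates in pixels (illustrative convention).

    Returns the mean point-to-point error in mm and the fraction of landmarks
    detected within threshold_mm (the SDR).
    """
    errors_mm = np.linalg.norm((pred - gt) * pixel_spacing_mm, axis=1)
    return errors_mm.mean(), (errors_mm <= threshold_mm).mean()

pred = np.array([[1005.0, 2002.0], [1501.0, 2509.0]])  # model predictions (pixels)
gt = np.array([[1010.0, 2010.0], [1525.0, 2500.0]])    # gold-standard marks (pixels)
mean_err, sdr = point_errors_and_sdr(pred, gt)
print(f"mean error {mean_err:.2f} mm, SDR@2mm {sdr:.0%}")
```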

Prediction of Residual Axillary Nodal Metastasis Following Neoadjuvant Chemotherapy for Breast Cancer: Radiomics Analysis Based on Chest Computed Tomography

  • Hyo-jae Lee;Anh-Tien Nguyen;Myung Won Song;Jong Eun Lee;Seol Bin Park;Won Gi Jeong;Min Ho Park;Ji Shin Lee;Ilwoo Park;Hyo Soon Lim
    • Korean Journal of Radiology / v.24 no.6 / pp.498-511 / 2023
  • Objective: To evaluate the diagnostic performance of chest computed tomography (CT)-based qualitative and radiomics models for predicting residual axillary nodal metastasis after neoadjuvant chemotherapy (NAC) in patients with clinically node-positive breast cancer. Materials and Methods: This retrospective study included 226 women (mean age, 51.4 years) with clinically node-positive breast cancer treated with NAC followed by surgery between January 2015 and July 2021. Patients were randomly divided into training and test sets (4:1 ratio). The following predictive models were built: a qualitative CT feature model using logistic regression based on the pooled visual interpretations of qualitative axillary node features by three radiologists; three radiomics models using a gradient-boosting classifier on radiomics features from three different regions of interest (ROIs) (intranodal, perinodal, and combined) delineated on pre-NAC and post-NAC CT; and fusion models integrating clinicopathologic factors with the qualitative CT feature model (clinical-qualitative CT feature models) or with the combined-ROI radiomics model (clinical-radiomics models). The area under the curve (AUC) was used to assess and compare model performance. Results: Clinical N stage, biological subtype, and primary tumor response indicated by imaging were associated with residual nodal metastasis in the multivariable analysis (all P < 0.05). The AUCs of the qualitative CT feature model and the intranodal, perinodal, and combined-ROI radiomics models based on post-NAC CT were 0.642, 0.812, 0.762, and 0.832, respectively. The AUCs of the clinical-qualitative CT feature model and the clinical-radiomics model based on post-NAC CT were 0.740 and 0.866, respectively. Conclusion: CT-based predictive models showed good diagnostic performance for predicting residual nodal metastasis after NAC. Quantitative radiomics analysis may provide a higher level of performance than qualitative CT feature models. Larger multicenter studies should be conducted to confirm their performance.
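As a rough illustration of the radiomics pipeline's shape (not the authors' code), the sketch below trains a gradient-boosting classifier on synthetic "radiomics" features with the study's 4:1 split and scores it by AUC; all names and data are placeholders.

```python
# Illustrative gradient-boosting radiomics classifier on synthetic data (4:1 split).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(226, 50))     # stand-in for radiomics features from nodal ROIs
y = rng.integers(0, 2, size=226)   # stand-in residual-nodal-metastasis labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```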

Differentiating Uterine Sarcoma From Atypical Leiomyoma on Preoperative Magnetic Resonance Imaging Using Logistic Regression Classifier: Added Value of Diffusion-Weighted Imaging-Based Quantitative Parameters

  • Hokun Kim;Sung Eun Rha;Yu Ri Shin;Eu Hyun Kim;Soo Youn Park;Su-Lim Lee;Ahwon Lee;Mee-Ran Kim
    • Korean Journal of Radiology / v.25 no.1 / pp.43-54 / 2024
  • Objective: To evaluate the added value of diffusion-weighted imaging (DWI)-based quantitative parameters for distinguishing uterine sarcomas from atypical leiomyomas on preoperative magnetic resonance imaging (MRI). Materials and Methods: A total of 138 patients (age, 43.7 ± 10.3 years) with uterine sarcoma (n = 44) or atypical leiomyoma (n = 94) were retrospectively collected from four institutions. The cohort was randomly divided into training (84/138, 60.0%) and validation (54/138, 40.0%) sets. Two independent readers evaluated six qualitative MRI features and two DWI-based quantitative parameters for each index tumor. Multivariable logistic regression was used to identify the relevant qualitative MRI features. Diagnostic classifiers based on qualitative MRI features alone and in combination with the DWI-based quantitative parameters were developed using a logistic regression algorithm. The diagnostic performance of the classifiers was evaluated using cross-table analysis and calculation of the area under the receiver operating characteristic curve (AUC). Results: The mean apparent diffusion coefficient of uterine sarcomas was lower than that of atypical leiomyomas (mean ± standard deviation, 0.94 ± 0.30 × 10⁻³ mm²/s vs. 1.23 ± 0.25 × 10⁻³ mm²/s; P < 0.001), and the relative contrast ratio was higher in uterine sarcomas (8.16 ± 2.94 vs. 4.19 ± 2.66; P < 0.001). Selected qualitative MRI features included an ill-defined margin (adjusted odds ratio [aOR], 17.9; 95% confidence interval [CI], 1.41-503; P = 0.040), intratumoral hemorrhage (aOR, 27.3; 95% CI, 3.74-596; P = 0.006), and the absence of a T2 dark area (aOR, 83.5; 95% CI, 12.4-1916; P < 0.001). The classifier combining qualitative MRI features with the DWI-based quantitative parameters performed significantly better in the validation set than the classifier without them (AUC, 0.92 vs. 0.78; P < 0.001). Conclusion: Adding DWI-based quantitative parameters to qualitative MRI features improved the diagnostic performance of the logistic regression classifier in differentiating uterine sarcomas from atypical leiomyomas on preoperative MRI.
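The "added value" comparison can be mirrored in a few lines: fit one logistic-regression classifier on qualitative features alone and one on qualitative plus DWI-based features, then compare validation AUCs. Everything below (feature names, synthetic data, the 84/54 split) is illustrative only.

```python
# Sketch: logistic regression with and without DWI-based quantitative parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 138
qualitative = rng.integers(0, 2, size=(n, 3)).astype(float)  # e.g., margin, hemorrhage, T2 dark area
dwi = rng.normal(size=(n, 2))                                # e.g., mean ADC, relative contrast ratio
y = rng.integers(0, 2, size=n)                               # 1 = sarcoma, 0 = atypical leiomyoma

train, val = slice(0, 84), slice(84, None)                   # 60/40 split as in the study
for name, X in [("qualitative only", qualitative),
                ("qualitative + DWI", np.hstack([qualitative, dwi]))]:
    clf = LogisticRegression().fit(X[train], y[train])
    print(name, roc_auc_score(y[val], clf.predict_proba(X[val])[:, 1]))
```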

Deep Learning-Assisted Diagnosis of Pediatric Skull Fractures on Plain Radiographs

  • Jae Won Choi;Yeon Jin Cho;Ji Young Ha;Yun Young Lee;Seok Young Koh;June Young Seo;Young Hun Choi;Jung-Eun Cheon;Ji Hoon Phi;Injoon Kim;Jaekwang Yang;Woo Sun Kim
    • Korean Journal of Radiology / v.23 no.3 / pp.343-354 / 2022
  • Objective: To develop and evaluate a deep learning-based artificial intelligence (AI) model for detecting skull fractures on plain radiographs in children. Materials and Methods: This retrospective multicenter study consisted of a development dataset acquired from two hospitals (n = 149 and 264) and an external test set (n = 95) from a third hospital. The datasets included children with head trauma who underwent both skull radiography and cranial computed tomography (CT). The development dataset was split into training, tuning, and internal test sets in a 7:1:2 ratio. Cranial CT served as the reference standard for skull fracture. Two radiology residents, a pediatric radiologist, and two emergency physicians participated in a two-session observer study on the external test set, with and without AI assistance. We obtained the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity along with their 95% confidence intervals (CIs). Results: The AI model showed an AUROC of 0.922 (95% CI, 0.842-0.969) on the internal test set and 0.870 (95% CI, 0.785-0.930) on the external test set. The model had a sensitivity of 81.1% (95% CI, 64.8%-92.0%) and specificity of 91.3% (95% CI, 79.2%-97.6%) on the internal test set, and 78.9% (95% CI, 54.4%-93.9%) and 88.2% (95% CI, 78.7%-94.4%), respectively, on the external test set. With the model's assistance, significant AUROC improvements were observed for the radiology residents (pooled) and emergency physicians (pooled), with differences from unassisted reading of 0.094 (95% CI, 0.020-0.168; p = 0.012) and 0.069 (95% CI, 0.002-0.136; p = 0.043), respectively, but not for the pediatric radiologist (difference, 0.008; 95% CI, -0.074 to 0.090; p = 0.850). Conclusion: A deep learning-based AI model improved the performance of inexperienced radiologists and emergency physicians in diagnosing pediatric skull fractures on plain radiographs.
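The abstract does not state how its confidence intervals were derived; a common approach for AUROC CIs is case-level bootstrapping, sketched below with placeholder data.

```python
# Hedged sketch: bootstrap 95% CI for AUROC (one common method; assumed, not confirmed).
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:      # skip resamples with one class only
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])      # placeholder labels
y_score = np.array([0.2, 0.7, 0.6, 0.4, 0.9, 0.3, 0.8, 0.5])
print(auroc_with_ci(y_true, y_score))
```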

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry / v.54 no.1 / pp.81-91 / 2024
  • Purpose: The objective of this study was to propose a deep-learning model for detecting the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For the deep learning analysis, convolutional neural networks (CNNs) based on the U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups (see the sketch below), and were then assessed on a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained on all 3 groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained on the 2-group combinations also demonstrated reliable performance for mandibular canal segmentation: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. It also highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on dataset size.
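The "all possible combinations of the 3 groups" that yield the 7 networks can be enumerated directly, as in this small sketch (group names mirror the abstract):

```python
# Enumerating the 7 possible training-set combinations of the 3 device groups.
from itertools import combinations

groups = ["PAN A", "PAN B", "PAN C"]
training_sets = [list(c) for r in range(1, len(groups) + 1)
                 for c in combinations(groups, r)]
for ts in training_sets:
    print(ts)
# 3 single-group + 3 two-group + 1 three-group = 7 independent networks
```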

Analysis of Semantic Attributes of Korean Words for Sound Quality Evaluation in Music Listening (음악감상에서의 음질 평가를 위한 한국어 어휘의 의미론적 속성 분석)

  • Lee, Eun Young;Yoo, Ga Eul;Lee, Youngmee
    • Journal of Music and Human Behavior / v.21 no.2 / pp.107-134 / 2024
  • This study aims to classify the semantic words commonly used to evaluate sound quality and to analyze how their ratings reflect the level of musical stimuli. Participants were thirty-one music majors in their 20s and 30s, with an average of 9.4 years of professional training. Each participant listened to nine pieces of music varying in texture and instrument type and evaluated them using 18 pairs of semantic words describing sound quality. A factor analysis was conducted to group words influenced by the same latent factor, and a multivariate ANOVA determined the differences in ratings based on texture and instrument type. Radar charts were also drawn from the identified sets of semantic words. Four factors were identified, and the word pairs 'soft-hard,' 'dull-sharp,' 'muddy-clean,' and 'low-high' showed significant differences based on the level of musical stimuli. The radar charts effectively distinguished the sound quality evaluations for each piece of music. These results indicate that developing Korean semantic words for sound quality evaluation requires a structure different from the categories previously used in Western countries, and that linguistic and cultural factors are crucial. This study provides foundational data for developing a verbal sound quality evaluation framework suited to the Korean context while reflecting the acoustic attributes of music listening.
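For illustration only, the sketch below runs a four-factor analysis over a synthetic ratings matrix shaped like the study's design (31 listeners × 9 excerpts, 18 semantic-differential scales) and assigns each word pair to its dominant factor; the varimax rotation is an assumption, as the abstract does not name the rotation method.

```python
# Illustrative factor analysis of semantic-differential ratings (synthetic data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
ratings = rng.normal(size=(31 * 9, 18))  # 31 listeners x 9 excerpts, 18 word-pair scales

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(ratings)
loadings = fa.components_.T              # (18 scales, 4 factors)

# Assign each word pair to the factor on which it loads most strongly.
dominant_factor = np.abs(loadings).argmax(axis=1)
print(dominant_factor)
```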

Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung;Kim, Soo-Wan;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.518-528 / 2014
  • In this paper, we propose a method to recognize the direction of human actions by developing 4D space-time (4D-ST, [x,y,z,t]) features. To this end, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted from 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed from volumetric information, features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. Because the training sets, which are projections of the 3D-S volumes and 4D-STIPs onto various image planes, contain direction information, the directions of actors in a test video can be recognized. The recognition process is divided into two steps: first the action class is recognized, and then the action direction is recognized using the direction information. For both steps, we construct motion history images (MHIs) and non-motion history images (NMHIs) from the projected 3D-S volumes and 4D-STIPs; these encode the moving and non-moving parts of an action, respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action direction recognition, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and recognize action direction on a new SNU dataset constructed for evaluating action direction recognition.
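As a concrete aid, here is a minimal NumPy sketch of the motion history image update described above; the decay rule and parameter values are illustrative, and an NMHI would analogously accumulate the pixels that do not move.

```python
# Hedged sketch: motion history image (MHI) update from consecutive frame differences.
import numpy as np

def update_mhi(mhi, frame_diff, timestamp, duration=1.0, threshold=30.0):
    """mhi: float array of per-pixel timestamps; frame_diff: absolute difference
    of consecutive grayscale frames. Values here are illustrative defaults."""
    moving = frame_diff > threshold
    mhi = np.where(moving, timestamp, mhi)   # stamp currently moving pixels
    mhi[mhi < timestamp - duration] = 0.0    # let motion older than `duration` decay
    return mhi

mhi = np.zeros((240, 320))
prev, cur = np.random.rand(240, 320) * 255, np.random.rand(240, 320) * 255
mhi = update_mhi(mhi, np.abs(cur - prev), timestamp=1.0)
```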