• Title/Summary/Keyword: datasets


Deep Learning Algorithm for Automated Segmentation and Volume Measurement of the Liver and Spleen Using Portal Venous Phase Computed Tomography Images

  • Yura Ahn;Jee Seok Yoon;Seung Soo Lee;Heung-Il Suk;Jung Hee Son;Yu Sub Sung;Yedaun Lee;Bo-Kyeong Kang;Ho Sung Kim
    • Korean Journal of Radiology / v.21 no.8 / pp.987-997 / 2020
  • Objective: Measurement of the liver and spleen volumes has clinical implications. Although computed tomography (CT) volumetry is considered the most reliable noninvasive method for liver and spleen volume measurement, it has limited application in clinical practice due to its time-consuming segmentation process. We aimed to develop and validate a deep learning algorithm (DLA) for fully automated liver and spleen segmentation using portal venous phase CT images in various liver conditions. Materials and Methods: A DLA for liver and spleen segmentation was trained using a development dataset of portal venous CT images from 813 patients. Performance of the DLA was evaluated in two separate test datasets: dataset-1, which included 150 CT examinations in patients with various liver conditions (i.e., healthy liver, fatty liver, chronic liver disease, cirrhosis, and post-hepatectomy), and dataset-2, which included 50 pairs of CT examinations performed at our institution and at other institutions. The performance of the DLA was evaluated using the Dice similarity score (DSS) for segmentation and the Bland-Altman 95% limits of agreement (LOA) for measurement of the volumetric indices, with ground-truth manual segmentation as the reference. Results: In test dataset-1, the DLA achieved a mean DSS of 0.973 and 0.974 for liver and spleen segmentation, respectively, with no significant difference in DSS across different liver conditions (p = 0.60 and 0.26 for the liver and spleen, respectively). For the measurement of volumetric indices, the Bland-Altman 95% LOA was -0.17 ± 3.07% for liver volume and -0.56 ± 3.78% for spleen volume. In test dataset-2, DLA performance using CT images obtained at outside institutions and our institution was comparable for liver (DSS, 0.982 vs. 0.983; p = 0.28) and spleen (DSS, 0.969 vs. 0.968; p = 0.41) segmentation. Conclusion: The DLA enabled highly accurate segmentation and volume measurement of the liver and spleen using portal venous phase CT images of patients with various liver conditions.
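
Both evaluation metrics used above are straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: it assumes binary segmentation masks and paired volume measurements as NumPy arrays, and expresses the Bland-Altman limits of agreement as percentage differences relative to the mean of the two measurements (one common convention; the paper's exact definition may differ).

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity score between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def bland_altman_loa(auto_vol: np.ndarray, manual_vol: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement, as percentage differences
    relative to the mean of the automated and manual measurements."""
    diff_pct = 100.0 * (auto_vol - manual_vol) / ((auto_vol + manual_vol) / 2.0)
    bias = diff_pct.mean()
    loa = 1.96 * diff_pct.std(ddof=1)
    return bias, loa  # reported as bias ± LOA, e.g. -0.17 ± 3.07%

# Toy example with random masks and volumes (illustrative only).
rng = np.random.default_rng(0)
mask_a = rng.random((64, 64, 64)) > 0.5
mask_b = mask_a.copy()
print(round(dice_score(mask_a, mask_b), 3))       # 1.0 for identical masks
auto = rng.normal(1500, 300, 50)                  # hypothetical liver volumes (mL)
manual = auto * rng.normal(1.0, 0.015, 50)
print(bland_altman_loa(auto, manual))
```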

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial SCIence / v.3 no.1 / pp.23-29 / 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.

A New Correction Method for Ship's Viscous Magnetization Effect on Shipboard Three-component Magnetic Data Using a Total Field Magnetometer (총자력계를 이용한 선상 삼성분 자기 데이터의 선박 점성 자화 효과에 대한 새로운 보정 방법 연구)

  • Hanjin Choe;Nobukazu Seama
    • Geophysics and Geophysical Exploration / v.27 no.2 / pp.119-128 / 2024
  • Marine magnetic surveys provide a rapid and cost-effective method for reconnaissance geophysical surveys for many purposes. Sea-surface magnetometers offer high accuracy but are limited to measuring the scalar total magnetic field and require dedicated cruise missions. Shipboard three-component magnetometers, on the other hand, can collect all three vector components and are applicable to any cruise mission. However, correcting for the ship's magnetic field, particularly its viscous magnetization, remains a challenge. This study proposes a new additional correction method for the ship's viscous magnetization effect in vector data acquired by a shipboard three-component magnetometer. The method utilizes magnetic data collected simultaneously with a sea-surface magnetometer providing total magnetic field measurements. Our method significantly reduces deviations between the two datasets, resulting in corrected vector anomalies with errors as low as 7-25 nT. These small residual errors are possibly caused by the vector magnetic anomaly and its related viscous magnetization. The method is expected to significantly improve the accuracy of shipborne magnetic surveys by providing corrected vector components, which will enhance magnetic interpretations and may be useful for understanding plate tectonics, geological structures, hydrothermal deposits, and more.
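
The abstract does not spell out the correction formula, but the core idea of tying the shipboard three-component data to the simultaneously acquired scalar total field can be illustrated with a simple least-squares fit. The sketch below is a rough stand-in rather than the authors' method: it assumes a constant residual ship-field vector in the ship frame (a simplification of the permanent, induced, and viscous terms treated in the paper) and estimates it so that the magnitude of the corrected vector data matches the total-field magnetometer readings.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_residual_ship_field(vec_xyz, total_field):
    """Estimate a constant ship-field vector b (ship frame) minimizing
    | |vec_xyz + b| - total_field | over all samples."""
    def residuals(b):
        return np.linalg.norm(vec_xyz + b, axis=1) - total_field
    return least_squares(residuals, x0=np.zeros(3)).x

# Synthetic track (values in nT, purely illustrative).
rng = np.random.default_rng(1)
earth = np.array([30000.0, 0.0, 40000.0])              # ambient field in Earth frame
headings = rng.uniform(0.0, 2.0 * np.pi, 200)          # ship heading varies along the track
cos_h, sin_h = np.cos(headings), np.sin(headings)
ambient = np.stack([cos_h * earth[0] + sin_h * earth[1],
                    -sin_h * earth[0] + cos_h * earth[1],
                    np.full_like(headings, earth[2])], axis=1)

ship_field = np.array([120.0, -80.0, 40.0])            # unknown constant ship contribution
measured_vec = ambient + ship_field                    # shipboard three-component data
measured_total = np.linalg.norm(ambient, axis=1)       # sea-surface total-field data

b_hat = fit_residual_ship_field(measured_vec, measured_total)
corrected = measured_vec + b_hat                       # b_hat ≈ -ship_field
residual_nT = np.abs(np.linalg.norm(corrected, axis=1) - measured_total)
print(b_hat, residual_nT.max())                        # deviation between the two datasets shrinks
```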

Real-World Application of Artificial Intelligence for Detecting Pathologic Gastric Atypia and Neoplastic Lesions

  • Young Hoon Chang;Cheol Min Shin;Hae Dong Lee;Jinbae Park;Jiwoon Jeon;Soo-Jeong Cho;Seung Joo Kang;Jae-Yong Chung;Yu Kyung Jun;Yonghoon Choi;Hyuk Yoon;Young Soo Park;Nayoung Kim;Dong Ho Lee
    • Journal of Gastric Cancer / v.24 no.3 / pp.327-340 / 2024
  • Purpose: Results of initial endoscopic biopsy of gastric lesions often differ from those of the final pathological diagnosis. We evaluated whether an artificial intelligence-based gastric lesion detection and diagnostic system, ENdoscopy as AI-powered Device Computer Aided Diagnosis for Gastroscopy (ENAD CAD-G), could reduce this discrepancy. Materials and Methods: We retrospectively collected 24,948 endoscopic images of early gastric cancers (EGCs), dysplasia, and benign lesions from 9,892 patients who underwent esophagogastroduodenoscopy between 2011 and 2021. The diagnostic performance of ENAD CAD-G was evaluated using the following real-world datasets: patients referred from community clinics with initial biopsy results of atypia (n=154), participants who underwent endoscopic resection for neoplasms (Internal video set, n=140), and participants who underwent endoscopy for screening or suspicion of gastric neoplasm referred from community clinics (External video set, n=296). Results: ENAD CAD-G classified the referred gastric lesions of atypia into EGC (accuracy, 82.47%; 95% confidence interval [CI], 76.46%-88.47%), dysplasia (88.31%; 83.24%-93.39%), and benign lesions (83.12%; 77.20%-89.03%). In the Internal video set, ENAD CAD-G identified dysplasia and EGC with diagnostic accuracies of 88.57% (95% CI, 83.30%-93.84%) and 91.43% (86.79%-96.07%), respectively, compared with an accuracy of 60.71% (52.62%-68.80%) for the initial biopsy results (P<0.001). In the External video set, ENAD CAD-G classified EGC, dysplasia, and benign lesions with diagnostic accuracies of 87.50% (83.73%-91.27%), 90.54% (87.21%-93.87%), and 88.85% (85.27%-92.44%), respectively. Conclusions: ENAD CAD-G is superior to initial biopsy for the detection and diagnosis of gastric lesions that require endoscopic resection. ENAD CAD-G can assist community endoscopists in identifying gastric lesions that require endoscopic resection.
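
The 95% confidence intervals quoted for each accuracy appear consistent with the usual normal (Wald) approximation for a proportion; the short check below (with a hypothetical 127/154 correct split, chosen only because it matches the reported 82.47%) reproduces the interval for the referred-atypia set.

```python
import math

def accuracy_ci(correct: int, total: int, z: float = 1.96):
    """Point estimate and Wald 95% confidence interval for a classification accuracy."""
    p = correct / total
    half_width = z * math.sqrt(p * (1.0 - p) / total)
    return p, p - half_width, p + half_width

# 127 of 154 correct EGC calls (hypothetical split matching the abstract's 82.47%).
p, lo, hi = accuracy_ci(127, 154)
print(f"{p:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # ≈ 82.47% (76.46%-88.47%)
```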

Analyzing the Co-occurrence of Endangered Brackish-Water Snails with Other Species in Ecosystems Using Association Rule Learning and Clustering Analysis (연관 규칙 학습과 군집분석을 활용한 멸종위기 기수갈고둥과 생태계 내 종 간 연관성 분석)

  • Sung-Ho Lim;Yuno Do
    • Korean Journal of Ecology and Environment / v.57 no.2 / pp.83-91 / 2024
  • This study utilizes association rule learning and clustering analysis to explore the co-occurrence and relationships within ecosystems, focusing on the endangered brackish-water snail Clithon retropictum, classified as Class II endangered wildlife in Korea. The goal is to analyze co-occurrence patterns between brackish-water snails and other species to better understand their roles within the ecosystem. By examining co-occurrence patterns and relationships among species in large datasets, association rule learning aids in identifying significant relationships. Meanwhile, K-means and hierarchical clustering analyses are employed to assess ecological similarities and differences among species, facilitating their classification based on ecological characteristics. The findings reveal a significant level of relationship and co-occurrence between brackish-water snails and other species. This research underscores the importance of understanding these relationships for the conservation of endangered species like C. retropictum and for developing effective ecosystem management strategies. By emphasizing the role of a data-driven approach, this study contributes to advancing our knowledge on biodiversity conservation and ecosystem health, proposing new directions for future research in ecosystem management and conservation strategies.
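
As a rough illustration of the pipeline described above, the sketch below mines co-occurrence rules with Apriori and then clusters species by their site-occurrence profiles with K-means and hierarchical clustering. The species names and presence/absence table are invented, and mlxtend plus scikit-learn/SciPy are one plausible toolchain, not necessarily the authors'.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical presence/absence matrix: rows are survey sites, columns are species.
sites = pd.DataFrame(
    [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 1], [1, 1, 0, 1]],
    columns=["C_retropictum", "crab_A", "goby_B", "snail_C"],
).astype(bool)

# Association rule learning: frequent itemsets and co-occurrence rules.
itemsets = apriori(sites, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=1.0)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])

# Cluster species by their site-occurrence profiles (columns become feature vectors).
profiles = sites.T.astype(int).values
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
hier_labels = fcluster(linkage(profiles, method="average"), t=2, criterion="maxclust")
print(dict(zip(sites.columns, kmeans_labels)), dict(zip(sites.columns, hier_labels)))
```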

A Study on Generation Quality Comparison of Concrete Damage Image Using Stable Diffusion Base Models (Stable diffusion의 기저 모델에 따른 콘크리트 손상 영상의 생성 품질 비교 연구)

  • Seung-Bo Shim
    • Journal of the Korea institute for structural maintenance and inspection / v.28 no.4 / pp.55-61 / 2024
  • Recently, the number of aging concrete structures has been steadily increasing, as many of these structures are reaching their expected lifespan. Such structures require accurate inspection and persistent maintenance; otherwise, their original functions and performance may degrade, potentially leading to safety accidents. Therefore, research on objective inspection technologies using deep learning and computer vision is actively being conducted. High-resolution images allow accurate observation of not only microcracks but also spalling and exposed rebar, and deep learning enables automated detection. However, high detection performance is only achievable with diverse and numerous training datasets, and surface damage to concrete is not commonly captured in images, resulting in a lack of training data. To overcome this limitation, this study proposed a method for generating concrete surface damage images, including cracks, spalling, and exposed rebar, using Stable Diffusion. The method synthesizes new damage images from paired text and image data. For this purpose, a training dataset of 678 images was secured, and fine-tuning was performed through low-rank adaptation. The quality of the generated images was compared across three Stable Diffusion base models. As a result, a method for synthesizing the most diverse and high-quality concrete damage images was developed. This research is expected to address the issue of data scarcity and contribute to improving the accuracy of deep learning-based damage detection algorithms in the future.
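
For readers unfamiliar with the workflow, the snippet below sketches how a LoRA adapter fine-tuned on concrete-damage images might be applied on top of a Stable Diffusion base model with the Hugging Face diffusers library. The checkpoint name and LoRA path are placeholders, and the fine-tuning step itself (low-rank adaptation on the 678 paired text-image samples) is omitted.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model choice matters: the study compares three Stable Diffusion base models.
base_model = "runwayml/stable-diffusion-v1-5"   # placeholder base checkpoint
lora_dir = "./lora-concrete-damage"             # placeholder fine-tuned LoRA weights

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.load_lora_weights(lora_dir)                # inject the low-rank adaptation weights
pipe = pipe.to("cuda")

# Text prompt paired with the damage types to synthesize.
prompt = "concrete surface with cracks, spalling, and exposed rebar, close-up photo"
images = pipe(prompt, num_inference_steps=30, guidance_scale=7.5,
              num_images_per_prompt=4).images
for i, img in enumerate(images):
    img.save(f"synthetic_damage_{i}.png")
```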

UCHL1 Overexpression Is Related to the Aggressive Phenotype of Non-small Cell Lung Cancer

  • Chi Young Kim;Eun Hye Lee;Se Hyun Kwak;Sang Hoon Lee;Eun Young Kim;Min Kyoung Park;Yoon Jin Cha;Yoon Soo Chang
    • Tuberculosis and Respiratory Diseases / v.87 no.4 / pp.494-504 / 2024
  • Background: Ubiquitin C-terminal hydrolase L1 (UCHL1), which encodes a thiol protease that hydrolyzes a peptide bond at the C-terminal glycine residue of ubiquitin, regulates cell differentiation, proliferation, transcriptional regulation, and numerous other biological processes and may be involved in lung cancer progression. UCHL1 is mainly expressed in the brain and plays a tumor-promoting role in a few cancer types; however, there are limited reports regarding its role in lung cancer. Methods: Single-cell RNA (scRNA) sequencing using 10X Chromium v3 was performed on paired normal-appearing and tumor tissue from the surgical specimens of a patient who showed unusually rapid progression. To validate the clinical implications of the identified biomarkers, immunohistochemical (IHC) analysis was performed on 48 non-small cell lung cancer (NSCLC) tissue specimens, and the correlation with clinical parameters was evaluated. Results: We identified 500 genes overexpressed in tumor tissue compared with normal tissue. Among them, UCHL1, brain expressed X-linked 3 (BEX3), and midkine (MDK), which are associated with tumor growth and progression, exhibited a 1.5-fold increase in expression compared with normal tissue. IHC analysis of NSCLC tissues showed that only UCHL1 was specifically overexpressed. Additionally, in 48 NSCLC specimens, UCHL1 was specifically upregulated in the cytoplasm and nuclear membrane of tumor cells. Multivariable logistic analysis identified several factors, including smoking, tumor size, and high-grade dysplasia, as being associated with UCHL1 overexpression. Survival analyses using The Cancer Genome Atlas (TCGA) datasets revealed that UCHL1 overexpression is substantially associated with poor survival outcomes. Furthermore, a strong association was observed between UCHL1 expression and the clinicopathological features of patients with NSCLC. Conclusion: UCHL1 overexpression was associated with smoking, tumor size, and high-grade dysplasia, which are typically associated with a poor prognosis and survival outcome. These findings suggest that UCHL1 may serve as an effective biomarker of NSCLC.
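
The TCGA survival comparison mentioned above is the kind of analysis that is easy to sketch once expression and follow-up data are available. The example below uses hypothetical column names and the lifelines library as one possible tool (not necessarily the authors' pipeline), stratifying patients by median UCHL1 expression and comparing the survival curves with a log-rank test.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical TCGA-style table: one row per patient (illustrative values).
df = pd.DataFrame({
    "UCHL1_expr": [2.1, 8.4, 0.7, 9.9, 5.5, 1.2, 7.3, 3.0],
    "os_months":  [60, 14, 72, 9, 30, 55, 18, 47],   # overall survival time
    "event":      [0, 1, 0, 1, 1, 0, 1, 0],          # 1 = death observed
})
high = df["UCHL1_expr"] >= df["UCHL1_expr"].median()

kmf = KaplanMeierFitter()
for label, group in [("UCHL1 high", df[high]), ("UCHL1 low", df[~high])]:
    kmf.fit(group["os_months"], event_observed=group["event"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

result = logrank_test(df[high]["os_months"], df[~high]["os_months"],
                      event_observed_A=df[high]["event"],
                      event_observed_B=df[~high]["event"])
print("log-rank p-value:", result.p_value)
```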

A Semi-Automated Labeling-Based Data Collection Platform for Golf Swing Analysis

  • Hyojun Lee;Soyeong Park;Yebon Kim;Daehoon Son;Yohan Ko;Yun-hwan Lee;Yeong-hun Kwon;Jong-bae Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.11-21 / 2024
  • This study explores the use of virtual reality (VR) technology to identify and label key segments of the golf swing. To address the limitations of existing VR devices, we developed a platform to collect kinematic data from various VR devices using the OpenVR SDK (Software Development Kit) and SteamVR, and developed a semi-automated labeling technique that identifies and labels temporal changes in kinematic behavior through LSTM (Long Short-Term Memory)-based time series data analysis. The experiment involved 80 participants, 20 from each of four age groups (teenage, young-adult, middle-aged, and elderly); data from five swings per participant were collected to build a total of 400 kinematic datasets. The proposed technique achieved consistently high accuracy (≥0.94) and F1 score (≥0.95) across all age groups for the seven main phases of the golf swing. This work aims to lay the groundwork for segmenting exercise data and precisely assessing athletic performance on a segment-by-segment basis, thereby providing personalized feedback to individual users in future education and training.
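
A per-frame LSTM classifier of the kind described, predicting one of the seven swing phases for every time step of the tracker stream, can be sketched as follows. The feature count, tensor shapes, and architecture details are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class SwingPhaseLSTM(nn.Module):
    """LSTM that labels every time step of a kinematic sequence with one of 7 swing phases."""
    def __init__(self, n_features: int = 21, hidden: int = 64, n_phases: int = 7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_phases)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)                              # (batch, time, n_phases) logits

# Toy batch: 8 swings, 300 frames each, 21 features (e.g., 3 trackers x position/rotation).
x = torch.randn(8, 300, 21)
y = torch.randint(0, 7, (8, 300))                          # per-frame phase labels
model = SwingPhaseLSTM()
logits = model(x)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 7), y.reshape(-1))
loss.backward()
print(logits.shape, float(loss))
```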

Object Detection Performance Analysis between On-GPU and On-Board Analysis for Military Domain Images

  • Du-Hwan Hur;Dae-Hyeon Park;Deok-Woong Kim;Jae-Yong Baek;Jun-Hyeong Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.157-164 / 2024
  • In this paper, we discuss the feasibility of deploying a deep learning-based detector on a resource-limited board. Although many studies evaluate detectors on machines with high-performance GPUs, evaluation on boards with limited computational resources is still insufficient. Therefore, in this work, we implement deep learning detectors and deploy them on a compact board by parsing and optimizing the detector. To characterize the performance of deep learning-based detectors under limited resources, we monitor several detectors on different H/W resources. On the COCO detection dataset, we compare and analyze the evaluation results of the on-board and on-GPU detection models in terms of several metrics, including mAP, power consumption, and execution speed (FPS). To demonstrate the effect of applying our detector in the military domain, we also evaluate it on our own dataset of thermal images reflecting flight battle scenarios. As a result, we investigate the strengths of the deep learning-based on-board detector and show that deep learning-based vision models can contribute to flight battle scenarios.
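
One of the metrics compared above, execution speed (FPS), can be measured with a simple timing loop that runs identically on a GPU workstation and on a compact board. The sketch below uses a stock torchvision detector as a stand-in for the paper's models; mAP and power consumption require separate tooling.

```python
import time
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

device = "cuda" if torch.cuda.is_available() else "cpu"    # GPU workstation vs. compact board
model = fasterrcnn_resnet50_fpn(weights=None).eval().to(device)  # stand-in detector
dummy = [torch.rand(3, 512, 640, device=device)]            # one input frame

@torch.no_grad()
def benchmark(n_warmup: int = 5, n_runs: int = 20) -> float:
    for _ in range(n_warmup):                               # warm-up stabilizes clocks/caches
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
    return n_runs / (time.perf_counter() - start)            # frames per second

print(f"{device}: {benchmark():.1f} FPS")
```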

Comparing the Performance of a Deep Learning Model (TabPFN) for Predicting River Algal Blooms with Varying Data Composition (데이터 구성에 따른 하천 조류 예측 딥러닝 모형 (TabPFN) 성능 비교)

  • Hyunseok Yang;Jungsu Park
    • Journal of Wetlands Research / v.26 no.3 / pp.197-203 / 2024
  • Algal blooms in rivers can negatively affect water source management and water treatment processes, necessitating continuous management. In this study, a multi-classification model was developed to predict the concentration of chlorophyll-a (chl-a), one of the key indicators of algal blooms, using the Tabular Prior-data Fitted Network (TabPFN), a novel deep learning algorithm known for its relatively superior performance on small tabular datasets. The model was developed using daily observation data collected at the Buyeo water quality monitoring station from January 1, 2014, to December 31, 2022. The collected data were averaged to construct input datasets with measurement frequencies of 1, 3, 6, and 12 days. The performance comparison of the four models, each constructed with input data at one of these observation frequencies, showed that the model exhibits stable performance even when the measurement interval is longer and the number of observations is smaller. The macro averages for each model were as follows: precision was 0.77, 0.76, 0.83, and 0.84; recall was 0.63, 0.65, 0.66, and 0.74; and the F1-score was 0.67, 0.69, 0.71, and 0.78. For the weighted averages, precision was 0.76, 0.77, 0.81, and 0.84; recall was 0.76, 0.78, 0.81, and 0.85; and the F1-score was 0.74, 0.77, 0.80, and 0.84. This study demonstrates that the chl-a prediction model constructed using TabPFN exhibits stable performance even with small-scale input data, verifying the feasibility of its application in fields where the input data required for model construction are limited.
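
The macro and weighted averages quoted above are the standard aggregates produced by scikit-learn's classification report. The sketch below shows how a TabPFN multi-class model would typically be fit and scored that way, using synthetic features and chl-a class labels for illustration and assuming the `tabpfn` package's sklearn-style interface.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from tabpfn import TabPFNClassifier

# Synthetic tabular data standing in for daily water-quality observations:
# a few numeric features and a multi-class chl-a concentration level (0-3).
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))
y = np.digitize(X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 600), [-1.0, 0.0, 1.0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = TabPFNClassifier()        # pretrained prior-fitted network; no task-specific tuning
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Per-class precision/recall/F1 plus the macro and weighted averages reported in the study.
print(classification_report(y_test, y_pred, digits=2))
```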