• Title/Summary/Keyword: deep similarity

226 search results

Deep Learning-based Spine Segmentation Technique Using the Center Point of the Spine and Modified U-Net (척추의 중심점과 Modified U-Net을 활용한 딥러닝 기반 척추 자동 분할)

  • Sungjoo Lim;Hwiyoung Kim
    • Journal of Biomedical Engineering Research / v.44 no.2 / pp.139-146 / 2023
  • Osteoporosis is a disease in which the risk of bone fractures increases due to a decrease in bone density caused by aging. Osteoporosis is diagnosed by measuring bone density in the total hip, femoral neck, and lumbar spine. To accurately measure bone density in the lumbar spine, the vertebral region must be segmented from the lumbar X-ray image. Deep learning-based automatic spinal segmentation methods can provide fast and precise information about the vertebral region. In this study, we used 695 lumbar spine images as training and test datasets for a deep learning segmentation model. We proposed a lumbar automatic segmentation model, CM-Net, which combines the center point of the spine and the modified U-Net network. As a result, the average Dice Similarity Coefficient (DSC) was 0.974, precision was 0.916, recall was 0.906, accuracy was 0.998, and Area under the Precision-Recall Curve (AUPRC) was 0.912. This study demonstrates a high-performance automatic segmentation model for lumbar X-ray images, which overcomes noise such as spinal fractures and implants. Furthermore, we can perform accurate measurement of bone density on lumbar X-ray images using an automatic segmentation methodology for the spine, which can help identify the risk of compression fractures at an early stage and improve the accuracy and efficiency of osteoporosis diagnosis.
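
The Dice Similarity Coefficient reported in this and several of the abstracts below measures the overlap between a predicted segmentation mask and the ground truth. A minimal sketch of the metric on toy binary masks (the masks here are illustrative, not the papers' data or code):

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0  # two empty masks: perfect overlap

# Toy flattened 1-D masks standing in for segmentation outputs.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # 2*2 / (3+3) = 0.667
```

A DSC of 1.0 means perfect overlap; values like the 0.974 above indicate near-complete agreement with the expert masks.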

Multi-task Deep Neural Network Model for T1CE Image Synthesis and Tumor Region Segmentation in Glioblastoma Patients (교모세포종 환자의 T1CE 영상 생성 및 암 영역분할을 위한 멀티 태스크 심층신경망 모델)

  • Kim, Eunjin;Park, Hyunjin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.474-476 / 2021
  • Glioblastoma is the most common brain malignancy arising from glial cells. Early diagnosis and treatment planning are important, and the cancer is diagnosed mainly through T1CE imaging after injection of a contrast agent. However, concerns about the risks of gadolinium-based contrast agent injection have been growing recently. Region segmentation, which marks cancer regions in medical images, plays a key role in computer-aided diagnosis (CAD) systems, and deep neural network models for synthesizing new images are also being studied. In this study, we propose a model that simultaneously learns the generation of T1CE images and segmentation of cancer regions. The performance of the proposed model is evaluated using similarity measurements including mean squared error and peak signal-to-noise ratio, yielding average values of 21 and 39 dB, respectively.
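
The two image-similarity measures cited above are directly related: PSNR = 10·log₁₀(MAX²/MSE), where MAX is the peak pixel value. A hedged sketch assuming 8-bit images and made-up pixel values (not the paper's data):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * math.log10(max_val ** 2 / err)

real      = [52, 55, 61, 59]   # toy pixel intensities of a real T1CE patch
synthetic = [54, 55, 60, 61]   # toy pixel intensities of a synthesized patch
print(round(mse(real, synthetic), 2), round(psnr(real, synthetic), 2))
```

A PSNR around 39 dB, as reported above, corresponds to a small MSE relative to the 0-255 intensity range.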

A Comprehensive Analysis of Deformable Image Registration Methods for CT Imaging

  • Kang Houn Lee;Young Nam Kang
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.303-314 / 2023
  • This study aimed to assess the practical feasibility of advanced deformable image registration (DIR) algorithms in radiotherapy by employing two distinct datasets. The first dataset included 14 4D lung CT scans and 31 head and neck CT scans. In the 4D lung CT dataset, we employed the DIR algorithm to register organs at risk and tumors based on respiratory phases. The second dataset comprised pre-, mid-, and post-treatment CT images of the head and neck region, along with organ at risk and tumor delineations. These images underwent registration using the DIR algorithm, and Dice similarity coefficients (DSCs) were compared. In the 4D lung CT dataset, registration accuracy was evaluated for the spinal cord, lung, lung nodules, esophagus, and tumors. The average DSCs for the non-learning-based SyN and NiftyReg algorithms were 0.92±0.07 and 0.88±0.09, respectively. Deep learning methods, namely Voxelmorph, Cyclemorph, and Transmorph, achieved average DSCs of 0.90±0.07, 0.91±0.04, and 0.89±0.05, respectively. For the head and neck CT dataset, the average DSCs for SyN and NiftyReg were 0.82±0.04 and 0.79±0.05, respectively, while Voxelmorph, Cyclemorph, and Transmorph showed average DSCs of 0.80±0.08, 0.78±0.11, and 0.78±0.09, respectively. Additionally, the deep learning DIR algorithms demonstrated faster transformation times compared to other models, including commercial and conventional mathematical algorithms (Voxelmorph: 0.36 sec/image, Cyclemorph: 0.3 sec/image, Transmorph: 5.1 sec/image, SyN: 140 sec/image, NiftyReg: 40.2 sec/image). In conclusion, this study highlights the varying clinical applicability of deep learning-based DIR methods in different anatomical regions. While challenges were encountered in head and neck CT registrations, 4D lung CT registrations exhibited favorable results, indicating the potential for clinical implementation. Further research and development in DIR algorithms tailored to specific anatomical regions are warranted to improve the overall clinical utility of these methods.

Enhanced Lung Cancer Segmentation with Deep Supervision and Hybrid Lesion Focal Loss in Chest CT Images (흉부 CT 영상에서 심층 감독 및 하이브리드 병변 초점 손실 함수를 활용한 폐암 분할 개선)

  • Min Jin Lee;Yoon-Seon Oh;Helen Hong
    • Journal of the Korea Computer Graphics Society / v.30 no.1 / pp.11-17 / 2024
  • Lung cancer segmentation in chest CT images is challenging due to the varying sizes of tumors and the presence of surrounding structures with similar intensity values. To address these issues, we propose a lung cancer segmentation network that incorporates deep supervision and utilizes UNet3+ as the backbone. Additionally, we propose a hybrid lesion focal loss function comprising three components: pixel-based, region-based, and shape-based, which allows us to focus on the smaller tumor regions relative to the background and consider shape information for handling ambiguous boundaries. We validate our proposed method through comparative experiments with UNet and UNet3+ and demonstrate that our proposed method achieves superior performance in terms of Dice Similarity Coefficient (DSC) for tumors of all sizes.
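
The pixel-based component of a hybrid lesion focal loss is typically a focal loss, which down-weights easy background pixels so that small tumor regions dominate the gradient. A minimal sketch of binary focal loss (the α and γ defaults below are common illustrative choices, not necessarily the paper's settings, and the region- and shape-based terms are omitted):

```python
import math

def focal_loss(probs, labels, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t), averaged over pixels."""
    total = 0.0
    for p, y in zip(probs, labels):
        p_t = p if y == 1 else 1.0 - p          # predicted probability of the true class
        a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
        total += -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, eps))
    return total / len(probs)

# A confidently correct pixel contributes far less loss than a badly missed tumor pixel.
print(focal_loss([0.9], [1]) < focal_loss([0.3], [1]))  # True
```

The (1 - p_t)^γ factor is what keeps the many easy background pixels from drowning out the small tumor regions the abstract describes.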

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.311-326 / 2024
  • The rapid development of neural network technology allows big-data-driven neural network models to reproduce the texture effects of complex objects. Owing to limitations in complex scenes, custom template matching must be established and applied to many fields of computer vision. Such systems depend only weakly on high-quality, small labeled sample databases, and machine learning systems that rely on deep feature connections perform texture-effect inference relatively poorly. The neural-network-based style transfer algorithm collects and preserves pattern data and extracts and modernizes pattern features; through this algorithm model, the texture and color of patterns can be rendered and displayed digitally more easily. In this paper, based on the texture-effect reasoning of custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and a user-defined template is computed from the template's multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the external region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm accurately captures the salient target, ablates more noise, and improves the visualization results. The proposed deep convolutional neural network optimization algorithm shows good speed, data accuracy, and robustness. It can adapt to the computation of more task scenes, display the redundant vision-related information of image conversion, and further improve the computational efficiency and accuracy of convolutional networks, which is of high significance for research on image information conversion.

Deep Learning-Based Lumen and Vessel Segmentation of Intravascular Ultrasound Images in Coronary Artery Disease

  • Gyu-Jun Jeong;Gaeun Lee;June-Goo Lee;Soo-Jin Kang
    • Korean Circulation Journal / v.54 no.1 / pp.30-39 / 2024
  • Background and Objectives: Intravascular ultrasound (IVUS) evaluation of coronary artery morphology is based on the lumen and vessel segmentation. This study aimed to develop an automatic segmentation algorithm and validate its performance for measuring quantitative IVUS parameters. Methods: A total of 1,063 patients were randomly assigned, with a ratio of 4:1, to the training and test sets. An independent data set of 111 IVUS pullbacks was obtained to assess the vessel-level performance. The lumen and external elastic membrane (EEM) boundaries were labeled manually in every IVUS frame with a 0.2-mm interval. The Efficient-UNet was utilized for the automatic segmentation of IVUS images. Results: At the frame-level, Efficient-UNet showed a high dice similarity coefficient (DSC, 0.93±0.05) and Jaccard index (JI, 0.87±0.08) for lumen segmentation, and demonstrated a high DSC (0.97±0.03) and JI (0.94±0.04) for EEM segmentation. At the vessel-level, there were close correlations between model-derived vs. expert-measured IVUS parameters: minimal lumen image area (r=0.92), EEM area (r=0.88), lumen volume (r=0.99), and plaque volume (r=0.95). The agreement between model-derived vs. expert-measured minimal lumen area was similarly excellent compared to the experts' agreement. The model-based lumen and EEM segmentation for a 20-mm lesion segment required 13.2 seconds, whereas manual segmentation with a 0.2-mm interval by an expert took 187.5 minutes on average. Conclusions: The deep learning models can accurately and quickly delineate vascular geometry. The artificial intelligence-based methodology may support clinicians' decision-making by real-time application in the catheterization laboratory.

An Archaeochemical Microstructural Study on Koryo Inlaid Celadon

  • Ham, Seung-Wook;Shim, Il-wun;Lee, Young-Eun;Kang, Ji-Yoon;Koh, Kyong-Shin
    • Bulletin of the Korean Chemical Society / v.23 no.11 / pp.1531-1540 / 2002
  • With the invention of the inlaying technique for celadon in the latter half of the 12th century, the Koryo potters reached a new height of artistic and scientific achievement in ceramics chemical technology. Inlaid celadon shards, collected in 1991 during the surface investigation of Kangjin kilns found on the southwestern shore of South Korea, were imbedded in epoxy resin and polished for cross-section examination. Backscattered electron images were taken with an electron microprobe equipped with an energy dispersive spectrometer. The spectrometer was also used to determine the composition of micro-areas. Porcelain stone, a weathered rock of quartz, mica, and feldspar composition, was found to be the raw material for the body and an important component in the glaze and white inlay. The close similarity between glaze and black inlay in the microstructure suggests that the glaze material was modified by adding clay with high iron content, such as biotite, for use as black inlay. The deep soft translucent quality of celadon glaze is brought about by its microstructure of bubbles, remnant and devitrified minerals, and the schlieren effect.

Continual Learning using Data Similarity (데이터 유사도를 이용한 지속적 학습방법)

  • Park, Seong-Hyeon;Kang, Seok-Hoon
    • Journal of IKEEE / v.24 no.2 / pp.514-522 / 2020
  • In a continual learning environment, we identify that Catastrophic Forgetting, in which the information of previously learned data is forgotten, occurs easily between data from different domains. To control this phenomenon, we introduce a way to measure the relationship between previously learned data and newly learned data through the distribution of the neural network's output, and show how to use these measurements to mitigate Catastrophic Forgetting. MNIST and EMNIST data were used for evaluation, and experiments showed an average 22.37% improvement in accuracy on previous data.
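
One common way to compare a network's output distributions on old vs. new data, as this abstract describes, is a divergence measure such as KL divergence. A hedged sketch of the idea (an illustration only, not the paper's exact similarity measure):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions (e.g. softmax outputs)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical softmax outputs of a classifier on samples from different domains.
old_task = [0.7, 0.2, 0.1]  # output on a previously learned sample
similar  = [0.6, 0.3, 0.1]  # output on a new sample from a close domain
distant  = [0.1, 0.1, 0.8]  # output on a new sample from a far domain
print(kl_divergence(old_task, similar) < kl_divergence(old_task, distant))  # True
```

A small divergence would suggest the new data is close to what the network already knows, so less forgetting-mitigation effort is needed.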

Relationship Maturity Model with SKT Case: Dancing with Knowledge Partners (관계 성숙 모형과 SKT사례: 지식 파트너와 함께 춤을)

  • Kwon, Tae H.;Lee, Kang Up;Choi, Jaewoong
    • Knowledge Management Research / v.8 no.1 / pp.15-28 / 2007
  • In the age where the Internet changes everything, even the earth has become flat. The borders between nations, locations, times, and industries are no longer meaningful, and no single company can handle the whole process well by itself. Therefore, various types of 'value network' and 'relation web' emerge for moving first and learning fast. Both the proposed relationship maturity model (RMM) and the partnership management initiatives at SKT demonstrate that the concept is important, and that the final goal can be reached only through a series of critical outcomes at each phase. In particular, recognizing various online/offline channels, deep trust, and rich communication as core infrastructures is an important finding for successful relationship management. The related literature also suggests the following key factors to be influential in more than two phases: professionalism including expertise, similarity, channel infrastructure, trustfulness/trustworthiness, and absorptive capacity. Based on these findings, future efforts should focus on the research and development of related measurement and management tools. It is hoped that more companies will dance with their knowledge partners through these efforts.

A Novel Text to Image Conversion Method Using Word2Vec and Generative Adversarial Networks

  • LIU, XINRUI;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.401-403 / 2019
  • In this paper, we propose a generative adversarial network (GAN) based text-to-image generation method. In many natural language processing tasks, word representations are determined by their term frequency-inverse document frequency (TF-IDF) scores. Word2Vec is a type of neural network model that, given an unlabeled corpus, produces vectors expressing the semantics of the words in that corpus; an image is then generated by GAN training conditioned on the obtained vector. Thanks to this understanding of the words, we can generate sharper and more realistic images. Our GAN structure is based on deep convolutional neural networks and pixel recurrent neural networks. Comparing the generated images with real images, we obtain about 88% similarity on the Oxford-102 flowers dataset.
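
Word2Vec represents each word as a dense vector whose semantic closeness is conventionally compared with cosine similarity, which is what lets the GAN's conditioning vector carry meaning. A minimal sketch with made-up 3-dimensional vectors (real Word2Vec embeddings are typically 100-300 dimensional, and these toy values are purely illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: semantically close words point in similar directions.
rose  = [0.8, 0.1, 0.3]
tulip = [0.7, 0.2, 0.3]
car   = [0.1, 0.9, 0.2]
print(cosine_similarity(rose, tulip) > cosine_similarity(rose, car))  # True
```

Conditioning the generator on such vectors means that prompts with similar meanings yield similar images, which is the intuition behind the text-to-image pipeline sketched in the abstract.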