• Title/Summary/Keyword: 다중 모달리티 (multimodality)

24 search results

Volume Image Processing for Surface Based MRI-PET Registration (표면 정보 기반 MRI-PET 영상 정합을 위한 볼륨 영상 처리)

  • Jung, Myung-Jin; Choi, Yoo-Joo; Kim, Min-Jeong; Kim, Myoung-Hee
    • Annual Conference of KIPS / 2002.11a / pp.475-478 / 2002
  • Image registration is the process of aligning images so that corresponding features can be related; it is useful in that it combines different kinds of information to produce new information that is complementary and composite. In this paper, we study image processing methods for registering MRI and PET brain images based on surface information. In particular, when sampling the feature-point set for registration, we apply a sampling technique that uses surface curvature information, and we perform the geometric transformation by generating a bounding box based on the volume size of the actual object of interest, thereby obtaining more effective image processing results for surface-based multimodality image registration.

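The curvature-weighted feature sampling and volume-based bounding box described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the curvature values here are a made-up stand-in for real surface-curvature estimates, and the point set is a toy unit sphere.

```python
import numpy as np

def curvature_weighted_sample(points, curvatures, n_samples, rng=None):
    """Sample feature points with probability proportional to curvature,
    favoring high-curvature regions that constrain registration best."""
    if rng is None:
        rng = np.random.default_rng(0)
    weights = np.abs(curvatures)
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples, replace=False, p=probs)
    return points[idx]

def object_bounding_box(points):
    """Axis-aligned bounding box of the object of interest, used to
    normalize position/scale before the geometric transform."""
    return points.min(axis=0), points.max(axis=0)

# Toy demo: points on a unit sphere, with a stand-in "curvature" estimate.
rng = np.random.default_rng(42)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
curv = 1.0 + np.abs(pts[:, 2])          # hypothetical curvature values
sample = curvature_weighted_sample(pts, curv, 100, rng)
lo, hi = object_bounding_box(pts)
```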

Non-linear brain image registration based on moment and free-form deformation (모멘트 및 free-form 변형기반 비선형 뇌영상 정합)

  • Kim, Min-Jeong; Choi, Yoo-Joo; Kim, Myoung-Hee
    • Proceedings of the Korea Multimedia Society Conference / 2004.05a / pp.271-274 / 2004
  • Among medical image analysis methods based on image registration, linear multimodality registration of images from the same patient is widely used. In practice, however, it is often difficult to acquire several kinds of images of a patient, or anatomical image information is lost. This paper proposes a method for registering a patient's functional brain image to the anatomical brain image of a normal subject with standard shape. First, moment-information matching and an initial linear transformation between the two images are performed; then non-linear registration using a free-form deformation technique based on 3D Bézier functions minimizes the shape difference between the registered images. The proposed method can be applied not only to anatomical analysis of patient functional images but can also be extended to image-guided intervention through pre- and intra-operative image registration.

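The Bézier-based free-form deformation the abstract refers to can be sketched from the standard trivariate Bernstein-basis formulation. The 3×3×3 control lattice below is a hypothetical example, not the paper's actual parameters; with control points left at their regular grid positions, the deformation is the identity (linear precision of the Bernstein basis).

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd_bezier(point, lattice):
    """Deform a point in [0,1]^3 by a trivariate Bezier free-form deformation.
    `lattice` has shape (l+1, m+1, n+1, 3): displaced control points."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    u, v, w = point
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                out += (bernstein(l, i, u) * bernstein(m, j, v)
                        * bernstein(n, k, w) * lattice[i, j, k])
    return out

# Identity lattice: control points at their regular grid positions.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"),
                axis=-1)
p = np.array([0.25, 0.5, 0.75])
deformed = ffd_bezier(p, grid)
```

Moving individual control points in `grid` would then bend the surrounding space smoothly, which is how the non-linear registration step reduces shape differences.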

'EVE-Sound™' Toolkit for Interactive Sound in Virtual Environment (가상환경의 인터랙티브 사운드를 위한 'EVE-Sound™' 툴킷)

  • Nam, Yang-Hee; Sung, Suk-Jeong
    • The KIPS Transactions:PartB / v.14B no.4 / pp.273-280 / 2007
  • This paper presents a new 3D sound toolkit called EVE-Sound™ that consists of a pre-processing tool for environment simplification preserving sound effects and a 3D sound API for real-time rendering. It is designed to allow users to interact with complex 3D virtual environments through audio-visual modalities. The EVE-Sound™ toolkit serves two different types of users: high-level programmers who need an easy-to-use sound API for developing realistic 3D audio-visually rendered applications, and researchers in the 3D sound field who need to experiment with or develop new algorithms without rewriting all the required code from scratch. An interactive virtual environment application was created with a sound engine constructed using the EVE-Sound™ toolkit; it demonstrates the real-time audio-visual rendering performance and the applicability of EVE-Sound™ for building interactive applications with complex 3D environments.

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha; Hee Sang Kim; Seong Uk Kang; DooHee Lee; Woo Jin Kim; Ki Won Moon; Hyun-Soo Choi; Jeong Hyun Kim; Yoon Kim; So Hyeon Bak; Sang Won Park
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.187-201 / 2024
  • Osteoporosis is a major health issue globally, often remaining undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models were constructed using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into the normal, osteopenia, and osteoporosis groups based on their T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
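The three-class grouping by T-score mentioned in the abstract follows the standard WHO diagnostic cut-offs; the labeling step can be sketched as below. This reproduces only the grouping rule, not the study's DL models.

```python
def osteoporosis_grade(t_score: float) -> str:
    """Map a DXA T-score to the WHO grading used for the three classes:
    normal (T >= -1.0), osteopenia (-2.5 < T < -1.0), osteoporosis (T <= -2.5)."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"
```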