• Title/Summary/Keyword: segmented region

Search Results: 374

Automatic Extraction of Ascending Aorta and Ostium in Cardiac CT Angiography Images (심장 CT 혈관 조영 영상에서 대동맥 및 심문 자동 검출)

  • Kim, Hye-Ryun;Kang, Mi-Sun;Kim, Myoung-Hee
    • Journal of the Korea Computer Graphics Society / v.23 no.1 / pp.49-55 / 2017
  • Computed tomographic angiography (CTA) is widely used in the diagnosis and treatment of coronary artery disease because it not only shows the whole anatomical structure of the cardiovascular system three-dimensionally but also provides information on lesions and plaque type. However, because of the large image size, manually extracting the coronary arteries is limited, and related research aims to extract them automatically and accurately. As the coronary arteries originate from the ascending aorta, the ascending aorta and ostia must be detected to extract the coronary tree accurately. In this paper, we propose an automatic segmentation method for the ostium, the starting structure of the coronary artery, in CTA. First, the region of the ascending aorta is detected initially with a Hough circle transform, based on the relative position and size of the ascending aorta. Second, a volume of interest is defined around this initial region to reduce the search range. Third, the ascending aorta is refined with a two-dimensional geodesic active contour. Finally, the two ostia are detected within the refined ascending aorta region. For evaluation, we measured the Euclidean distance between our results and ground truth annotated manually by medical experts on 20 CTA images; the experiments showed that the ostia were detected accurately. (A sketch of the initial Hough-circle step follows this entry.)
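
The initial localization above relies on a Hough circle transform applied to an axial slice. The following is a minimal sketch of that step, assuming OpenCV and NumPy, a contrast-enhanced axial slice in Hounsfield units, and an illustrative radius range, HU window, and positional prior; these parameters are assumptions, not the values used in the paper.

```python
import cv2
import numpy as np

def detect_ascending_aorta(axial_slice_hu, min_r=10, max_r=25):
    """Detect a candidate ascending-aorta circle in one axial CTA slice.

    axial_slice_hu : 2-D NumPy array of Hounsfield units.
    min_r, max_r   : expected aortic radius range in pixels (assumed values).
    """
    # Window the contrast-enhanced blood pool (~0-600 HU) and scale to 8-bit.
    windowed = np.clip(axial_slice_hu, 0, 600) / 600.0
    img8 = (windowed * 255).astype(np.uint8)
    img8 = cv2.medianBlur(img8, 5)

    circles = cv2.HoughCircles(
        img8, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
        param1=120, param2=30, minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None

    # Keep the circle closest to a rough positional prior for the aorta
    # (centre-right of the slice); this prior is an assumption.
    h, w = img8.shape
    expected = np.array([w * 0.45, h * 0.45])
    best = min(circles[0], key=lambda c: np.linalg.norm(c[:2] - expected))
    cx, cy, r = best
    return float(cx), float(cy), float(r)
```

The selected circle would then seed the volume of interest and the geodesic active contour refinement described in the abstract.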

Automatic Segmentation of Trabecular Bone Based on Sphere Fitting for Micro-CT Bone Analysis (마이크로-CT 뼈 영상 분석을 위한 구 정합 기반 해면뼈의 자동 분할)

  • Kang, Sun Kyung;Kim, Young Un;Jung, Sung Tae
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.329-334 / 2014
  • In this study, a new method that automatically segments trabecular bone for morphological analysis of micro-computed tomography images was proposed. In the proposed method, the bone region was extracted with a threshold value and the outer boundary of the bone was detected. For each voxel of the bone region, the sphere of maximum size centered at that voxel was obtained by sphere fitting. If this sphere includes the outer boundary of the bone, the voxels contained in the sphere are classified as cortical bone; otherwise, they are classified as trabecular bone. The proposed method was applied to images of the distal femurs of 15 mice and compared against manual segmentation. Four morphological parameters (BV/TV, Tb.Th, Tb.Sp, and Tb.N) were measured for the segmented trabecular bone, and the results were compared by regression analysis and the Bland-Altman method; all four parameters fell within the credible range. In addition, the sphere-fitting method is not only simple to implement but also segments trabecular bone precisely because it uses the full three-dimensional information. (A sketch of the sphere-fitting rule follows this entry.)
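
One simplified way to realize the sphere-fitting rule is through a Euclidean distance transform: the largest sphere centered at a bone voxel that stays inside the bone reaches exactly to the nearest non-bone voxel, so the classification can ask whether that nearest non-bone voxel lies outside the bone envelope or in an internal cavity. The sketch below makes that simplification (classifying only the center voxel rather than every voxel inside the sphere) and assumes SciPy and a binary bone mask; it is not the paper's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def classify_cortical_trabecular(bone_mask):
    """bone_mask : 3-D boolean array, True where a voxel is bone.

    Returns a label volume: 0 background, 1 trabecular, 2 cortical.
    """
    # Fill internal cavities (marrow) to get the solid bone envelope;
    # voxels outside this envelope belong to the exterior of the bone.
    envelope = ndimage.binary_fill_holes(bone_mask)
    exterior = ~envelope

    # For every voxel, find the nearest non-bone voxel; its distance is the
    # radius of the largest sphere centred there that stays inside the bone.
    idx = ndimage.distance_transform_edt(bone_mask,
                                         return_distances=False,
                                         return_indices=True)
    nearest_nonbone_is_exterior = exterior[tuple(idx)]

    labels = np.zeros(bone_mask.shape, dtype=np.uint8)
    # Maximal sphere limited by the outer boundary -> cortical bone.
    labels[bone_mask & nearest_nonbone_is_exterior] = 2
    # Maximal sphere limited by an internal (marrow) cavity -> trabecular.
    labels[bone_mask & ~nearest_nonbone_is_exterior] = 1
    return labels
```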

Illumination Estimation Based on Nonnegative Matrix Factorization with Dominant Chromaticity Analysis (주색도 분석을 적용한 비음수 행렬 분해 기반의 광원 추정)

  • Lee, Ji-Heon;Kim, Dae-Chul;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.8 / pp.89-96 / 2015
  • The human visual system adapts chromatically, determining the color of an object regardless of the illumination, whereas a digital camera records illumination and reflectance together, so the color appearance of a scene varies under different illumination. NMFsc (nonnegative matrix factorization with sparseness constraints) was recently introduced to estimate the original object color: a low sparseness constraint is used to estimate the illumination and a high sparseness constraint to estimate the reflectance. However, NMFsc produces illumination estimation errors for images with a large uniform area, which is treated as the dominant chromaticity. To overcome this defect, illumination estimation via nonnegative matrix factorization after removing the dominant chromaticity is proposed. First, the image is converted to a chromaticity color space and analyzed with a chromaticity histogram, which segments the original image into regions of similar chromaticity. The segmented region with the lowest standard deviation is taken as the dominant chromaticity region and is removed from the original image. Illumination estimation using nonnegative matrix factorization is then performed on the image without the dominant chromaticity. The proposed method was evaluated by the average angular error on a real-world dataset: it achieves an average angular error of 5.5, better than the 5.7 of the previous method. (A sketch of the pipeline follows this entry.)
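
A rough sketch of the pipeline above, assuming scikit-learn and NumPy. Two simplifications are made and labeled in the comments: the dominant-chromaticity region is taken as the most populated histogram bin rather than the segmented region with the lowest standard deviation, and plain NMF with a largest-activation heuristic stands in for the sparseness-constrained NMFsc. Only the overall structure (chromaticity histogram, dominant-region removal, factorization of the remaining pixels) follows the abstract.

```python
import numpy as np
from sklearn.decomposition import NMF

def estimate_illuminant(rgb, n_bins=32):
    """rgb : H x W x 3 float array in [0, 1].

    Returns a unit 3-vector proportional to the estimated illuminant colour.
    """
    pixels = rgb.reshape(-1, 3).astype(np.float64) + 1e-6
    # r-g chromaticity of every pixel.
    chrom = pixels[:, :2] / pixels.sum(axis=1, keepdims=True)

    # Coarse 2-D chromaticity histogram: assign each pixel to a bin.
    bins = np.clip((chrom * n_bins).astype(int), 0, n_bins - 1)
    bin_id = bins[:, 0] * n_bins + bins[:, 1]

    # Treat the most populated bin as the dominant-chromaticity region
    # (simplification: the paper picks the segmented region with the
    # lowest standard deviation) and drop those pixels.
    dominant = np.bincount(bin_id).argmax()
    keep = bin_id != dominant

    # Factorise the remaining pixels into a few basis colours; the basis
    # with the largest total activation is taken as the illuminant
    # estimate (a heuristic stand-in for the NMFsc formulation).
    model = NMF(n_components=2, init='nndsvda', max_iter=500, random_state=0)
    W = model.fit_transform(pixels[keep])   # activations, N x 2
    H = model.components_                   # basis colours, 2 x 3
    illum = H[W.sum(axis=0).argmax()]
    return illum / np.linalg.norm(illum)
```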

Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.32 no.11 / pp.1121-1135 / 2005
  • This paper proposes a moving-object extraction method using a contrast map and salient points. To build the contrast map, we generate three feature maps (a luminance map, a color map, and a directional map) and extract salient points from the image. Using these features, the location of an Attention Window (AW) can be decided easily. The purpose of the AW is to remove useless regions of the image, such as the background, and to reduce the amount of image processing. To obtain an accurate location and a flexible size for the AW, we use motion features instead of pre-assumptions or heuristic parameters. After determining the AW, we compute the edge differences from the AW toward its inner area, from which horizontal and vertical candidate regions are extracted. The intersection of the two candidate regions, obtained by a logical AND operation, is then refined by morphological operations. The proposed algorithm has been applied to many video sequences with a static background, such as surveillance video, and the moving objects were segmented well with accurate boundaries. (A sketch of the candidate-intersection step follows this entry.)
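
The final combination step (logical AND of the horizontal and vertical candidates followed by morphological refinement) can be sketched as below, assuming OpenCV binary masks; the particular closing/opening sequence and kernel size are illustrative choices, since the abstract does not specify them.

```python
import cv2
import numpy as np

def refine_object_mask(horizontal_cand, vertical_cand, kernel_size=5):
    """Combine horizontal/vertical candidate masks into one object mask.

    horizontal_cand, vertical_cand : uint8 binary masks (0 or 255) of the
    same size, e.g. produced inside the Attention Window.
    """
    # Logical AND keeps only pixels supported by both candidate regions.
    intersection = cv2.bitwise_and(horizontal_cand, vertical_cand)

    # Morphological closing then opening: fill small gaps inside the
    # object and remove isolated noise, giving a cleaner boundary.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    closed = cv2.morphologyEx(intersection, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
    return opened
```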

A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.6 / pp.85-92 / 2017
  • Recently, in the field of video surveillance, deep learning based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one of the fields of this intelligent image analysis technology, first detects the object and then detects the body keypoints needed to recognize the behavior of the detected object. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented from the background using the color and depth information generated by the two cameras. The detected object region is rescaled using the RGB-D information and fed into Convolutional Pose Machines (CPM) for single-person pose estimation. CPM generates belief maps for 14 body parts per person, and body keypoints are detected from these belief maps. The method provides an accurate object region for keypoint detection and can be extended from single-person to multi-person keypoint localization by integrating the individual results. In the future, the detected keypoints can be used to build a human pose estimation model and contribute to the field of human activity recognition. (A sketch of the depth-based foreground segmentation follows this entry.)
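
A minimal sketch of the RGB-D foreground segmentation and rescaling step, assuming OpenCV, a millimeter-scaled depth map aligned with the color image, and an assumed working depth range and CPM-style input size (368 px); these values are illustrative, not the paper's.

```python
import cv2
import numpy as np

def segment_and_crop(color, depth, near_mm=500, far_mm=3000, out_size=368):
    """Segment a foreground object from an RGB-D frame and crop it.

    color : H x W x 3 uint8 image; depth : H x W uint16 depth map in mm.
    near_mm/far_mm bound the expected distance of the person (assumed);
    out_size is a typical pose-network input resolution (assumed).
    """
    # Foreground = pixels whose depth falls inside the expected range.
    fg = ((depth > near_mm) & (depth < far_mm)).astype(np.uint8) * 255
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Take the largest connected component as the person.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    if n < 2:
        return None
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[biggest, :4]

    # Crop and rescale the object region to the pose-network input size.
    crop = color[y:y + h, x:x + w]
    return cv2.resize(crop, (out_size, out_size))
```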

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE / v.9 no.1 s.16 / pp.7-18 / 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately while simultaneously building a map of the environment. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level, scale-invariant features, which are used in the map building and localization process. As pre-processing, the fisheye images are calibrated to remove radial distortion, and labeling and convex-hull techniques are used to separate the ceiling region from the wall region. During initial map building, features are computed for the segmented regions and stored in a map database. Features are then computed continuously from sequential input images and matched against the existing map until map building finishes; unmatched features are added to the map. Localization is performed simultaneously with feature matching: when features match the existing map, the robot pose is estimated and the map database is updated at the same time. The proposed method can build a map of a $50m^2$ area in 2 minutes, with a positioning accuracy of ${\pm}13cm$ and an average heading error of ${\pm}3$ degrees. (A sketch of the feature-matching step follows this entry.)
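
The matching of newly extracted scale-invariant features against the map database could look like the sketch below, which assumes OpenCV's SIFT implementation and uses Lowe's ratio test as the match criterion; the paper's exact matching rule and map data structure are not given in the abstract.

```python
import cv2
import numpy as np

def match_to_map(frame_gray, map_descriptors, ratio=0.75):
    """Match SIFT features of an undistorted ceiling image to the map.

    frame_gray      : undistorted 8-bit grayscale ceiling image.
    map_descriptors : N x 128 float32 array of descriptors already in the map.
    Returns (keypoints, descriptors, good_matches); unmatched descriptors
    could be appended to the map, as in the incremental map building above.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    if descriptors is None or len(map_descriptors) < 2:
        return keypoints, descriptors, []

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(descriptors, map_descriptors, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return keypoints, descriptors, good
```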


Hierarchical Non-Rigid Registration by Bodily Tissue-based Segmentation : Application to the Visible Human Cross-sectional Color Images and CT Legs Images (조직 기반 계층적 non-rigid 정합: Visible Human 컬러 단면 영상과 CT 다리 영상에 적용)

  • Kim, Gye-Hyun;Lee, Ho;Kim, Dong-Sung;Kang, Heung-Sik
    • Journal of Biomedical Engineering Research / v.24 no.4 / pp.259-266 / 2003
  • Non-rigid registration between images of different modalities with shape deformation can be used for diagnosis and study in inter-patient registration, longitudinal intra-patient registration, and registration between a patient image and an atlas image. This paper proposes a hierarchical registration method based on bodily tissue segmentation for registering the cross-sectional color images and CT images of the Visible Human leg. The cross-sectional color images and the axial CT images are each segmented into three distinct tissue regions: fat, muscle, and bone. Each region is registered separately and hierarchically. Bounding boxes containing the tissue regions in the two modalities are registered first; the boundaries of the regions are then registered globally within the search space; finally, local boundary segments are registered to obtain a non-rigid alignment of the sampled boundary points. Registration parameters for the unsampled points are interpolated linearly. This hierarchical approach makes the registration efficient, and registering visibly distinct tissue regions gives accurate and robust results both on the region boundaries and inside the regions. (A sketch of the bounding-box initialization follows this entry.)
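
The first (coarse) level of the hierarchy registers bounding boxes of corresponding tissue regions. Below is a minimal sketch of one plausible bounding-box alignment, assuming NumPy and binary tissue masks; the paper's actual transform model is not specified in the abstract.

```python
import numpy as np

def bounding_box_transform(fixed_mask, moving_mask):
    """Estimate translation and per-axis scale aligning two tissue masks
    by their bounding boxes, as an initial (coarse) registration step.

    fixed_mask, moving_mask : boolean 2-D arrays of the same tissue class
    in the two modalities. Returns (scale, translation) such that
    fixed_coord ~= scale * moving_coord + translation.
    """
    def bbox(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.min(), xs.min()]), np.array([ys.max(), xs.max()])

    f_min, f_max = bbox(fixed_mask)
    m_min, m_max = bbox(moving_mask)

    # Per-axis scale matches the bounding-box extents; the translation
    # then aligns the box corners after scaling.
    scale = (f_max - f_min) / np.maximum(m_max - m_min, 1)
    translation = f_min - scale * m_min
    return scale, translation
```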

Optimization of 1.2 kV 4H-SiC MOSFETs with Vertical Variation Doping Structure (Vertical Variation Doping 구조를 도입한 1.2 kV 4H-SiC MOSFET 최적화)

  • Ye-Jin Kim;Seung-Hyun Park;Tae-Hee Lee;Ji-Soo Choi;Se-Rim Park;Geon-Hee Lee;Jong-Min Oh;Weon Ho Shin;Sang-Mo Koo
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.37 no.3 / pp.332-336 / 2024
  • The wide-bandgap material silicon carbide (SiC) is gaining attention as a next-generation power semiconductor material; in particular, SiC-based MOSFETs are being developed as representative power devices to raise the breakdown voltage (BV) beyond that of conventional planar structures. However, as super-junction (SJ) MOSFET devices shrink and the pillars become deeper, it becomes challenging to form a uniform doping concentration within the pillars, so structures in which the pillar doping is segmented into different concentrations are being researched. Using Silvaco TCAD simulation, an SJ MOSFET with a vertical variation doping (VVD) profile, i.e. three different doping concentrations in the pillar, was studied. Simulations sweeping the pillar width and the N-epi doping concentration show that as the pillar width increases, the depletion region widens, increasing both the specific on-resistance (Ron,sp) and the breakdown voltage (BV); as the N-epi doping concentration increases, the number of carriers increases and the depletion region narrows, decreasing both Ron,sp and BV. The optimized SJ VVD MOSFET exhibits a very high figure of merit (BFOM) of 13,400 kW/cm2, indicating excellent performance and suggesting its potential as a next-generation, high-performance power device for practical applications. (The figure of merit used here is recalled after this entry.)
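
For reference, the figure of merit quoted above is commonly computed from the breakdown voltage and the specific on-resistance; assuming the authors use the standard Baliga figure of merit, it reads:

```latex
% Baliga figure of merit (standard definition, assumed here):
%   BV        -- breakdown voltage [V]
%   R_{on,sp} -- specific on-resistance [Ohm * cm^2]
\[
  \mathrm{BFOM} \;=\; \frac{BV^{2}}{R_{\mathrm{on,sp}}}
  \qquad
  \left[\,\frac{\mathrm{V}^{2}}{\Omega\cdot\mathrm{cm}^{2}}
        = \frac{\mathrm{W}}{\mathrm{cm}^{2}}\right]
\]
```

A value reported in kW/cm2 is thus dimensionally consistent with dividing the square of the breakdown voltage by the specific on-resistance.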

Quantitative Comparisons in $^{18}F$-FDG PET Images: PET/MR VS PET/CT ($^{18}F$-FDG PET 영상의 정량적 비교: PET/MR VS PET/CT)

  • Lee, Moo Seok;Im, Young Hyun;Kim, Jae Hwan;Choe, Gyu O
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.68-80 / 2012
  • Purpose: Combined PET/MR scanners have recently been developed in which the MR data can be used both for anatometabolic image formation and for attenuation correction of the PET data. For quantitative PET, correction of tissue photon attenuation is mandatory: in PET/CT the attenuation map is obtained from the CT scan, whereas in PET/MR it is calculated from the MR image. The purpose of this study was to assess the quantitative differences between MR-based and CT-based attenuation-corrected PET images. Materials and Methods: Using a uniform cylinder phantom of distilled water into which 199.8 MBq of $^{18}F$-FDG was injected, we compared MR-based and CT-based attenuation-corrected PET images from the PET/CT, using time-of-flight (TOF) and non-TOF iterative reconstruction. Images were acquired for 60 minutes at 15-minute intervals. Regions of interest were drawn over 70% of the image from the center, and the scanners' analysis software calculated both maximum and mean SUV. These data were analyzed by a one-way ANOVA test and Bland-Altman analysis. The MR images were segmented into three classes (not including bone), and each class was assigned the expected average attenuation of its region. For clinical diagnostic purposes, PET/MR and PET/CT images were acquired in 23 patients (Ingenuity TF PET/MR, Gemini TF64); the PET/CT scans were performed approximately 33.8 minutes after the beginning of the PET/MR scans. Nine regions of interest (lung, liver, spleen, bone) were drawn, the scanners' analysis software calculated maximum and mean SUV, and the SUVs from the nine regions in MR-based and CT-based PET images were compared by a paired t-test and Bland-Altman analysis. Results: In the phantom study, MR-based attenuation-corrected PET images showed slightly lower SUVs (-0.36 to -0.15) than CT-based attenuation-corrected PET images (p<0.05). In the clinical study, MR-based attenuation-corrected PET images generally showed slightly lower SUVs than CT-based images (except the left middle lung and transverse lumbar) (p<0.05), and percent differences were -8.01.79% lower for the PET/MR images than for the PET/CT images (excepting lung). Based on the Bland-Altman method, the agreement between the two methods was considered good. Conclusion: PET/MR shows generally lower SUVs than PET/CT, but there was no difference in the clinical interpretations made from the quantitative comparisons with either type of attenuation map. (A sketch of the Bland-Altman computation follows this entry.)
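
The agreement analysis used above can be reproduced with the standard Bland-Altman statistics (mean difference and 95% limits of agreement). A minimal sketch, assuming NumPy and paired SUV measurements per region of interest:

```python
import numpy as np

def bland_altman(suv_mr, suv_ct):
    """Bland-Altman agreement statistics between two SUV measurement sets.

    suv_mr, suv_ct : 1-D arrays of SUVs from the same regions of interest,
    measured on MR-based and CT-based attenuation-corrected PET images.
    Returns (mean difference, lower limit of agreement, upper limit).
    """
    suv_mr = np.asarray(suv_mr, dtype=float)
    suv_ct = np.asarray(suv_ct, dtype=float)

    diff = suv_mr - suv_ct              # bias of MR-based vs CT-based SUV
    bias = diff.mean()
    sd = diff.std(ddof=1)
    # 95% limits of agreement: bias +/- 1.96 standard deviations.
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```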


Development of the Multi-Parametric Mapping Software Based on Functional Maps to Determine the Clinical Target Volumes (임상표적체적 결정을 위한 기능 영상 기반 생물학적 인자 맵핑 소프트웨어 개발)

  • Park, Ji-Yeon;Jung, Won-Gyun;Lee, Jeong-Woo;Lee, Kyoung-Nam;Ahn, Kook-Jin;Hong, Se-Mie;Juh, Ra-Hyeong;Choe, Bo-Young;Suh, Tae-Suk
    • Progress in Medical Physics / v.21 no.2 / pp.153-164 / 2010
  • To determine clinical target volumes that account for tumor vascularity and cellularity, software was developed for mapping the analyzed biological target volumes onto anatomical images using regional cerebral blood volume (rCBV) maps and apparent diffusion coefficient (ADC) maps. The program provides integrated registration using mutual information, affine transforms, and non-rigid registration. Registration accuracy is evaluated by the overlap ratio of segmented bone regions and the average contour distance between the reference and registered images. The software was tested on multimodal images of a patient with residual high-grade glioma: registration accuracy of about 74% and an average contour distance of 2.3 mm were obtained with the bone-segmentation and contour-extraction evaluation, and the accuracy could be improved by as much as 4% with the manual adjustment functions. Advanced MR images are analyzed with color maps for rCBV and with quantitative, region-of-interest (ROI) based calculation for ADC. The parameters measured on the same voxels are then plotted in a plane to form multi-functional parametric maps whose x and y axes represent rCBV and ADC values. According to the distribution of these functional parameters, tumor regions showing higher vascularity and cellularity are categorized by criteria corresponding to malignant gliomas, and the resulting volumes, which reflect the pathological and physiological characteristics of the tumor, are marked on the anatomical images. By using multi-functional images, errors arising from a single image type can be reduced, and local regions with a higher probability of containing tumor cells can be determined for radiation treatment planning. The developed software expresses biological tumor characteristics through image registration and multi-functional parametric maps and can be used to delineate clinical target volumes from advanced MR images together with anatomical images. (A sketch of the registration-accuracy evaluation follows this entry.)
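
The registration-accuracy evaluation described above (overlap ratio of segmented bone regions plus average contour distance) can be sketched as below, assuming NumPy/SciPy and binary bone masks; the exact overlap definition and the one-directional contour distance are assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy import ndimage

def registration_accuracy(ref_bone, reg_bone):
    """Evaluate a registration by (1) the overlap ratio of segmented bone
    masks and (2) the average distance between their contours.

    ref_bone, reg_bone : boolean 2-D bone masks from the reference and the
    registered image. Returns (overlap_ratio, mean_contour_distance_px).
    """
    # Overlap ratio: shared bone pixels over the union of both masks
    # (one plausible definition; the paper does not state its formula).
    union = np.logical_or(ref_bone, reg_bone).sum()
    overlap = np.logical_and(ref_bone, reg_bone).sum() / union

    # Contour = mask minus its erosion; distance from each registered
    # contour pixel to the nearest reference contour pixel, averaged.
    def contour(mask):
        return mask & ~ndimage.binary_erosion(mask)

    ref_c, reg_c = contour(ref_bone), contour(reg_bone)
    dist_to_ref = ndimage.distance_transform_edt(~ref_c)
    mean_dist = dist_to_ref[reg_c].mean()
    return overlap, mean_dist
```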