• Title/Summary/Keyword: Subject Size (피사체 크기)


Adaptive Image Rescaling for Weakly Contrast-Enhanced Lesions in Dedicated Breast CT: A Phantom Study (약하게 조영증강된 병변의 유방 전용 CT 영상의 대조도 개선을 위한 적응적 영상 재조정 방법: 팬텀 연구)

  • Bitbyeol Kim;Ho Kyung Kim;Jinsung Kim;Yongkan Ki;Ji Hyeon Joo;Hosang Jeon;Dahl Park;Wontaek Kim;Jiho Nam;Dong Hyeon Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.82 no.6
    • /
    • pp.1477-1492
    • /
    • 2021
  • Purpose: Dedicated breast CT is an emerging volumetric X-ray imaging modality for diagnosis that does not require any painful breast compression. To improve the detection rate of weakly enhanced lesions, an adaptive image rescaling (AIR) technique was proposed. Materials and Methods: Two disks, one containing five identical holes and one containing five holes of different diameters, were scanned at 60/100 kVp to obtain single-energy CT (SECT), dual-energy CT (DECT), and AIR images. A piece of pork was also scanned as a subclinical trial. Image quality was evaluated using image contrast and contrast-to-noise ratio (CNR), and differences in imaging performance were confirmed using Student's t-test. Results: The total mean image contrast of AIR (0.70) reached 74.5% of that of DECT (0.94) and was higher than that of SECT (0.22) by 318.2%. The total mean CNR of AIR (5.08) was 35.5% of that of SECT (14.30) and was higher than that of DECT (2.28) by 222.8%. A similar trend was observed in the subclinical study. Conclusion: The results demonstrated the superior image contrast of AIR over SECT, and its higher overall image quality compared to DECT at half the exposure. Therefore, AIR appears to have the potential to improve the detectability of lesions with dedicated breast CT.
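The contrast and CNR figures of merit used in this abstract can be sketched with standard definitions; the exact formulas are not given in the abstract, so the conventions below (contrast relative to background mean, noise taken as background standard deviation) are an assumption:

```python
import numpy as np

def contrast_and_cnr(roi_lesion, roi_background):
    """Image contrast and contrast-to-noise ratio (CNR) between a
    lesion ROI and a background ROI, using a common convention:
      contrast = |mu_lesion - mu_background| / mu_background
      CNR      = |mu_lesion - mu_background| / sigma_background
    """
    mu_l = float(np.mean(roi_lesion))
    mu_b = float(np.mean(roi_background))
    sigma_b = float(np.std(roi_background))
    contrast = abs(mu_l - mu_b) / mu_b
    cnr = abs(mu_l - mu_b) / sigma_b
    return contrast, cnr

# Synthetic phantom-like example: lesion ~20% brighter than background
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(32, 32))   # background ROI
lesion = rng.normal(120.0, 5.0, size=(8, 8))         # weakly enhanced lesion ROI
c, cnr = contrast_and_cnr(lesion, background)
```

With these definitions, improving contrast (AIR vs. SECT) and controlling noise (AIR vs. DECT) pull the two metrics in the directions the abstract reports.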

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors from histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor given to the saturation values. Our algorithm achieves a successful color-recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time for generating a feature vector is 0.509 ms for a 150 × 113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than when the algorithm is executed on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
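The hue-saturation histogram feature and similarity comparison described above can be sketched as follows. The bin counts, the form of the saturation weighting, and the histogram-intersection similarity are illustrative assumptions, not the paper's exact GPU implementation:

```python
import numpy as np

def hs_histogram(hue, sat, bins=(16, 8), sat_weight=2.0):
    """Build a normalized hue-saturation histogram feature vector.

    `hue` in [0, 360), `sat` in [0, 1]. Each pixel's vote is weighted
    by its saturation raised to `sat_weight` (a stand-in for the
    paper's saturation weighting, whose exact form is not given), so
    desaturated pixels contribute less to the chromatic signature.
    """
    h_idx = np.clip((hue / 360.0 * bins[0]).astype(int), 0, bins[0] - 1)
    s_idx = np.clip((sat * bins[1]).astype(int), 0, bins[1] - 1)
    hist = np.zeros(bins)
    np.add.at(hist, (h_idx, s_idx), sat ** sat_weight)  # weighted votes
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()

def similarity(f1, f2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return float(np.minimum(f1, f2).sum())

# Two red-ish samples should match each other, not a blue sample
red1 = hs_histogram(np.full(100, 5.0), np.full(100, 0.9))
red2 = hs_histogram(np.full(100, 10.0), np.full(100, 0.9))
blue = hs_histogram(np.full(100, 240.0), np.full(100, 0.9))
```

On the GPU, the per-bin comparison against every reference histogram is what parallelizes well: each fragment can compare one (input bin, reference color) pair independently.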

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.19-29
    • /
    • 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is starting to be applied to these devices. Conventional iris recognition requires magnified iris images, which has meant using a large zoom-and-focus lens camera to capture them; however, the size and cost constraints of mobile phones make such lenses difficult to use. With the rapid development and multimedia-convergence trend of mobile phones, more and more companies have built mega-pixel cameras into their handsets, which make it possible to capture a magnified iris image without a zoom-and-focus lens. Although facial images are captured at a distance from the user with a mega-pixel camera, the captured iris region still contains sufficient pixel information for iris recognition. In this case, however, the eye region must first be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris-detection method, appropriate for mobile phones, based on corneal specular reflection. To detect the specular reflection robustly, we provide a theoretical basis for estimating its size and brightness from eye, camera, and illuminator models. In addition, we use a successive On/Off scheme of the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region is 65 ms on average on a Samsung SCH-S2300 mobile phone (with a 150 MHz ARM 9 CPU). The correct iris-detection rate is 99% for indoor images and 98.5% for outdoor images.
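The illuminator On/Off scheme for isolating the corneal specular reflection can be sketched as a simple frame difference. This is a minimal sketch: the brightness threshold and synthetic images are assumptions, and the paper's actual method additionally models the reflection's expected size and brightness from the eye, camera, and illuminator geometry:

```python
import numpy as np

def find_specular_reflection(img_on, img_off, threshold=100):
    """Locate the corneal specular reflection as the centroid of pixels
    that brighten strongly when the illuminator is switched on.

    Differencing the On and Off frames suppresses ambient bright spots
    (e.g., sunlight), which appear in both frames; an empty result can
    flag blurring or saturation. Returns (x, y) or None.
    """
    diff = img_on.astype(int) - img_off.astype(int)
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None  # no reflection found: possible blur or sunlight effect
    return float(xs.mean()), float(ys.mean())

# Synthetic 200x200 face image with a 3x3 specular spot centered at (120, 79)
off = np.full((200, 200), 40, dtype=np.uint8)
on = off.copy()
on[78:81, 119:122] = 250
center = find_specular_reflection(on, off)
```

Once the reflection is located, the eye (and hence iris) region can be cropped around that point for the subsequent recognition stage.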

Semi-automated Tractography Analysis using the Allen Mouse Brain Atlas: Comparing DTI Acquisition between NEX and SNR (알렌 마우스 브레인 아틀라스를 이용한 반자동 신경섬유지도 분석 : 여기수와 신호대잡음비간의 DTI 획득 비교)

  • Im, Sang-Jin;Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.2
    • /
    • pp.157-168
    • /
    • 2020
  • Advancements in segmentation methodology have made automatic segmentation of brain structures from structural images accurate and consistent. One method of automatic segmentation, which registers atlas information from template space to subject space, requires a high-quality atlas with accurate boundaries for consistent segmentation. The Allen Mouse Brain Atlas, widely accepted as a high-quality reference of the mouse brain, has been used in various segmentations and can provide accurate coordinates and boundaries of mouse brain structures for tractography. Through probabilistic tractography, diffusion tensor images can be used to map the comprehensive neuronal network of white-matter pathways of the brain. Comparisons between the neural networks of mouse and human brains have shown that various clinical tests on mouse models can simulate the disease pathology of human brains, increasing the importance of clinical mouse-brain studies. However, the difference in brain size between humans and mice makes it difficult to achieve the image quality necessary for analysis, and the conditions required for sufficient image quality, such as a long scan time, make using live samples unrealistic. To secure a mouse-brain image with a sufficient scan time, an ex-vivo experiment on a mouse brain was conducted for this study. Using FSL, a tool for analyzing tensor images, we propose a semi-automated segmentation and tractography analysis pipeline for the mouse brain and apply it to various mouse models. In addition, to determine the useful signal-to-noise ratio (SNR) of the diffusion tensor images acquired for the tractography analysis, images acquired with various numbers of excitations (NEX) were compared.
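The NEX-versus-SNR relationship motivating this comparison is that averaging N independent excitations reduces noise by roughly √N, so SNR grows as √NEX. A minimal simulation illustrates this; the SNR definition (mean signal over background noise standard deviation) and the Gaussian noise model are assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_SIGNAL = 50.0   # noiseless tissue intensity (arbitrary units)
NOISE_SIGMA = 10.0   # per-excitation noise standard deviation

def acquire(nex):
    """Average `nex` independent excitations of a signal region;
    averaging reduces the noise standard deviation by sqrt(nex)."""
    shots = TRUE_SIGNAL + rng.normal(0, NOISE_SIGMA, size=(nex, 64, 64))
    return shots.mean(axis=0)

def acquire_background(nex):
    """Average `nex` excitations of a signal-free background region."""
    return rng.normal(0, NOISE_SIGMA, size=(nex, 64, 64)).mean(axis=0)

def snr(signal_roi, noise_roi):
    """SNR as mean signal over background noise standard deviation
    (one common convention; the paper's exact definition is assumed)."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

snr1 = snr(acquire(1), acquire_background(1))   # NEX = 1
snr4 = snr(acquire(4), acquire_background(4))   # NEX = 4
# Averaging 4 excitations should roughly double SNR (sqrt(4) = 2)
```

This √NEX scaling is why NEX trades directly against scan time: quadrupling the acquisitions buys only a twofold SNR gain, which is the practical trade-off an ex-vivo protocol can afford to explore.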