• Title/Summary/Keyword: optimal image quality

Search Results: 205

Evaluation of Image Qualities for a Digital X-ray Imaging System Based on Gd$_2$O$_2$S(Tb) Scintillator and Photosensor Array by Using a Monte Carlo Imaging Simulation Code (몬테카를로 영상모의실험 코드를 이용한 Gd$_2$O$_2$S(Tb) 섬광체 및 광센서 어레이 기반 디지털 X-선 영상시스템의 화질평가)

  • Jung, Man-Hee;Jung, In-Bum;Park, Ju-Hee;Oh, Ji-Eun;Cho, Hyo-Sung;Han, Bong-Soo;Kim, Sin;Lee, Bong-Soo;Kim, Ho-Kyung
    • Journal of Biomedical Engineering Research / v.25 no.4 / pp.253-259 / 2004
  • In this study, we developed a Monte Carlo imaging simulation code, written in the Visual C++ programming language, for the design optimization of a digital X-ray imaging system. As the digital X-ray imaging system, we considered a Gd$_2$O$_2$S(Tb) scintillator and a photosensor array, and included a 2D parallel grid to simulate general test conditions. The interactions between the X-ray beams and the system structure, the behavior of the light generated in the scintillator, and its collection in the photosensor array were simulated by the Monte Carlo method. The scintillator thickness and the photosensor array pitch were set to 66 $\mu\textrm{m}$ and 48 $\mu\textrm{m}$, respectively, and the pixel format was 256 × 256. Using the code, we obtained X-ray images under various simulation conditions and evaluated their image quality through calculations of the SNR (signal-to-noise ratio), MTF (modulation transfer function), NPS (noise power spectrum), and DQE (detective quantum efficiency). The imaging simulation code developed in this study can be applied effectively to the design optimization of a variety of digital X-ray imaging systems over various design parameters.
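For reference, the figures of merit listed in this abstract are linked by the standard relation DQE(f) = MTF²(f) / (q · NNPS(f)), where q is the incident photon fluence and NNPS is the noise power spectrum normalized by the squared mean signal. A minimal Python sketch of that chain, assuming a line spread function and a flat-field image are already available from such a simulation (the function names and normalizations are illustrative, not the paper's code), might look as follows:

```python
# Hedged sketch: estimating MTF, NNPS, and DQE from simulated detector data,
# using DQE(f) = MTF(f)^2 / (q * NNPS(f)). Normalizations are simplified.
import numpy as np

def mtf_from_lsf(lsf, pitch_mm):
    """MTF as the normalized magnitude of the Fourier transform of an LSF."""
    mtf = np.abs(np.fft.rfft(lsf / lsf.sum()))
    freqs = np.fft.rfftfreq(lsf.size, d=pitch_mm)   # spatial frequency, cycles/mm
    return freqs, mtf / mtf[0]

def nnps_1d(flat_image, pitch_mm):
    """Normalized 1-D NPS from a flat-field image (row-averaged periodogram)."""
    rows = flat_image - flat_image.mean()
    ps = np.mean(np.abs(np.fft.rfft(rows, axis=1)) ** 2, axis=0)
    nps = ps * pitch_mm / rows.shape[1]             # NPS in mm
    return nps / flat_image.mean() ** 2

def dqe(mtf, nnps, q_per_mm):
    """DQE(f) for an incident fluence of q photons per mm (1-D analysis)."""
    return mtf ** 2 / (q_per_mm * nnps)
```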

A Study on the Use of Active Protocol Using the Change of Pitch and Rotation Time in PET/CT (PET/CT에서 Pitch와 Rotation Time의 변화를 이용한 능동적인 프로토콜 사용에 대한 연구)

  • Jang, Eui Sun;Kwak, In Suk;Park, Sun Myung;Choi, Choon Ki;Lee, Hyuk;Kim, Soo Young;Choi, Sung Wook
    • The Korean Journal of Nuclear Medicine Technology / v.17 no.2 / pp.67-71 / 2013
  • Purpose: Changes in the CT exposure conditions affect image quality and patient exposure dose. In this study, we evaluated the effect on CT image quality and SUV when the CT parameters (pitch and rotation time) were changed. Materials and Methods: A Discovery STE (GE, USA) was used as the PET/CT scanner. The GE QA phantom and the AAPM CT performance phantom were used to evaluate the noise of the CT images. Images were acquired using 24 combinations of four pitch values (0.562, 0.938, 1.375, and 1.75:1) and six X-ray tube rotation times (0.5 s to 1.0 s). PET images were acquired using the 1994 NEMA PET phantom ($^{18}F-FDG$ 5.3 kBq/mL, 2.5 min/frame). For the noise test, noise was evaluated as the standard deviation of each image's CT numbers, and the ratio of the noise expected from the change in DLP (Dose Length Product) to the experimental noise was used as an index of effectiveness. For the spatial resolution test, we checked whether the 1.0 mm holes of the AAPM CT performance phantom could be identified. Finally, we evaluated the SUV of each of the 24 images. Results: As the pitch changed, the noise efficiency was 1.00, 1.03, 1.01, and 0.96 for the QA phantom and 1.00, 1.04, 1.02, and 0.97 for the AAPM phantom. As the X-ray tube rotation time changed, it was 0.99, 1.02, 1.00, 1.00, 0.99, and 0.99 for the QA phantom and 1.01, 1.01, 0.99, 1.01, 1.01, and 1.01 for the AAPM phantom. The 1.0 mm holes could be identified in all 24 images. There was also no significant change in SUV, and the average SUV of every image was 1.1. Conclusion: In the CT image evaluation of pitch changes, a pitch of 1.75:1 was the most effective value, and it did not affect spatial resolution or SUV, whereas the change of rotation time had no effect. We therefore recommend using an effective pitch such as 1.75:1 and an X-ray tube rotation time appropriate to the patient's size.
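The "noise efficiency" index used above compares the noise expected from the dose change with the measured noise; assuming quantum-limited CT noise, the expected noise scales as the inverse square root of the dose, which is proportional to DLP. A minimal Python sketch under those assumptions (variable names are illustrative, not the authors' code):

```python
# Hedged sketch: the noise-efficiency index described in the abstract, assuming
# noise ~ 1/sqrt(dose) and dose proportional to DLP. Values near 1.0 mean the
# measured noise follows the dose-based prediction.
import numpy as np

def measured_noise(roi):
    """Noise as the standard deviation of the CT numbers inside an ROI."""
    return float(np.std(roi))

def noise_efficiency(roi, dlp, roi_ref, dlp_ref):
    """Noise expected from scaling a reference acquisition by the DLP change,
    divided by the noise actually measured in the test acquisition."""
    expected = measured_noise(roi_ref) * np.sqrt(dlp_ref / dlp)
    return expected / measured_noise(roi)
```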


A Study on the Generation of Ultrasonic Binary Image for Image Segmentation (Image segmentation을 위한 초음파 이진 영상 생성에 관한 연구)

  • Choe, Heung-Ho;Yuk, In-Su
    • Journal of Biomedical Engineering Research / v.19 no.6 / pp.571-575 / 1998
  • One of the most significant features of diagnostic ultrasound instruments is that they provide real-time information on soft tissue movements. Echocardiography has been widely used for the diagnosis of heart diseases since it can show real-time images of the heart valves and walls. However, currently used ultrasound images are degraded by speckle noise and image dropout, so it is very important to develop a new technique that can enhance ultrasound images. In this study, a technique that extracts enhanced binary images from echocardiograms was proposed. For this purpose, a digital moving-image file was made from an analog echocardiogram, and each frame was stored as an 8-bit gray-level image. For efficient image processing, the region containing the heart septum and tricuspid valve was selected as the region of interest (ROI). Image enhancement filters and morphology filters were used to reduce speckle noise in the images. The procedure proposed in this paper produced binary images with better-defined contours than those from the conventional threshold technique applied to the original images, and it can be further used for quantitative analysis of left ventricular wall motion in echocardiograms through easier detection of the heart wall contours.
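A minimal sketch of the kind of pipeline the abstract describes (ROI selection, speckle reduction, thresholding, morphological cleanup) is shown below; the ROI bounds, filter sizes, and threshold rule are illustrative assumptions rather than the paper's exact settings.

```python
# Hedged sketch: ROI selection, speckle smoothing, thresholding, and
# morphological cleanup for one 8-bit echocardiogram frame.
import numpy as np
from scipy import ndimage

def binarize_frame(frame_8bit, roi=(slice(100, 300), slice(150, 400))):
    roi_img = frame_8bit[roi].astype(float)
    smoothed = ndimage.median_filter(roi_img, size=5)        # reduce speckle noise
    threshold = smoothed.mean() + 0.5 * smoothed.std()       # simple global threshold
    binary = smoothed > threshold
    binary = ndimage.binary_opening(binary, iterations=2)    # remove isolated specks
    binary = ndimage.binary_closing(binary, iterations=2)    # fill small dropouts
    return binary
```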


Motion Vector Estimation using T-shape Diamond Search Algorithm (TDS 기법을 이용한 움직임 벡터 추정)

  • Kim, Ki-Young;Jung, Mi-Gyoung
    • The KIPS Transactions:PartB / v.11B no.3 / pp.309-316 / 2004
  • In this paper, we propose the TDS (T-shape Diamond Search) algorithm, which uses the points above, below, left of, and right of the current position to estimate motion vectors quickly and more accurately. The method exploits the fact that most motion vectors are enclosed in a circular region with a radius of 2 pixels around the search center (0,0). First, the four points above, below, left of, and right of the search center are evaluated to find the point with the MBD (Minimum Block Distortion). The point above the MBD point is then checked by calculating its SAD; if that SAD is less than the previous MBD, this process is repeated. Otherwise, the points to the right and left of the MBD point are evaluated to decide which of them has the minimum distortion. These steps are repeated along the predicted direction of motion. In particular, if the motion in the image is concentrated in the cross directions, the points in the other directions are omitted, so motion vectors can be estimated quickly. Experiments show that the speedup of the proposed algorithm over the Diamond Search (DS) and Hexagon-Based Search (HEXBS) algorithms can reach 38∼50% while maintaining similar image quality.
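The first step of the search described above can be sketched as SAD-based block matching over the cross-shaped candidate set around the search center; the fragment below is a simplified illustration under that reading, not the full TDS procedure (boundary handling and the directional continuation are omitted).

```python
# Hedged sketch: SAD block matching and evaluation of the above/below/left/right
# candidates around a search center, the first step of the search described above.
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between the n x n block of the current frame
    at (bx, by) and the reference-frame block displaced by (dx, dy)."""
    c = cur[by:by + n, bx:bx + n].astype(int)
    r = ref[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(int)
    return int(np.abs(c - r).sum())

def best_cross_candidate(cur, ref, bx, by, center=(0, 0)):
    """Return the displacement with minimum block distortion (MBD) among the
    center and its above, below, left, and right neighbours."""
    cx, cy = center
    candidates = [(cx, cy), (cx, cy - 1), (cx, cy + 1), (cx - 1, cy), (cx + 1, cy)]
    return min(candidates, key=lambda d: sad(cur, ref, bx, by, d[0], d[1]))
```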

Spatio-temporal Mode Selection Methods of Fast H.264 Using Multiple Reference Frames (다중 참조 영상을 이용한 고속 H.264의 움직임 예측 모드 선택 기법)

  • Kwon, Jae-Hyun;Kang, Min-Jung;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.3C / pp.247-254 / 2008
  • H.264 provides better coding efficiency than existing video coding standards such as H.263 and MPEG-4, based on the use of multiple reference frames for variable-block-size motion estimation, quarter-pixel motion estimation and compensation, a $4{\times}4$ integer DCT, rate-distortion optimization, and so on. However, the many modules used to increase its performance also increase its complexity, so fast algorithms must be implemented as a practical approach. In this paper, a fast mode decision algorithm is proposed that skips variable-block-size motion estimation and spatial-predictive coding, which account for most of the encoder complexity. The approach takes advantage of the temporal and spatial properties exploited by fast mode selection techniques. Experimental results demonstrate that the proposed approach can reduce encoding time by up to 65% compared with the H.264 standard while maintaining visual quality.
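As an illustration only (the abstract does not give the exact decision rule), a spatio-temporal mode pre-selection heuristic of the kind it refers to could restrict the candidate modes when the co-located and neighbouring macroblocks agree:

```python
# Hedged illustration, not the paper's algorithm: restrict the H.264 mode search
# when the co-located macroblock (temporal) and the left/upper neighbours
# (spatial) agree on a single mode; otherwise fall back to the full search.
MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "INTRA"]

def candidate_modes(colocated_mode, left_mode, upper_mode):
    hints = {m for m in (colocated_mode, left_mode, upper_mode) if m is not None}
    if len(hints) == 1:          # strong spatio-temporal agreement
        return list(hints)       # evaluate a single mode, skipping the rest
    return MODES                 # no agreement: evaluate all candidate modes
```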

A Comparative Study of Image Quality and Radiation Dose according to Variable Added Filter and Radiation Exposure in Diagnostic X-Ray Radiography (진단용 X-선 촬영시 부가 필터 및 노출의 변화에 따른 피폭선량 및 영상 화질 비교 연구)

  • Choi, Nam-Gil;Seong, Ho-Jin;Jeon, Joo-Seop;Kim, Youn-Hyun;Seong, Dong-Ook
    • Journal of Radiation Protection and Research / v.37 no.1 / pp.25-34 / 2012
  • To determine which parameters achieve the lowest radiation exposure to patients and the highest image quality in diagnostic X-ray radiography, we measured the patient radiation dose and the quality of the images transmitted to PACS (Picture Archiving and Communication System) for various combinations of added filters. As a result, the Dose Area Product (DAP: $mGy{\cdot}cm^2$) and Entrance Surface Doses (ESDs: $mGy$) were lowest with 1 mmAl + 0.2 mmCu and highest with 0 mmAl. The histograms of the images transmitted to PACS, analyzed in MATLAB, were not significantly different across the combinations of exposure parameters. In conclusion, this study can be helpful for estimating radiation dose and controlling exposure parameters in diagnostic X-ray radiography.

Development of Learning Algorithm using Brain Modeling of Hippocampus for Face Recognition (얼굴인식을 위한 해마의 뇌모델링 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.55-62 / 2005
  • In this paper, we propose a face recognition system using the HNMA (Hippocampal Neuron Modeling Algorithm), which models the cerebral cortex and hippocampal neurons on the principles of the human brain in engineering terms, so that it can learn the feature vectors of face images very quickly and construct an optimized feature for each image. The system is composed of two parts: feature extraction, and learning and recognition. In the feature-extraction part, well-separated features are constructed by applying PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) in order. In the learning part, the features of the input image data are mapped, following the structure of hippocampal neurons, to reaction patterns through excitation adjustment in the dentate gyrus region, and noise is removed through the associative memory of the CA3 region. In the CA1 region, which receives the information from CA3, long-term memory is formed through neuronal learning. Experiments measured the recognition rate for face changes, pose changes, and low-quality images. The experimental results, comparing the feature extraction and learning method proposed in this paper with other methods, confirm that the proposed method is superior to the existing methods.
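Only the feature-extraction stage of this pipeline is standard enough to sketch; the fragment below applies PCA followed by LDA in order, as the abstract describes, using scikit-learn (the component count and variable names are illustrative assumptions, and the hippocampal learning stage is not modeled).

```python
# Hedged sketch of the feature-extraction part only: PCA then LDA, applied in
# order to flattened face images X (n_samples x n_pixels) with identity labels y.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pca_lda_features(X, y, n_pca=50):
    pca = PCA(n_components=n_pca).fit(X)                  # dimensionality reduction
    X_pca = pca.transform(X)
    lda = LinearDiscriminantAnalysis().fit(X_pca, y)      # maximize class separability
    return lda.transform(X_pca), pca, lda
```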

DCT Coefficient Block Size Classification for Image Coding (영상 부호화를 위한 DCT 계수 블럭 크기 분류)

  • Gang, Gyeong-In;Kim, Jeong-Il;Jeong, Geun-Won;Lee, Gwang-Bae;Kim, Hyeon-Uk
    • The Transactions of the Korea Information Processing Society / v.4 no.3 / pp.880-894 / 1997
  • In this paper, we propose a new algorithm that performs the DCT (Discrete Cosine Transform) only within an area reduced by predicting which quantization coefficients will be zero. The proposed algorithm not only decreases the encoding and decoding time by reducing the amount of FDCT (Forward DCT) and IDCT (Inverse DCT) computation, but also increases the compression ratio by applying a different horizontal or vertical zig-zag scan, according to the classified block size of each block, in the Huffman coding. Traditional image coding performs the same DCT computation and zig-zag scan over all blocks; in contrast, the proposed algorithm reduces FDCT computation time on the encoding side by setting to zero, instead of computing, the quantization coefficients outside the classified block size. Likewise, it reduces IDCT computation on the decoding side by performing the IDCT only for the dequantized coefficients within the classified block size. In addition, the algorithm shortens the run lengths by carrying out a horizontal or vertical zig-zag scan appropriate to the classified block characteristics, which improves the compression ratio. Furthermore, the proposed algorithm can be applied to 16×16 block processing, where the compression ratio and image resolution are optimal but the encoding and decoding times are long, and it can be extended to moving-image coding that requires real-time processing.
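The core idea above, performing the DCT only inside a classified low-frequency area and treating everything outside it as zero, can be sketched as follows; the 8×8 block size and the fixed 4×4 "classified" area are illustrative assumptions, since the paper classifies the area per block.

```python
# Hedged sketch: keep only the coefficients inside the classified (low-frequency)
# sub-block of an 8x8 DCT block and set the rest to zero, so the inverse DCT on
# the decoder side only has to use the retained coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, keep=4):
    """2-D DCT of an 8x8 block, retaining only the keep x keep low-frequency corner."""
    coeffs = dctn(block.astype(float), norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0          # classified block size
    return coeffs * mask              # coefficients outside the area are zeroed

def decode_block(coeffs):
    """Inverse 2-D DCT; only the retained (classified) coefficients contribute."""
    return idctn(coeffs, norm="ortho")
```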


An Optimal Structure of a Novel Flat Panel Detector to Reduce Scatter Radiation for Clinical Usage: Performance Evaluation with Various Angle of Incident X-ray (산란선 제거를 위한 신개념 간접 평판형 검출기의 임상적용을 위한 최적 구조 : 입사 X선 각도에 따른 성능평가)

  • Yoon, Yongsu
    • Journal of radiological science and technology / v.40 no.4 / pp.533-542 / 2017
  • In diagnostic radiology, imaging systems have changed from film/screen to digital systems. However, the methods for removing scatter radiation, such as the anti-scatter grid, have not kept pace with this change. Therefore, the authors have devised an indirect flat panel detector (FPD) system with net-like lead foil in the substrate layer, which can remove the scattered radiation. In the clinical context, many radiographic examinations use an angulated incident X-ray beam; because the proposed FPD has a net-like lead foil, the vertical lead foil component would adversely affect its performance under an angulated incident X-ray. In this study, we identified the effect of the vertical and horizontal lead foil components on the performance of the novel system and improved its structure for clinical use with angulated incident X-rays. The grid exposure factor and the image contrast were calculated for various structures of the novel system using Monte Carlo simulation software, with the incident X-ray tilted ($0^{\circ}$, $15^{\circ}$, and $30^{\circ}$) from the detector plane. More photons were needed to obtain the same image quality in the novel system with vertical lead foil only than in the system with horizontal lead foil only. An optimal structure with different heights for its vertical and horizontal lead foil components showed improved performance compared with the novel system of a previous study. Therefore, the novel system will be useful in clinical contexts with angulated incident X-rays if the height and direction of the lead foil in the substrate layer are optimized for the conditions of conventional radiography.
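For reference, the two figures of merit named in the abstract are conventionally defined for anti-scatter devices roughly as below (standard textbook forms, which may differ in detail from the paper's Monte Carlo definitions), with $P$ and $S$ the primary and scattered contributions reaching the detector, SPR the scatter-to-primary ratio, and $E$ the entrance exposure required for a fixed detector signal:

$$\text{exposure (Bucky) factor} = \frac{E_{\text{with grid}}}{E_{\text{without grid}}}, \qquad \text{contrast degradation factor} = \frac{P}{P+S} = \frac{1}{1+\mathrm{SPR}}$$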

Learning-based Super-resolution for Text Images (글자 영상을 위한 학습기반 초고해상도 기법)

  • Heo, Bo-Young;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.175-183 / 2015
  • The proposed algorithm consists of two stages: the learning stage and the synthesis stage. At the learning stage, we first collect various high-resolution (HR) and low-resolution (LR) text image pairs, quantize the LR images, and extract HR-LR block pairs. Based on the quantized LR blocks, the LR-HR block pairs are clustered into a pre-determined number of classes. For each class, an optimal 2D-FIR filter is computed and stored in a dictionary together with the corresponding LR block for indexing. At the synthesis stage, each quantized LR block of an input LR image is compared with every LR block in the dictionary, and the FIR filter of the best-matched LR block is selected. Finally, an HR block is synthesized with the chosen filter, and a final HR image is produced. In addition, to cope with noisy environments, we generate multiple dictionaries according to noise level at the learning stage, so that the dictionary corresponding to the noise level of the input image is chosen and the final HR image is produced using the selected dictionary. Experimental results show that the proposed algorithm outperforms previous works for noisy as well as noise-free images.
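A simplified sketch of the learning and synthesis steps described above: one linear (flattened 2D FIR) filter is fit per class by least squares, and at synthesis time the filter of the best-matched dictionary entry is applied. The clustering and quantization steps are omitted, and the data layout is an illustrative assumption.

```python
# Hedged sketch: per-class least-squares filter learning and dictionary-based
# synthesis for text-image super-resolution, simplified from the abstract.
import numpy as np

def learn_class_filter(lr_patches, hr_blocks):
    """lr_patches: (n, k*k) flattened LR blocks of one class;
    hr_blocks: (n, m*m) flattened HR blocks. Returns a (k*k, m*m) filter."""
    F, *_ = np.linalg.lstsq(lr_patches, hr_blocks, rcond=None)
    return F

def synthesize_block(lr_patch, dictionary):
    """dictionary: list of (representative LR block, filter) pairs for one noise
    level; the closest representative selects the filter applied to lr_patch."""
    keys = np.stack([k for k, _ in dictionary])
    idx = int(np.argmin(((keys - lr_patch) ** 2).sum(axis=1)))
    return lr_patch @ dictionary[idx][1]
```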