• Title/Summary/Keyword: Image Blurring


An Image Interpolation Method using an Improved Least Square Estimation (개선된 Least Square Estimation을 이용한 영상 보간 방법)

  • Lee Dong Ho;Na Seung Je
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.10C / pp.1425-1432 / 2004
  • Because of its high performance in edge regions, the existing LSE (Least Square Estimation) method provides much better results than other interpolation methods. However, since it emphasizes not only edge components but also noise components, parts of the interpolated image can look unnatural, and it requires very high computational complexity and memory to implement. We propose a new LSE interpolation method that requires much lower complexity and memory while providing better performance than the existing method. To reduce the computational complexity, we adopt a simple sample window, and a direction detector reduces the memory requirement without blurring the image. To avoid emphasizing noise components, a bilinear interpolation term is added to the LSE formula. Simulation results show that the proposed method provides better subjective and objective performance with lower complexity than the existing method.
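
For illustration, a minimal sketch of the key idea described in this abstract: blending a direction-adaptive estimate with plain bilinear averaging so that noise is not over-emphasized. This is not the authors' exact formulation; the blend weight `alpha` and the 2x-upscaling layout are assumptions.

```python
import numpy as np

def upscale2x_directional_bilinear(img, alpha=0.5):
    """2x upscaling sketch: blend a direction-adaptive estimate with
    bilinear averaging. alpha=0 is purely directional, alpha=1 purely
    bilinear (alpha is a hypothetical tuning parameter)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w))
    out[::2, ::2] = img  # copy original samples

    # Interpolate the "diagonal" missing pixels.
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = img[i, j], img[i + 1, j + 1]   # 135-degree pair
            c, d = img[i, j + 1], img[i + 1, j]   # 45-degree pair
            # Simple direction detector: interpolate along the diagonal
            # with the smaller intensity difference (the likely edge).
            directional = (a + b) / 2 if abs(a - b) < abs(c - d) else (c + d) / 2
            bilinear = (a + b + c + d) / 4
            out[2 * i + 1, 2 * j + 1] = (1 - alpha) * directional + alpha * bilinear
    # The remaining (axial-neighbor) pixels are filled analogously; omitted.
    return out
```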

Color Transient Improvement Algorithm Based on Image Fusion Technique (영상 융합 기술을 이용한 색 번짐 개선 방법)

  • Chang, Joon-Young;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.50-58 / 2008
  • In this paper, we propose a color transient improvement (CTI) algorithm based on image fusion to improve color transients in the television (TV) receiver or in the MPEG decoder. Video signals are composed of one luminance and two chrominance components, and the chrominance signals have traditionally been more band-limited than the luminance signal, since the human eye usually cannot perceive changes in chrominance over small areas. Nowadays, however, with the development of advanced media such as high-definition TV (HDTV), the blurring of color is visually perceptible and affects image quality. The proposed CTI method improves the transient of the chrominance signals by exploiting the high-frequency information of the luminance signal. The high-frequency component extracted from the luminance signal is modified by spatially adaptive weights and added to the input chrominance signals. The spatially adaptive weight is estimated to minimize the $\ell_2$-norm of the error between the original and the estimated chrominance signals in a local window. Experimental results with various test images show that the proposed algorithm produces steep and natural color edge transitions and outperforms conventional algorithms in terms of both visual and numerical criteria.
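
As a rough illustration of the fusion step described in this abstract, the sketch below adds weighted luminance high frequencies to a blurred chrominance channel. The local-weight rule here is a simplified, hypothetical stand-in for the paper's $\ell_2$-minimizing estimate, which requires the original chrominance signal.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def cti_fuse(y, c, win=7, eps=1e-6):
    """Sharpen a blurred chrominance channel c using high-frequency
    content of the luminance channel y (simplified sketch)."""
    y = y.astype(np.float64)
    c = c.astype(np.float64)
    hf = -laplace(y)  # high-frequency detail of luminance
    # Hypothetical spatially adaptive weight: scale luminance detail to
    # local chrominance activity, so flat chroma regions get little detail.
    c_act = uniform_filter(np.abs(-laplace(c)), win)
    y_act = uniform_filter(np.abs(hf), win)
    w = c_act / (y_act + eps)
    return c + w * hf
```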

A Method for Reconstructing Original Images for Caption Areas in Videos Using a Block Matching Algorithm (블록 정합을 이용한 비디오 자막 영역의 원 영상 복원 방법)

  • 전병태;이재연;배영래
    • Journal of Broadcast Engineering / v.5 no.1 / pp.113-122 / 2000
  • It is sometimes necessary to remove the captions from video images that have already been broadcast and recover the original images. When the number of images requiring such recovery is small, manual processing is possible, but as the number grows it becomes very difficult to do manually. Therefore, a method for recovering the original image in caption areas is needed. Traditional research on image restoration has focused on restoring blurred images to sharp images using frequency filtering, or on video coding for transferring video images. This paper proposes a method for automatically recovering the original image using a BMA (Block Matching Algorithm). We extract information on caption regions and scene changes, which is used as prior knowledge for recovering the original image. From the caption detection result, we know the start and end frames of each caption in the video and the character areas within the caption regions. The direction of recovery is decided using the scene-change and caption-region information (the start and end frames of each caption). Along that direction, we recover the original image by performing block matching for the character components in the extracted caption region. Experimental results show that stationary images with little camera or object motion are recovered well, and that images with motion against a complex background are also recovered.
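
A generic sketch of the block matching step named in this abstract (SAD criterion over a +/- `search` pixel window); the caption-region masking and recovery-direction logic of the paper are not modeled here.

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=16, search=8):
    """Find the displacement (dy, dx) of the bsize x bsize block of `cur`
    at (top, left) that best matches a block in the reference frame `ref`,
    using the sum of absolute differences (SAD)."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    best_sad, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + bsize > ref.shape[0] or l + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[t:t + bsize, l:l + bsize].astype(np.int64)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx, best_sad
```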


A Study on Graph Representation Learning for Farmhouse Apple Quality Images with a Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae;Ju Hwan Lee;Gwang Hyun Yu;Gyeong Ju Kwon;Jin Young Kim
    • Smart Media Journal / v.12 no.1 / pp.9-16 / 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limitations of human resources in farmhouse apple quality classification. However, since convolutional neural networks accept only images of a fixed size, preprocessing such as resampling may be required, and oversampling causes loss of information from the original image, such as quality degradation and blurring. In this paper, to minimize this problem, we generate an image-patch-based graph from the original image and propose a random-walk-based positional encoding method so that a graph transformer model can be applied. The method learns position embeddings for the patches, which have no inherent positional information, based on the random-walk algorithm, and finds the optimal graph structure by aggregating useful node information through the self-attention mechanism of the graph transformer. As a result, it is robust and performs well even on new graph structures with random node order and on arbitrary graph structures determined by the location of an object in the image. In experiments on five apple quality datasets, the learning accuracy was higher than that of other GNN models by 1.3% to 4.7%, and the number of parameters was 3.59M, about 15% of the 23.52M of the ResNet18 model, so the model also achieves fast inference owing to the reduced amount of computation.
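
One common formulation of random-walk positional encoding, shown for illustration (the paper's exact variant may differ): each node is encoded by its probabilities of returning to itself after 1..k random-walk steps.

```python
import numpy as np

def random_walk_pe(adj, k=8):
    """Random-walk positional encoding. For node i the encoding is
    [P_ii, (P^2)_ii, ..., (P^k)_ii], where P = D^-1 A is the random-walk
    transition matrix, i.e. the return probability after 1..k steps."""
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1e-12)   # row-stochastic transition matrix
    n = adj.shape[0]
    pe = np.empty((n, k))
    pk = np.eye(n)
    for step in range(k):
        pk = pk @ p                    # P^(step+1)
        pe[:, step] = np.diag(pk)      # return probabilities
    return pe
```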

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.4 / pp.77-84 / 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition, and when a face is captured at a distance, the image quality is often degraded by blurring and noise corruption. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; candidate costs are then selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate long-distance capture. The results show that the decision-level fusion scheme provides better results than any single classifier.
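
A minimal sketch of the three fusion rules named in this abstract, applied to normalized classifier costs; the cost-validation stage is omitted and the min-max normalization is an assumption.

```python
import numpy as np

def fuse_decisions(cost_matrices, rule="min"):
    """Decision-level fusion sketch. cost_matrices is a list of
    (n_probes x n_classes) cost arrays, one per classifier. Costs are
    min-max normalized per classifier, then fused."""
    normed = []
    for c in cost_matrices:
        lo, hi = c.min(), c.max()
        normed.append((c - lo) / (hi - lo + 1e-12))  # normalize to [0, 1]
    stack = np.stack(normed)                          # (k, n, m)
    if rule == "min":
        return stack.min(axis=0).argmin(axis=1)       # minimum rule
    if rule == "average":
        return stack.mean(axis=0).argmin(axis=1)      # average rule
    if rule == "vote":
        votes = stack.argmin(axis=2)                  # each classifier's pick
        return np.array([np.bincount(v).argmax() for v in votes.T])
    raise ValueError(rule)
```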

Correction for SPECT image distortion by non-circular detection orbits (비원형 궤도에서의 검출에 의한 SPECT 영상 왜곡 보정)

  • Lee, Nam-Yong
    • Journal of the Institute of Convergence Signal Processing / v.8 no.3 / pp.156-162 / 2007
  • The parallel beam SPECT system acquires projection data using collimators in conjunction with photon detectors. The projection data are, however, blurred by the point response function of the collimator, which defines the range of directions in which photons can be detected. Increasing the number of parallel holes per unit area of the collimator reduces this blurring, but the blurring still grows as the distance between the object and the collimator increases. In this paper we consider correction methods for artifacts caused by the non-circular orbit of a parallel beam SPECT system with many parallel holes per detector cell. We model the relationship between the object and its projection data as a linear system and propose an iterative reconstruction method that includes artifact correction. We compute the projector and backprojector required by the iterative method as sums of convolutions with distance-dependent point response functions, rather than in matrix form, where those functions are computed analytically from a single function. This dramatically reduces the computation time and memory required to generate the projector and backprojector. Simulation studies comparing the proposed method with the conventional Fourier method show that the proposed method outperforms it both objectively and subjectively.
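
A toy sketch of the projector idea in this abstract: each image row is convolved with a distance-dependent point response function and the blurred rows are summed into a 1-D projection. The Gaussian PSF, its linear width model, and the parameters `sigma0` and `slope` are assumptions, not the paper's analytically derived functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def project(activity, sigma0=0.5, slope=0.05):
    """Parallel-beam projector sketch: row `r` of the 2-D activity map
    lies at distance r from the collimator, so it is blurred with a
    Gaussian PSF whose width grows with distance, then accumulated.
    The backprojector is the adjoint: spread each detector value back
    with the same row-wise PSFs."""
    proj = np.zeros(activity.shape[1])
    for row in range(activity.shape[0]):
        sigma = sigma0 + slope * row  # PSF widens with distance
        proj += gaussian_filter1d(activity[row].astype(np.float64), sigma)
    return proj
```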


Evaluation of the Usefulness of Nuclear Medicine Images with Onco.Flash Processing (Onco. Flash Processing 적용에 따른 핵의학 영상의 유용성 평가)

  • Kim, Jung-Soo;Kim, Byung-Jin;Kim, Jin-Eui;Woo, Jae-Ryong;Kim, Hyun-Joo;Shin, Heui-Won
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.1 / pp.13-18 / 2008
  • Purpose: The choice of image processing algorithm plays an important part in determining nuclear medicine image quality. The purpose of this study is to apply a new image processing method, SIEMENS Onco.Flash reconstruction (developed by Pixon), and to compare it with the existing image processing technique in order to evaluate its clinical usefulness. Materials & Methods: 1. Whole-body bone scan: blinding test at scan speeds of 20 cm/min, 30 cm/min, and 40 cm/min. 2. Bone static spot scan: blinding test on regional views of 200 kcts and 400 kcts for the chest, pelvis, and foot. 3. Four-quadrant bar phantom: visual evaluation at 20,000 kcts. 4. LSF-FWHM resolution comparison analysis. Results: 1. Raw data (20 cm/min) and processed data (30 cm/min) showed similar image quality. 2. Low-count static images showed clearly improved quality on visual evaluation. 3. Visual evaluation with the quadrant bar phantom showed improved image quality. 4. The resolution comparison (FWHM) showed no meaningful difference. Conclusion: This study confirms that applying the new Onco.Flash reconstruction yields clearly better image quality than the existing method. The new reconstruction improves resolution and reduces noise, which enhances the diagnostic value of such images for radiologists and physicians and allows a reduction in radiation dose for the same image quality. Together with increased equipment availability, shorter patient waiting times, and more effective radiation protection, this suggests that the method is sufficiently useful for clinical nuclear medicine.
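
For reference, a standard way to measure the FWHM used in the LSF-FWHM analysis above, from a sampled line spread function with linear interpolation between samples (a generic sketch, not the authors' software).

```python
import numpy as np

def fwhm(lsf, dx=1.0):
    """Full width at half maximum of a sampled line spread function.
    dx is the sample spacing (e.g., pixel size in mm)."""
    lsf = np.asarray(lsf, dtype=np.float64)
    half = lsf.max() / 2.0
    above = np.where(lsf >= half)[0]
    i, j = above[0], above[-1]
    # Linearly interpolate the half-maximum crossings on both flanks.
    left = i - (lsf[i] - half) / (lsf[i] - lsf[i - 1]) if i > 0 else i
    right = j + (lsf[j] - half) / (lsf[j] - lsf[j + 1]) if j < len(lsf) - 1 else j
    return (right - left) * dx
```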


A Study on the Application of the LEAP Collimator in Brain Diamox Perfusion Tomography with Flash 3D Reconstruction: One-Day Subtraction Method (Flash 3D 재구성을 적용한 뇌 혈류 부하 단층 촬영 시 LEAP 검출기의 적용에 관한 연구: One Day Subtraction Method)

  • Choi, Jong-Sook;Jung, Woo-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.102-109 / 2009
  • Purpose: Flash 3D (Pixon(R) method; 3D OSEM) was developed as a software program to shorten examination time and improve image quality through reconstruction, and it is an image processing method that can usefully be applied to nuclear medicine tomography. When a brain Diamox perfusion scan is performed with subtracted images reconstructed by Flash 3D under a shortened acquisition time, the SNR of the subtracted image is lower than that of the basal image. To increase the SNR of the subtracted image, we used LEAP collimators, placing more emphasis on sensitivity to vessel dilatation than on the resolution of brain vessels. The purpose of this study is to confirm the applicability of LEAP collimators to brain Diamox perfusion tomography and to identify proper reconstruction parameters for Flash 3D. Materials and methods: (1) Phantom evaluation: We used a Hoffman 3D brain phantom filled with $^{99m}Tc$. We obtained images with LEAP and LEHR collimators (Diamox image) and, after 6 hours (the half-life of $^{99m}Tc$ is 6 hours), obtained a second image (basal image) in the same way. We also measured the SNR and the gray-matter/white-matter ratio of each basal and subtracted image. (2) Patient image evaluation: From 2008.05 to 2009.01 we quantitatively analyzed patients examined with LEAP collimators and patients examined with LEHR collimators, each classified as a normal group, applying the parameters obtained from the phantom study. We used a one-day protocol and injected 925 MBq of $^{99m}Tc$-ECD for both the basal and Diamox acquisitions. Results: (1) Phantom evaluation: With LEHR collimators, about 41~46 kcounts per detector were measured for the basal image, 79~90 kcounts for the stress image, and 40~47 kcounts for the subtraction image; with LEAP, about 102~113 kcounts for the basal image, 188~210 kcounts for the stress image, and 94~103 kcounts for the subtraction image. The SNR of the LEHR subtraction image was about 37% lower than that of the LEHR basal image, whereas the SNR of the LEAP subtraction image was about 17% lower than that of the LEAP basal image. The gray-matter to white-matter ratio was 2.2:1 for the LEHR basal image and 1.9:1 for its subtraction image, and 2.4:1 for the LEAP basal image and 2:1 for its subtraction image. (2) Patient image evaluation: Counts acquired with LEHR collimators were about 40~60 kcounts for the basal image and 80~100 kcounts for the stress image; it was appropriate to set the FWHM to 7 mm for the basal and stress images and 11 mm for the subtraction image. LEAP yielded about 80~100 kcounts for the basal image and 180~200 kcounts for the stress image, and LEAP images could reduce blurring with the FWHM set to 5 mm for the basal and stress images and 7 mm for the subtraction image. For the basal and stress images, the LEHR image was superior to the LEAP image; for the subtraction image, however, as in the phantom experiment, the LEHR image looked rough because its SNR was decreased, while the LEAP subtraction image was better in SNR and sensitivity. For both LEHR and LEAP collimator images, the proper number of subsets and iterations was 8. Conclusions: We could obtain a clearer, higher-SNR subtraction image by using a proper filter with the LEAP collimator. When applying the one-day protocol with Flash 3D reconstruction, the LEAP collimator can be considered for acquiring a better subtraction image.
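
For reference, a generic sketch (not the scanner vendor's software) of how an FWHM-specified Gaussian post-filter, such as the 5/7/11 mm settings above, maps to the sigma parameter expected by common filtering routines.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_mm):
    """Apply a Gaussian post-filter specified by its FWHM in mm,
    using the standard conversion sigma = FWHM / (2*sqrt(2*ln 2))
    and dividing by the voxel size to get sigma in voxels."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    return gaussian_filter(volume.astype(np.float64), sigma)
```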


No-reference objective quality assessment of images using blur and blocking metrics (블러링과 블록킹 수치를 이용한 영상의 무기준법 객관적 화질 평가)

  • Jeong, Tae-Uk;Kim, Young-Hie;Lee, Chul-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.96-104 / 2009
  • In this paper, we propose a no-reference objective quality assessment metric for images. The blockiness and blurring of edge areas, to which the human visual system is sensitive, are modeled as step functions, and blocking and blur metrics are obtained by estimating the local visibility of blockiness and the edge width. For the blocking metric, horizontal and vertical blocking lines are first determined by accumulating weighted differences of adjacent pixels; the local visibility of blockiness at the intersection of blocking lines is then obtained from the total difference of amplitudes of the 2-D step function that models a blocking region. For the blur metric, the input image is first re-blurred by a Gaussian blur kernel and an edge mask image is generated. In edge blocks, the local edge width is calculated from four directional projections (horizontal, vertical, and two diagonal directions) using local extrema positions. In addition, the kurtosis and SSIM are used to compute the blur metric. The final no-reference objective metric is computed by combining those values with an appropriate function. Experimental results show that the proposed objective metrics are highly correlated with the subjective data.
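
In the same spirit as the re-blurring step in this abstract, a simplified, hypothetical no-reference blur score: re-blur the input and measure how much of the gradient energy survives. A sharp image loses much gradient energy when re-blurred; an already blurred image loses little, so the score approaches 1 for blurry inputs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reblur_blur_metric(img, sigma=2.0):
    """No-reference blur score via re-blurring (simplified sketch,
    not the paper's edge-width formulation)."""
    img = img.astype(np.float64)
    reblur = gaussian_filter(img, sigma)
    g_in = np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()
    g_rb = np.abs(np.diff(reblur, axis=1)).sum() + np.abs(np.diff(reblur, axis=0)).sum()
    return g_rb / (g_in + 1e-12)  # ~1 -> blurry input, << 1 -> sharp input
```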

A Study on the Pixel-Parallel Image Processing System for Image Smoothing (영상 평활화를 위한 화소-병렬 영상처리 시스템에 관한 연구)

  • Kim, Hyun-Gi;Yi, Cheon-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD / v.39 no.11 / pp.24-32 / 2002
  • In this paper we implemented various image processing filters using a format converter. The design method is based on realizing a large processor-per-pixel array with integrated circuit technology. The integrated structures can be classified into two types: associative parallel processors and parallel-processing DRAM (or SRAM) cells. The layout pitch of the one-bit-wide logic matches the memory cell pitch, so that high-density PEs can be arrayed in the integrated structure. The format converter design implements the control path efficiently and can exploit high-density technology without complicated controller hardware. Sequences of array instructions are generated by the host computer before processing starts and are stored in the unit controller; after processing starts, the host computer executes the pixel-parallel operations from the stored instructions. As a result, we obtained three findings: 1) simple smoothing suppresses higher spatial frequencies, reducing noise but also blurring edges; 2) a combined smoothing-and-segmentation process reduces noise while preserving sharp edges; and 3) median filtering, like smoothing and segmentation, may be applied to reduce image noise, eliminating spikes while maintaining sharp edges and preserving monotonic variations in pixel values.
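
A tiny demonstration of findings 1) and 3) above: on a step edge corrupted by an impulse, mean smoothing blurs the edge and smears the spike, while a median filter removes the spike and keeps the edge sharp.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

# A step edge (0 -> 100) corrupted by an impulse ("spike") of 255.
row = np.array([0, 0, 0, 0, 255, 0, 100, 100, 100, 100], dtype=np.float64)
print(uniform_filter(row, size=3))  # mean: spike smeared, edge spread out
print(median_filter(row, size=3))   # median: spike removed, step preserved
```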