• Title/Summary/Keyword: Image Edge

The development of product inspection X-ray DR image processing system using intensifying screen (형광지를 이용한 물품검사 X-선 DR 영상처리 시스템 개발)

  • Park, Mun-kyu;Moon, Ha-jung;Lee, Dong-hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.7 / pp.1737-1742 / 2015
  • In industrial product inspection, defects must be detected not only on the surface of a product but also in its internal components. Optical inspection is generally used for item inspection in the production process, but it only detects surface defects and cannot examine the interior of goods. To overcome this limitation, a system using a portable X-ray DR image acquisition device instead of an optical device was developed to acquire images in real time and determine product defects. After the X-ray image is acquired and processed, products within the allowable error range are passed, and the results and counts are stored by users.

Adult Image Detection Using an Intensity Filter and an Improved Hough Transform (명암 필터와 개선된 허프 변환을 이용한 성인영상 검출)

  • Jang, Seok-Woo;Kim, Sang-Hee;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.5 / pp.45-54 / 2009
  • In this paper, we propose an adult image detection algorithm using a mean intensity filter and an improved 2D Hough transform. The method consists of three major steps: a training step, a recognition step, and a verification step. The training step generates a mean nipple variance filter that is used to detect nipple candidate regions in the recognition step. To build this filter, we convert an input color image into a gray-scale image, normalize it, and construct an average intensity filter for nipple areas. The recognition step first extracts edge images, finds connected components, and selects nipple candidate regions by considering the width-to-height ratio of each connected component. It then decides the final nipple candidates by calculating the similarity between the learned nipple average intensity filter and the candidate areas, and detects the breast lines of the input image through the improved 2D Hough transform. The verification step detects breast areas and identifies adult images by considering the relation between the nipple candidate regions and the locations of the breast lines.
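
The candidate-selection stage described in this abstract (edge extraction, connected components, width-to-height filtering, and similarity to a learned mean-intensity template) can be sketched roughly as below; the edge thresholds, aspect-ratio bounds, and minimum similarity are illustrative assumptions, not values from the paper.

```python
import cv2

def nipple_candidates(gray, mean_template, ratio_range=(0.7, 1.4), min_sim=0.6):
    """Rough sketch: edge map -> connected components -> aspect-ratio filter ->
    similarity against a learned mean-intensity template (all parameters assumed)."""
    edges = cv2.Canny(gray, 50, 150)                        # edge image
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    candidates = []
    for i in range(1, n):                                   # label 0 is the background
        x, y, w, h, area = stats[i]
        if h == 0 or not (ratio_range[0] <= w / h <= ratio_range[1]):
            continue                                        # reject by width/height ratio
        patch = cv2.resize(gray[y:y + h, x:x + w], mean_template.shape[::-1])
        sim = cv2.matchTemplate(patch, mean_template, cv2.TM_CCOEFF_NORMED)[0, 0]
        if sim >= min_sim:                                  # keep template-like regions only
            candidates.append((x, y, w, h, float(sim)))
    return candidates
```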

Edge-adaptive demosaicking method for complementary color filter array of digital video cameras (디지털 비디오 카메라용 보색 필터를 위한 에지 적응적 색상 보간 방법)

  • Han, Young-Seok;Kang, Hee;Kang, Moon-Gi
    • Journal of Broadcast Engineering / v.13 no.1 / pp.174-184 / 2008
  • The complementary color filter array (CCFA) is widely used in consumer-level digital video cameras, since it not only has high sensitivity and a good signal-to-noise ratio in low-light conditions but is also compatible with the interlaced scanning used in broadcast systems. However, full-color images obtained from a CCFA suffer from color artifacts such as false color and zipper effects. These artifacts can be removed with edge-adaptive demosaicking (ECD) approaches, which are generally used for the primary color filter array (PCFA). Unfortunately, the unique array pattern of the CCFA makes it difficult to adopt ECD approaches directly. Therefore, applying ECD approaches suited to the CCFA is one of the major issues in reconstructing full-color images. In this paper, we propose a new ECD algorithm for the CCFA. To estimate edge directions precisely and enhance the quality of the reconstructed image, a function of spatial variances is used as a weight, and new color conversion matrices are presented to account for various edge directions. Experimental results indicate that the proposed algorithm outperforms the conventional method with respect to both objective and subjective criteria.
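
The core of edge-adaptive interpolation, weighting horizontal and vertical estimates by a function of the local spatial variance, can be illustrated on a single missing sample as follows; the neighborhood size and inverse-variance weighting are assumptions for illustration and do not reproduce the paper's CCFA color conversion matrices.

```python
import numpy as np

def edge_adaptive_sample(channel, y, x, eps=1e-6):
    """Estimate a missing sample at (y, x) from its horizontal and vertical
    neighbors, weighting each directional average by the inverse of the local
    variance along that direction (a low variance suggests an edge running that way)."""
    horiz = np.array([channel[y, x - 2], channel[y, x - 1],
                      channel[y, x + 1], channel[y, x + 2]], dtype=float)
    vert = np.array([channel[y - 2, x], channel[y - 1, x],
                     channel[y + 1, x], channel[y + 2, x]], dtype=float)
    w_h = 1.0 / (horiz.var() + eps)      # prefer the smoother (edge-aligned) direction
    w_v = 1.0 / (vert.var() + eps)
    est_h = 0.5 * (horiz[1] + horiz[2])  # average of the two nearest row neighbors
    est_v = 0.5 * (vert[1] + vert[2])    # average of the two nearest column neighbors
    return (w_h * est_h + w_v * est_v) / (w_h + w_v)
```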

3D Stereoscopic Image Generation of a 2D Medical Image (2D 의료영상의 3차원 입체영상 생성)

  • Kim, Man-Bae;Jang, Seong-Eun;Lee, Woo-Keun;Choi, Chang-Yeol
    • Journal of Broadcast Engineering / v.15 no.6 / pp.723-730 / 2010
  • Recently, diverse 3D image processing technologies have been applied in industry. Among them, stereoscopic conversion is a technology that generates a stereoscopic image from a conventional 2D image. The technology can be applied to movie and broadcasting content, allowing viewers to watch 3D stereoscopic content, and there is growing demand to apply it in other fields as well. Following this trend, the aim of this paper is to apply stereoscopic conversion to the medical field. Compared with a 2D plane image, a stereoscopic medical image can deliver more detailed 3D information. This paper presents a novel methodology for converting a 2D medical image into a 3D stereoscopic image. For this, mean shift segmentation, edge detection, intensity analysis, etc. are utilized to generate a final depth map. From the image and the depth map, left and right images are constructed. In the experiments, the proposed method is performed on medical images such as CT (computed tomography). The stereoscopic image displayed on a 3D monitor shows satisfactory performance.
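
The last step described above, constructing left and right views from the image and its depth map, is commonly implemented by shifting pixels horizontally in proportion to depth (depth-image-based rendering). The sketch below assumes a depth map normalized to [0, 1] and an arbitrary maximum disparity; hole filling, which a production pipeline would need, is omitted.

```python
import numpy as np

def render_stereo(image, depth, max_disparity=16):
    """Shift each pixel horizontally by a disparity proportional to its depth;
    occluded or unfilled positions remain zero (illustrative DIBR only)."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if x + d < w:
                left[y, x + d] = image[y, x]     # left-eye view shifts content right
            if x - d >= 0:
                right[y, x - d] = image[y, x]    # right-eye view shifts content left
    return left, right
```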

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.11-22 / 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system that uses image processing algorithms. The proposed IMToon allows general users to easily and efficiently produce the frames that make up an image-based cartoon. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading receives images of the desired scenes from users, separates the brightness information from the color model of the input images, simplifies it to a shading range with the desired number of steps, and recreates the images in a cartoon style. The final cartoon-style images are then created through the outline drawing step, in which outlines obtained by edge detection are applied to the shaded images. The interactive story editor is used to enter text balloons and subtitles in a dialog structure to create a scene of the completed cartoon that delivers a story, such as a webtoon or comic book. In addition, the cartoon effector is extended to video so that it can be applied to videos as well as still images. Finally, various experiments are conducted to verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
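
The cartoon-effector pipeline summarized above (separating brightness from the color model, quantizing it into a few shading levels, and overlaying detected edges as outlines) can be sketched with OpenCV as follows; the number of shading levels and the Canny thresholds are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def cartoon_effect(bgr, levels=4, edge_lo=80, edge_hi=160):
    """Quantize the brightness channel into a few shading bands and draw
    detected edges as black outlines (rough sketch of the described steps)."""
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    step = 256 // levels
    v_quant = ((v // step) * step + step // 2).astype(np.uint8)   # simplified shading
    shaded = cv2.cvtColor(cv2.merge([h, s, v_quant]), cv2.COLOR_HSV2BGR)
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), edge_lo, edge_hi)
    shaded[edges > 0] = 0                                         # overlay outlines
    return shaded
```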

Hybrid Watermarking Technique using DWT Subband Structure and Spatial Edge Information (DWT 부대역구조와 공간 윤곽선정보를 이용한 하이브리드 워터마킹 기술)

  • Seo, Young-Ho;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.706-715 / 2004
  • In this paper, the watermark embedding positions are decided and the watermark is embedded using the subband tree structure of the wavelet domain and the edge information of the spatial domain. The significant frequency region is estimated by searching the subbands from the higher-frequency subband to the lower-frequency subband. The LH1 subband, which has the highest frequency in the tree structure of the wavelet domain, is divided into 4×4 submatrices, and the threshold used in watermark embedding is obtained from the block matrix formed by the averages of the 4×4 submatrices. The watermark embedding positions, the Keymap, are generated from the block matrix describing the energy distribution in the frequency domain and the edge information in the spatial domain. The watermark is embedded into the wavelet coefficients using the Keymap and a random sequence generated by an LFSR (linear feedback shift register); after the inverse wavelet transform, the watermarked image is obtained. Under attacks such as JPEG compression and general image processing operations like blurring, sharpening, and Gaussian noise, the proposed watermarking algorithm showed a PSNR over 2 dB higher and results 2% to 8% better than previous research.
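
The general mechanism described here, spreading watermark bits with an LFSR-generated pseudo-random sequence and adding them to selected high-frequency wavelet coefficients, might look like the sketch below; the LFSR taps, the embedding strength, the coefficient-selection rule, and the use of PyWavelets are assumptions for illustration and do not reproduce the authors' Keymap construction.

```python
import numpy as np
import pywt

def lfsr_sequence(seed, n, taps=(16, 14, 13, 11)):
    """16-bit Fibonacci LFSR emitting a +/-1 chip sequence (tap positions assumed)."""
    state, chips = seed & 0xFFFF, []
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        chips.append(1.0 if bit else -1.0)
    return np.array(chips)

def embed_watermark(image, bits, seed=0xACE1, alpha=2.0):
    """Additively embed watermark bits, spread by the LFSR chips, into the
    largest-magnitude level-1 horizontal detail coefficients (illustrative only)."""
    cA, (cH, cV, cD) = pywt.wavedec2(image.astype(float), 'haar', level=1)
    bits = 2.0 * np.asarray(bits, dtype=float) - 1.0          # map {0,1} -> {-1,+1}
    order = np.argsort(np.abs(cH), axis=None)[::-1][:bits.size]
    idx = np.unravel_index(order, cH.shape)
    cH[idx] += alpha * lfsr_sequence(seed, bits.size) * bits  # spread-spectrum embedding
    return pywt.waverec2([cA, (cH, cV, cD)], 'haar')
```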

Design and Fabrication of 32x32 Foveated CMOS Retina Chip for Edge Detection with Local-Light Adaptation (국소 광적응 기능을 가지는 윤곽검출용 32x32 방사형 CMOS 시각칩의 설계 및 제조)

  • Park, Dae-Sik;Park, Jong-Ho;Kim, Kyung-Moon;Lee, Soo-Kyung;Kim, Hyun-Soo;Kim, Jung-Hwan;Lee, Min-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.11 no.2 / pp.84-92 / 2002
  • A 32×32-pixel foveated (linear-polar) retina chip with a local-light adaptation function for edge detection has been designed and fabricated using CMOS technology. The human retina can detect a wide range of light intensities. In this study, we use a biologically inspired visual signal processing mechanism consisting of photoreceptors, horizontal cells, and bipolar cells to implement the edge detection function in the retina chip. For local-light adaptation, the size of the receptive field changes locally according to the input light intensity. The spatial distribution of sensing pixels in the foveated retina chip has the advantage of selectively reducing image data while providing good resolution in the central part for elaborate image processing and still sufficient resolution in the outer parts. The designed chip was fabricated using a standard 0.6 μm double-poly triple-metal CMOS technology and optimized using the HSPICE simulator.

A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods (영상보간법을 이용한 디지털 치근단 방사선영상의 개선에 관한 연구)

  • Song Nam-Kyu;Koh Kwang-Joon
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.28 no.2 / pp.387-413 / 1998
  • Image resampling is of particular interest in digital radiology. When an image is resampled to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display. Interpolation is the process of fitting a continuous function to the discrete points of a digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained with Digora, CDR, and by scanning Ektaspeed Plus periapical radiographs of a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 seconds. To determine which interpolation method provides the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, and (7) gray segment expansion. The resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient values. The results were as follows: 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora, and film scans (P<0.05). 3. There were significant differences in SNR values between 60 kVp and 70 kVp for the seven interpolation methods; the facet model method differed significantly from the other methods at 60 kVp (P<0.05), but there were no significant differences among the seven interpolation methods at 70 kVp (P>0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest-neighbor method and slowest with the non-linear method. 6. The better images were obtained with the cubic convolution, cubic spline, and gray segment methods in ROC analysis. 7. The best edge sharpness was obtained with the gray segment expansion method among the seven interpolation methods.
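
As a rough illustration of this kind of comparison, the snippet below resamples an image with several standard interpolation kernels and measures the SNR against the original; the kernels available in OpenCV only approximate some of the seven functions studied (facet model and gray segment expansion, for example, are not standard library options), and the downsample-then-upsample protocol is an assumption for demonstration.

```python
import cv2
import numpy as np

def snr_db(reference, test):
    """Signal-to-noise ratio in dB of a resampled image against its reference."""
    ref = reference.astype(float)
    noise = np.sum((ref - test.astype(float)) ** 2)
    return 10.0 * np.log10(np.sum(ref ** 2) / (noise + 1e-12))

def compare_interpolations(image, scale=2.0):
    """Downsample, then upsample back with each kernel, and report the SNR of
    the result against the original image (illustrative protocol, not the paper's)."""
    kernels = {'nearest': cv2.INTER_NEAREST, 'linear': cv2.INTER_LINEAR,
               'cubic': cv2.INTER_CUBIC, 'lanczos': cv2.INTER_LANCZOS4}
    h, w = image.shape[:2]
    small = cv2.resize(image, (int(w / scale), int(h / scale)),
                       interpolation=cv2.INTER_AREA)
    return {name: snr_db(image, cv2.resize(small, (w, h), interpolation=flag))
            for name, flag in kernels.items()}
```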

Optimization of Image Tracking Algorithm Used in 4D Radiation Therapy (4차원 방사선 치료시 영상 추적기술의 최적화)

  • Park, Jong-In;Shin, Eun-Hyuk;Han, Young-Yih;Park, Hee-Chul;Lee, Jai-Ki;Choi, Doo-Ho
    • Progress in Medical Physics / v.23 no.1 / pp.8-14 / 2012
  • In order to develop a patient respiratory management system including a biofeedback function for 4-dimensional radiation therapy, this study investigated an optimal tracking algorithm for a moving target using an IR (infrared) camera as well as a commercial camera. A tracking system was developed with LabVIEW 2010. Motion phantom images were acquired with a camera (IR or commercial). After the acquired images were converted to binary images by applying a threshold, several edge enhancement methods, such as Sobel, Prewitt, Differentiation, Sigma, Gradient, and Roberts, were applied. A target pattern was defined in the images, and the image acquired from the moving target was tracked by matching the pre-defined tracking pattern. During image matching, the coordinate of the tracking point was recorded. To assess the performance of the tracking algorithms, a score representing the accuracy of pattern matching was defined. To compare the algorithms objectively, the experiments were repeated three times for 5 minutes for each algorithm. The average value and standard deviation (SD) of the score were automatically calculated and saved in ASCII format. The score of thresholding alone was 706, with a standard deviation of 84. The averages and SDs for the algorithms that combined an edge detection method with thresholding were 794 and 64 for Sobel, 770 and 101 for Differentiation, 754 and 85 for Gradient, 763 and 75 for Prewitt, 777 and 93 for Roberts, and 822 and 62 for Sigma, respectively. According to the score analysis, the most efficient tracking algorithm is the Sigma method. Therefore, 4-dimensional radiation therapy is expected to be more efficient if thresholding and the Sigma edge detection method are used together for target tracking.
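
The per-frame tracking loop described in this abstract (binarize with a threshold, enhance edges, then locate a pre-defined pattern and record its coordinate together with a match score) can be sketched with OpenCV as follows; the threshold value, the Sobel edge operator, and the normalized-correlation score are assumptions standing in for the LabVIEW implementation and its score scale.

```python
import cv2

def track_target(frame_gray, template_edges, threshold=128):
    """Binarize the frame, compute a Sobel edge magnitude, and locate the
    edge template by normalized cross-correlation; returns the best match
    position and its score. The template must be a float32 edge map."""
    _, binary = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    gx = cv2.Sobel(binary, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    result = cv2.matchTemplate(edges, template_edges, cv2.TM_CCOEFF_NORMED)
    _, score, _, best_loc = cv2.minMaxLoc(result)
    return best_loc, score
```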

Single Image Dehazing Based on Depth Map Estimation via Generative Adversarial Networks (생성적 대립쌍 신경망을 이용한 깊이지도 기반 연무제거)

  • Wang, Yao;Jeong, Woojin;Moon, Young Shik
    • Journal of Internet Computing and Services / v.19 no.5 / pp.43-54 / 2018
  • Images taken in hazy weather are characterized by low contrast and poor visibility. The process of reconstructing a clear-weather image from a hazy image is called dehazing. The main challenge of image dehazing is to estimate the transmission map or depth map of the input hazy image. In this paper, we propose a single image dehazing method that utilizes a Generative Adversarial Network (GAN) for accurate depth map estimation. The proposed GAN model is trained to learn a nonlinear mapping between input hazy images and the corresponding depth maps. With the trained model, the depth map of the input hazy image is first estimated and used to compute the transmission map. A guided filter is then utilized to preserve the important edge information of the hazy image, yielding a refined transmission map. Finally, the haze-free image is recovered via the atmospheric scattering model. Although the proposed GAN model is trained on synthetic indoor images, it can be applied to real hazy images. The experimental results demonstrate that the proposed method achieves superior dehazing results compared with state-of-the-art algorithms on both real and synthetic hazy images, in terms of quantitative and visual performance.
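
The final recovery step mentioned above, inverting the atmospheric scattering model I = J·t + A·(1 - t) once the transmission map is known, is standard and can be sketched as follows; estimating the atmospheric light from the brightest pixel and clamping the transmission at a floor t0 are common heuristics, not details taken from the paper.

```python
import numpy as np

def recover_scene(hazy, transmission, t0=0.1):
    """Invert I = J * t + A * (1 - t) to recover the haze-free image J, given a
    per-pixel transmission map; A is a crude brightest-pixel estimate."""
    I = hazy.astype(float) / 255.0
    A = I.reshape(-1, I.shape[-1]).max(axis=0)            # per-channel atmospheric light
    t = np.clip(transmission, t0, 1.0)[..., np.newaxis]   # avoid division by near-zero
    J = (I - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```

The transmission map itself is conventionally derived from the estimated depth d as t = exp(-beta * d) for a scattering coefficient beta, which is the standard way a depth map is converted to a transmission map under this model.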