• Title/Summary/Keyword: Image extraction


Adaptive Extraction Method for Phase Foreground Region in Laser Interferometry of Gear

  • Xian Wang;Yichao Zhao;Chaoyang Ju;Chaoyong Zhang
    • Current Optics and Photonics
    • /
    • v.7 no.4
    • /
    • pp.387-397
    • /
    • 2023
  • Tooth surface shape error is an important parameter in gear accuracy evaluation. When tooth surface shape error is measured by laser interferometry, the gear interferogram is highly distorted and the gray-level distribution is not uniform. It is therefore important for gear interferometry to extract the foreground region from the gear interference fringe image directly and accurately. This paper presents an approach to foreground extraction in gear interference images that leverages the sinusoidal variation of the interference fringes. A gray-level mask with an adaptive threshold is established to capture the relevant features, while a local variance evaluation function is employed to analyze the fluctuation state of the interference image and derive a repair mask. By combining these masks, the foreground region is extracted directly. Qualitative and quantitative evaluations compare the proposed algorithm with both reference results and traditional approaches. The experimental findings reveal a remarkable degree of agreement between the algorithm and the reference results. This method therefore shows great potential for widespread application in the foreground extraction of gear interference images.
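The combination of an adaptive gray-level mask and a local-variance mask can be sketched as follows. This is a minimal numpy illustration of the general idea, not the paper's algorithm; the window size and variance threshold factor are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fringe_foreground_mask(img, win=5, k=1.0):
    """Sketch: union of an adaptive gray-level mask (pixel above its
    local mean) and a local-variance mask (fringes fluctuate strongly,
    flat background does not). `win` and `k` are illustrative."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    windows = sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-1, -2))
    local_var = windows.var(axis=(-1, -2))
    gray_mask = img > local_mean          # adaptive gray-level mask
    var_mask = local_var > k * local_var.mean()
    return gray_mask | var_mask
```

On a synthetic image whose left half carries sinusoidal fringes and whose right half is flat background, the union mask marks most of the fringe region and none of the background.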

A Study on the Using Drone Images in Cadastral Resurvey (지적재조사 드론 영상 활용방안 연구)

  • Keo Bae Lim;Seoung Hun Bae;Won Hui Lee;Boeun Kim;Yeongju Yu;Jin Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.259-267
    • /
    • 2023
  • At a time when demand for drones is increasing, this study sought ways to use drone images for the efficient promotion of cadastral resurvey. To this end, the technical and legal status of drone imagery was reviewed to assess its applicability to cadastral resurvey. Subsequently, an experiment was conducted in a project district to examine whether drone images could be applied to boundary extraction, the primary process of cadastral resurvey. The experiment showed that boundary extraction from images is possible, although in some cases it fails due to field conditions or image quality. Therefore, cases where boundary extraction is possible should first be applied to cadastral resurvey, while solutions are sought for the remaining cases. In particular, the image-quality problem stems partly from current technology but also from existing drone equipment, so a standard for drone calibration should be established. Finally, a cadastral resurvey surveying procedure using drones is presented.

An Effective Extraction Algorithm of Pulmonary Regions Using Intensity-level Maps in Chest X-ray Images (흉부 X-ray 영상에서의 명암 레벨지도를 이용한 효과적인 폐 영역 추출 알고리즘)

  • Jang, Geun-Ho;Park, Ho-Hyun;Lee, Seok-Lyong;Kim, Deok-Hwan;Lim, Myung-Kwan
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.7
    • /
    • pp.1062-1075
    • /
    • 2010
  • In medical image applications, the difference in intensity is widely used for image segmentation and feature extraction; a well-known method is the threshold technique, which determines a threshold value and generates a binary image based on it. A frequently used threshold technique is the Otsu algorithm, which provides efficient processing and an effective selection criterion for choosing the threshold value. However, applying the Otsu algorithm to chest X-ray images does not yield good segmentation results, because various organic structures around the lung regions, such as ribs and blood vessels, blur the distribution of intensity levels. To overcome this ambiguity, we propose an effective algorithm for extracting pulmonary regions that applies the Otsu algorithm after removing the background of an X-ray image, constructs intensity-level maps, and uses them to segment the image. To verify the effectiveness of our method, we compared it with the existing 1-dimensional and 2-dimensional Otsu algorithms, as well as with results judged by an expert's naked eye. The experiments showed that our method extracted pulmonary regions more accurately than the Otsu methods and produced results similar to those of the naked eye.
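The baseline this paper builds on is standard 1-D Otsu thresholding, which picks the gray level that maximizes between-class variance of the histogram. A compact numpy version:

```python
import numpy as np

def otsu_threshold(img):
    """Standard 1-D Otsu: return the gray level that maximizes
    the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))
```

On a cleanly bimodal image this lands between the two modes; the paper's point is that raw chest X-rays are not bimodal enough for this to work without the background-removal and intensity-level-map steps.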

The Entrance Authentication System Using Real-Time Object Extraction and the RFID Tag (얼굴 인식과 RFID를 이용한 실시간 객체 추적 및 인증 시스템)

  • Jung, Young Hoon;Lee, Chang Soo;Lee, Kwang Hyung;Jun, Moon Seog
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.4 no.4
    • /
    • pp.51-62
    • /
    • 2008
  • In this paper, the proposed system improves the security of general RFID systems through a two-step authentication procedure. After the RFID tag is authenticated, the system additionally extracts characteristic information from the user's image, captured by a camera, as a second authentication factor. The proposed system thus reinforces the automatic entrance-and-exit authentication system by combining recognition of the RFID tag with the characteristic information extracted from the camera image. The RFID system, which uses an active tag and reader in the 2.4 GHz band, can recognize tags in various output modes. In addition, when the RFID read fails, the system is designed to fall back on the characteristic information of the user's image, comparing the color, outline, and input-image information against records previously stored in the database. Experimental results show that the system achieves more accurate results than a single-factor authentication system by combining the RFID tag with color-characteristic information.
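The color-similarity fallback described above could be realized with something as simple as histogram intersection between the live image and the enrolled one. A minimal sketch (the bin count and the intersection metric are assumptions; the paper's actual similarity measure is not specified here):

```python
import numpy as np

def color_histogram_similarity(img_a, img_b, bins=16):
    """Sketch of a color-based fallback match: normalized 3-D RGB
    histograms compared by histogram intersection (1.0 = identical
    color distribution, 0.0 = disjoint)."""
    def norm_hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3),
                              bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()
    return float(np.minimum(norm_hist(img_a), norm_hist(img_b)).sum())
```

A threshold on this score would then decide whether the stored user record matches when the RFID read is unavailable.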

Automated Lineament Extraction and Edge Linking Using Mask Processing and Hough Transform.

  • Choi, Sung-Won;Shin, Jin-Soo;Chi, Kwang-Hoon;So, Chil-Sup
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.411-420
    • /
    • 1999
  • In geology, lineament features have been used to identify geological events, and many researchers have developed computer algorithms to recognize lineaments. We apply edge-detection filters, line-detection filters, and the Hough transform to detect edges and lines and to vectorize the extracted lineament features. First, an edge-detection filter using a first-order derivative is applied to the original image, producing a rough lineament image. Second, a line-detection filter refines this image for further processing; incorrectly detected lines are largely excluded using the variance of the pixel values composing each line. Third, a thinning process is carried out to control the thickness of the lines. Finally, the Hough transform converts the raster image to a vector one. A Landsat image was selected for lineament extraction. The result shows the lineaments well regardless of direction. However, the degree of extraction of linear features depends on the parameter values and filter patterns, so the development of new filters and a reduction in the number of parameters are required in further study.
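The final raster-to-vector step is the classical Hough transform: each foreground pixel votes for all lines (ρ, θ) passing through it, and peaks in the accumulator correspond to lineaments. A minimal numpy sketch of that voting scheme (the angular resolution is an assumption):

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Minimal Hough transform: each foreground pixel (x, y) votes
    for rho = x*cos(theta) + y*sin(theta) over all theta; line
    candidates appear as peaks in the accumulator."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*binary.shape)))   # max |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    rho = np.round(xs[:, None] * np.cos(thetas) +
                   ys[:, None] * np.sin(thetas)).astype(int) + diag
    for t in range(n_theta):
        np.add.at(acc[:, t], rho[:, t], 1)   # accumulate votes
    return acc, thetas, diag
```

A thinned vertical lineament at x = 10, for example, yields the accumulator peak (ρ = 10, θ = 0°), from which the vector segment is reconstructed.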


A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.161-166
    • /
    • 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm can be used for the preprocessing steps of optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by a smartphone camera held at an arbitrary angle is rotated by the detected angle, as if it had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, yielding a large reduction in computational time. Text location is guided by a marker line the user places over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual images for the designated characters. The resulting data can be used as OCR input, solving the most difficult part of OCR for text in natural scene images. Experimental results showed that our binarization algorithm is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while achieving better quality than the other methods.
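For context, the Sauvola baseline the paper benchmarks against thresholds each pixel using the local window mean m and standard deviation s via T = m · (1 + k·(s/R − 1)). A dense (and therefore slow, which is the paper's point) numpy version, with typical parameter values:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sauvola(img, win=15, k=0.2, R=128.0):
    """Sauvola adaptive thresholding over every pixel's local
    window: T = m * (1 + k*(s/R - 1)). win/k/R are common
    defaults, not values from the paper."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    w = sliding_window_view(padded, (win, win))
    m = w.mean(axis=(-1, -2))    # local mean
    s = w.std(axis=(-1, -2))     # local standard deviation
    T = m * (1 + k * (s / R - 1))
    return img > T               # True = background/light pixel
```

Computing m and s for every pixel is what makes such methods expensive on a phone; the proposed algorithm instead binarizes only a subset of connected components once.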

Classification of Man-Made and Natural Object Images in Color Images

  • Park, Chang-Min;Gu, Kyung-Mo;Kim, Sung-Young;Kim, Min-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.12
    • /
    • pp.1657-1664
    • /
    • 2004
  • We propose a method that classifies images into two object types: man-made and natural. A central object is extracted from each image using a central-object extraction method [1] before classification. A central object in an image is defined as a set of regions that lies around the center of the image and has a significant color distribution against its surroundings. We define three measures to classify the object images. The first measure is the energy of the edge-direction histogram, calculated from the directions of non-circular edges only. The second measure is the energy difference across directions in a Gabor filter dictionary: the maximum and minimum energies over directions are selected, and the energy difference is computed as the ratio of the maximum to the minimum. The last measure is the shape of the object, also represented by a Gabor filter dictionary; the dictionary for shape differs from the one for texture in that the former is computed from a binarized object image. The three measures are combined by majority rule, in which decisions are made by the majority. A test with 600 images showed a classification accuracy of 86%.
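The intuition behind the first measure can be sketched directly: man-made objects concentrate edge directions in a few histogram bins (high energy, i.e. sum of squared bin probabilities), while natural textures spread them out. A minimal illustration; the bin count and magnitude threshold are assumptions, and the paper's non-circular-edge filtering is omitted:

```python
import numpy as np

def direction_histogram_energy(gx, gy, bins=8, mag_thresh=1e-3):
    """Sketch of the edge-direction-histogram energy: sum of squared
    bin probabilities, ranging from 1/bins (uniform directions,
    natural texture) up to 1.0 (a single dominant direction)."""
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)[mag > mag_thresh]  # keep strong edges
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    return float(np.sum(p ** 2))
```

A straight-edged object whose gradients all point one way scores 1.0; isotropic gradients score near 1/8 with 8 bins, so a threshold between the two separates the classes.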


LVQ_Merge Clustering Algorithm for Cell Image Extraction (세포 영상 추출을 위한 LVQ_Merge 군집화 알고리즘)

  • Kwon, Hee Yong;Kim, Min Su;Choi, Kyung Wan;Kwack, Ho Jic;Yu, Suk Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.6
    • /
    • pp.845-852
    • /
    • 2017
  • In this paper, we propose a binarization algorithm using an LVQ-Merge clustering method for fast and accurate extraction of cells from cell images. The proposed method clusters the pixel data of a given image using LVQ to remove noise, then divides the result into two clusters by applying a hierarchical clustering algorithm to improve the accuracy of binarization. As a result, the execution speed is somewhat slower than that of the conventional LVQ or Otsu algorithm, but the binarization results are of very good quality and almost identical to those judged by the human eye. The bigger and more complex the image, the better the binarization quality. This suggests that the proposed method is useful in medical image processing, where high-resolution and very large medical images must be processed in real time. In addition, the method can use more than two clusters, so it can also serve to complement a hierarchical clustering algorithm.
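The two-stage idea, quantize gray levels with a small competitive-learning codebook, then hierarchically merge codes down to two clusters, can be sketched as follows. This is a loose illustration of the scheme, not the paper's algorithm; the codebook size, learning rate, and merge criterion are all assumptions:

```python
import numpy as np

def lvq_merge_binarize(img, n_codes=8, iters=5, lr=0.1, seed=0):
    """Sketch of LVQ-then-merge binarization: (1) unsupervised
    competitive learning moves a small codebook toward the gray-level
    data; (2) the sorted codes are merged hierarchically (closest
    pair first) until two clusters remain; (3) threshold midway."""
    rng = np.random.default_rng(seed)
    x = img.ravel().astype(float)
    codes = rng.choice(x, n_codes)          # init from data samples
    for _ in range(iters):                  # competitive learning pass
        for v in rng.choice(x, 200):
            i = np.argmin(np.abs(codes - v))
            codes[i] += lr * (v - codes[i])
    codes = np.sort(codes)
    clusters = [[c] for c in codes]
    while len(clusters) > 2:                # merge closest neighbors
        gaps = [np.mean(clusters[i + 1]) - np.mean(clusters[i])
                for i in range(len(clusters) - 1)]
        i = int(np.argmin(gaps))
        merged = clusters.pop(i + 1)
        clusters[i] = clusters[i] + merged
    thresh = (np.mean(clusters[0]) + np.mean(clusters[1])) / 2
    return img > thresh
```

The extra quantization pass is why this runs slower than plain Otsu, but the merge step makes the final two-cluster split less sensitive to noise pixels.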

Automatic Disk Disease Recognition based on Feature Vector in T-L Spine Magnetic Resonance Image (척추 자기 공명 영상에서 특징 벡터에 기반 한 디스크 질환의 자동 인식)

  • 홍재성;이성기
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.3
    • /
    • pp.233-242
    • /
    • 1998
  • In anatomical terms, magnetic resonance images offer more accurate information than other medical images such as X-ray, ultrasonic, and CT images. This paper introduces a method that recognizes disk diseases from spine MR images. Image enhancement, image segmentation, and feature extraction are performed on sagittal- and axial-plane images to separate the disk region. A template-matching method is then used to extract the disease region from axial-plane images. Finally, the disease feature vectors are integrated and disease discrimination is performed. Experimental results show that the proposed method discriminates between normal and diseased disks with a considerable recognition ratio.
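The template-matching step typically means sliding a template over the image and scoring each position by normalized cross-correlation (NCC). A minimal sketch of that scoring (the paper's exact matching criterion is not specified here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def match_template(img, tmpl):
    """Normalized cross-correlation template matching: score every
    window against the template and return the top-left corner of
    the best match (NCC = 1.0 for an exact match)."""
    th, tw = tmpl.shape
    w = sliding_window_view(img.astype(float), (th, tw))
    wz = w - w.mean(axis=(-1, -2), keepdims=True)   # zero-mean windows
    tz = tmpl - tmpl.mean()                          # zero-mean template
    num = (wz * tz).sum(axis=(-1, -2))
    den = np.sqrt((wz ** 2).sum(axis=(-1, -2)) * (tz ** 2).sum())
    ncc = np.where(den > 0, num / np.where(den > 0, den, 1), 0)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```

Normalizing by window mean and energy makes the score robust to the brightness variations typical of MR slices, which plain correlation is not.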


A Method for Identifying Tubercle Bacilli using Neural Networks

  • Lin, Sheng-Fuu;Chen, Hsien-Tse
    • Journal of Biomedical Engineering Research
    • /
    • v.30 no.3
    • /
    • pp.191-198
    • /
    • 2009
  • Phlegm smear testing for acid-fast bacilli (AFB) requires careful examination of tubercle bacilli under a microscope to distinguish between positive and negative findings. The biggest weakness of this method is the visual limitations of the examiners. It is also time-consuming, and mistakes may easily occur. This paper proposes a method of identifying tubercle bacilli that uses a computer instead of a human. To address the challenges of AFB testing, this study designs and investigates image systems that can be used to identify tubercle bacilli. The proposed system uses an electronic microscope to capture digital images that are then processed through feature extraction, image segmentation, image recognition, and neural networks to analyze tubercle bacilli. The proposed system can detect the amount of tubercle bacilli and find their locations. This paper analyzes 184 tubercle bacilli images. Fifty images are used to train the artificial neural network, and the rest are used for testing. The proposed system has a 95.6% successful identification rate, and only takes 0.8 seconds to identify an image.
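The recognition stage of such a pipeline is, at its core, a classifier over extracted features. As a toy stand-in (not the paper's network; the features, single-layer architecture, and training parameters are all illustrative), a logistic classifier over two hypothetical shape features can be sketched as:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500, seed=0):
    """Toy single-layer classifier over extracted features
    (e.g. hypothetical area/elongation values), trained by
    gradient ascent on the logistic log-likelihood."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1] + 1) * 0.01
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-Xb @ w))           # sigmoid output
        w += lr * Xb.T @ (y - p) / len(X)       # gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1 / (1 + np.exp(-Xb @ w)) > 0.5).astype(int)
```

The paper's system trains on 50 labeled bacilli images and tests on the remaining 134; the sketch above only shows the train/predict mechanics on separable feature clusters.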