• Title/Summary/Keyword: Image extraction


Automatic Extraction of Component Inspection Regions from Printed Circuit Board by Image Clustering (영상 클러스터링에 의한 인쇄회로기판의 부품검사영역 자동추출)

  • Kim, Jun-Oh;Park, Tae-Hyoung
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.61 no.3
    • /
    • pp.472-478
    • /
    • 2012
  • The inspection machine in a PCB (printed circuit board) assembly line checks assembly errors by inspecting the images inside the component inspection region. The component inspection region consists of the region of the component package and the region of soldering. It is necessary to extract these regions automatically for the auto-teaching system of the inspection machine. We propose an image segmentation method to extract the component inspection regions automatically from images of the PCB. The acquired image is transformed to the HSI color model and then segmented into several regions by a clustering method. We develop a modified K-means algorithm to increase the accuracy of extraction, and newly propose heuristics for generating the initial clusters and merging the final clusters. Vertical and horizontal projection is also developed to distinguish the region of the component package from the region of soldering. Experimental results are presented to verify the usefulness of the proposed method.
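A minimal sketch of the clustering-plus-projection idea described in this abstract, assuming OpenCV's HSV conversion as a stand-in for the HSI model and plain scikit-learn K-means in place of the paper's modified algorithm; the cluster count, saturation heuristic, and projection threshold are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_pcb_regions(bgr_image, k=4, proj_thresh=0.1):
    # HSV conversion used here as a stand-in for the HSI color model in the paper.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, w, _ = hsv.shape
    pixels = hsv.reshape(-1, 3).astype(np.float32)

    # Plain K-means; the paper adds heuristics for initial clusters and cluster merging.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    label_map = labels.reshape(h, w)

    # Assumption: the cluster with the highest mean saturation corresponds to components.
    sat = hsv[:, :, 1].astype(np.float32)
    comp_cluster = max(range(k), key=lambda c: sat[label_map == c].mean())
    mask = (label_map == comp_cluster).astype(np.float32)

    # Vertical and horizontal projections of the mask give candidate region bounds.
    col_profile = mask.mean(axis=0)   # projection onto the horizontal axis
    row_profile = mask.mean(axis=1)   # projection onto the vertical axis
    cols = np.where(col_profile > proj_thresh)[0]
    rows = np.where(row_profile > proj_thresh)[0]
    if len(cols) == 0 or len(rows) == 0:
        return None
    return (rows[0], rows[-1], cols[0], cols[-1])  # top, bottom, left, right
```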

Medical Image Retrieval Using Feature Extraction Based on Wavelet Transform (웨이블렛 변환 기반의 특징 검출을 이용한 의료영상 검색)

  • Lee, H.S.;Ma, K.Y.;Ahn, Y.B.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1998 no.11
    • /
    • pp.321-322
    • /
    • 1998
  • In this paper, a medical image retrieval method using feature extraction based on the wavelet transform is proposed. We use the energy of the coefficients produced by the wavelet transform. The proposed retrieval algorithm consists of two retrieval stages. First, we build an energy map from the wavelet coefficients of a query image and compare it to that of each database image. Then we use edge information of the query image to retrieve, once more, the images selected in the first stage. Finally, the retrieved images are displayed on screen.
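A minimal sketch of the first retrieval stage described above: a wavelet decomposition of the query image is reduced to a per-subband energy vector and compared against the vectors of database images. The wavelet family, decomposition level, and distance metric are assumptions for illustration.

```python
import numpy as np
import pywt

def wavelet_energy_map(gray_image, wavelet="haar", level=3):
    coeffs = pywt.wavedec2(gray_image.astype(np.float64), wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]          # approximation subband
    for detail_level in coeffs[1:]:              # (cH, cV, cD) per level
        energies.extend(np.sum(band ** 2) for band in detail_level)
    energies = np.array(energies)
    return energies / energies.sum()             # normalise so images are comparable

def rank_by_energy(query, database):
    # database: dict mapping name -> grayscale image; returns names sorted by similarity
    q = wavelet_energy_map(query)
    dists = {name: np.linalg.norm(q - wavelet_energy_map(img))
             for name, img in database.items()}
    return sorted(dists, key=dists.get)
```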


DIGITAL IMAGE HANDLING BY FINITE ELEMENT RETINA FOR PLANT GROWTH MONITORING

  • Murase, Haruhiko;Nishiura, Yoshifumi
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.765-772
    • /
    • 1996
  • The objectives of this study were to develop an application method for a numerical retina using the finite element model and to investigate its image feature extraction performance in comparison with textural analysis. Using a plant community of radish sprouts, the finite element retina showed excellent resolution. A sensitivity analysis of the finite element retina from an engineering point of view was discussed, and its importance was pointed out in terms of extracting effective image features of a plant community. Technical details of maximizing the sensitivity of the finite element retina to a populated plant canopy were also discussed.


Computer Vision System for Automatic Grading of Ginseng - Development of Image Processing Algorithms - (인삼선별의 자동화를 위한 컴퓨터 시각장치 - 등급 자동판정을 위한 영상처리 알고리즘 개발 -)

  • 김철수;이중용
    • Journal of Biosystems Engineering
    • /
    • v.22 no.2
    • /
    • pp.227-236
    • /
    • 1997
  • Manual grading and sorting of red ginsengs are inherently unreliable due to their subjective nature. A computerized technique based on optical and geometrical characteristics was studied for objective quality evaluation. Spectral reflectance of three categories of red ginseng - "Chunsam", "Chisam", and "Yangsam" - was measured and analyzed. Variation of reflectance among parts of a single ginseng was more significant than variation among the quality categories. A PC-based image processing algorithm was developed to extract geometrical features such as length and thickness of the body, length and number of roots, and positions of the head and branch points. The algorithm consisted of image segmentation, Euclidean distance calculation, skeletonization, and feature extraction. Performance of the algorithm was evaluated using sample ginseng images and found to be mostly successful.
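A minimal sketch of the pipeline steps named in this abstract (segmentation, Euclidean distance, skeletonization, feature extraction), using scipy/scikit-image as stand-ins for the paper's PC-based implementation; Otsu thresholding and the length/thickness estimates are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def ginseng_geometry(gray_image):
    # 1. Segmentation: assume the ginseng appears brighter than the background.
    mask = gray_image > threshold_otsu(gray_image)

    # 2. Euclidean distance from every object pixel to the background.
    dist = distance_transform_edt(mask)

    # 3. Skeletonization of the binary mask.
    skel = skeletonize(mask)

    # 4. Simple geometric features: skeleton length and local thickness.
    length_px = int(skel.sum())                         # skeleton length in pixels
    thickness_px = 2.0 * dist[skel].mean() if length_px else 0.0
    return {"length_px": length_px, "mean_thickness_px": thickness_px}
```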


A Study of Restoration and Feature Extraction (지문영상의 복원과정과 특징점추출에 관한 연구)

  • 한백룡;이대영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.15 no.7
    • /
    • pp.535-544
    • /
    • 1990
  • In this paper, we present the restoration and feature extraction of fingerprint images. The purposes of restoring a fingerprint image are to compensate for distortion caused by noise and to preserve the various features of the fingerprint image. To extract the central point of the fingerprint we used a sample matrix, and to restore the fingerprint we used the direction information of the thinned image and the gray scale of the original image.
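The paper works with a thinned (one-pixel-wide) ridge image; a common way to pull feature points from such an image is the crossing-number test sketched below. This is a standard technique offered for illustration only, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def crossing_number_minutiae(thinned):
    """thinned: 2-D binary array, ridge pixels == 1, one pixel wide."""
    endings, bifurcations = [], []
    rows, cols = thinned.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            if not thinned[y, x]:
                continue
            # 8 neighbours in circular order around (y, x)
            p = [thinned[y - 1, x], thinned[y - 1, x + 1], thinned[y, x + 1],
                 thinned[y + 1, x + 1], thinned[y + 1, x], thinned[y + 1, x - 1],
                 thinned[y, x - 1], thinned[y - 1, x - 1]]
            # crossing number: half the number of transitions around the pixel
            cn = sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((y, x))       # ridge ending
            elif cn == 3:
                bifurcations.append((y, x))  # ridge bifurcation
    return endings, bifurcations
```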


Images Automatic Annotation: Multi-cues Integration (영상의 자동 주석: 멀티 큐 통합)

  • Shin, Seong-Yoon;Ahn, Eun-Mi;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.589-590
    • /
    • 2010
  • These images constitute a considerable database, and their semantic meaning is often well represented by the surrounding text and links. However, only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our multi-cue integration method for image annotation, and show experimentally that it is superior to the other methods.
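A minimal sketch of the frequency cues mentioned in this abstract: candidate keyphrases for an image are scored by combining their frequency in the text immediately surrounding the image (local) with their frequency across the whole page (overall). The tokenisation and the weighting factor are assumptions for illustration; the paper additionally integrates low-level features and page tags.

```python
import re
from collections import Counter

def score_keyphrases(surrounding_text, page_text, alpha=0.7, top_k=5):
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    local = Counter(tokenize(surrounding_text))      # local word frequency
    overall = Counter(tokenize(page_text))           # overall word frequency
    n_local = sum(local.values()) or 1
    n_overall = sum(overall.values()) or 1
    scores = {w: alpha * local[w] / n_local + (1 - alpha) * overall[w] / n_overall
              for w in local}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```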


AUTOMATIC BUILDING EXTRACTION BASED ON MULTI-SOURCE DATA FUSION

  • Lu, Yi Hui;Trinder, John
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.248-250
    • /
    • 2003
  • An automatic approach and strategy for extracting building information from aerial images using combined image analysis and interpretation techniques is described in this paper. A dense DSM is obtained by stereo image matching. Multi-band classification, the DSM, texture segmentation, and the Normalised Difference Vegetation Index (NDVI) are used to reveal areas of interest for buildings. Then, based on the derived approximate building areas, a shape modelling algorithm based on the level-set formulation of curve and surface motion is used to precisely delineate the building boundaries. Data fusion based on the Dempster-Shafer technique is used to interpret knowledge from several data sources of the same region simultaneously, finding the intersection of propositions on the extracted information derived from the datasets, together with their associated probabilities. A number of test areas, which include buildings of different sizes, shapes, and roof colours, have been investigated. The tests are encouraging and demonstrate that the system is effective for building extraction and for determining more accurate elevations of the terrain surface.
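A minimal sketch of the Dempster-Shafer combination step mentioned above: basic probability masses from two data sources over the hypotheses "building" and "non-building" (plus their union, i.e. total uncertainty) are fused with Dempster's rule. The example masses and source names are illustrative, not values from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dict mapping frozenset of hypotheses -> mass, each summing to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalise by the non-conflicting mass (Dempster's rule of combination).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: evidence from a DSM/texture source and from an NDVI source.
B, N = frozenset({"building"}), frozenset({"non-building"})
theta = B | N                                  # frame of discernment (uncertainty)
m_dsm = {B: 0.6, N: 0.1, theta: 0.3}
m_ndvi = {B: 0.5, N: 0.2, theta: 0.3}
print(dempster_combine(m_dsm, m_ndvi))
```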


Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder (Deep Convolutional Auto-encoder를 이용한 환경 변화에 강인한 장소 인식)

  • Oh, Junghyun;Lee, Beomhee
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.8-13
    • /
    • 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the elemental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in a changing environment is a challenging problem, since the same place looks different depending on the time, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment. The training process is performed by minimizing the loss between the predicted image and the desired image. After training, the encoding part of the network transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in a changing environment. Experiments were conducted to prove the effectiveness of the proposed method, and the results showed that our method outperformed existing methods.
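A minimal PyTorch sketch of the idea described above: a convolutional auto-encoder is trained to map an image from one environment to the paired image of the desired environment (MSE loss), and after training the encoder output serves as a condition-invariant descriptor. The layer sizes and the 64x64 input resolution are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = ConvAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy paired batch: query-environment images and desired-environment images.
query = torch.rand(8, 3, 64, 64)
desired = torch.rand(8, 3, 64, 64)

pred, _ = model(query)
loss = loss_fn(pred, desired)          # train to predict the desired environment
optimizer.zero_grad()
loss.backward()
optimizer.step()

with torch.no_grad():
    _, latent = model(query)
    descriptor = latent.flatten(1)      # condition-invariant place descriptor
```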

Medical Image Classification using Pre-trained Convolutional Neural Networks and Support Vector Machine

  • Ahmed, Ali
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.6
    • /
    • pp.1-6
    • /
    • 2021
  • Recently, pre-trained convolutional neural networks (CNNs) have been widely applied to medical image classification. These models can be utilised in three different ways: for feature extraction, to reuse the architecture of the pre-trained model, and to train some layers while freezing others. In this study, the pre-trained ResNet18 CNN model is used for feature extraction, followed by a multi-class support vector machine as the main classifier of the medical images. Our proposed classification method was implemented on the Kvasir and PH2 medical image datasets. The overall accuracy was 93.38% and 91.67% for the Kvasir and PH2 datasets, respectively. In terms of classification results and performance, our proposed method outperformed some of the related methods in this area of study.
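A minimal sketch of the pipeline described above, assuming torchvision and scikit-learn: a pre-trained ResNet18 is used purely as a fixed feature extractor and a multi-class SVM is trained on the resulting features. The tensors below are placeholders standing in for the actual Kvasir/PH2 data.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Pre-trained ResNet18 with the classification head removed -> 512-D features.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

def extract_features(images):
    # images: tensor of shape (N, 3, 224, 224), already normalised for ImageNet
    with torch.no_grad():
        feats = feature_extractor(images)            # (N, 512, 1, 1)
    return feats.flatten(1).numpy()                  # (N, 512)

# Placeholder data standing in for the medical image dataset.
train_images, train_labels = torch.rand(20, 3, 224, 224), [i % 2 for i in range(20)]
test_images = torch.rand(4, 3, 224, 224)

svm = SVC(kernel="rbf", C=1.0)                       # multi-class SVM classifier
svm.fit(extract_features(train_images), train_labels)
predictions = svm.predict(extract_features(test_images))
```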

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim;Ga-Bin Nam;Young-Seop Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.149-154
    • /
    • 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. In particular, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses multi-focus images into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.
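A minimal PyTorch sketch of a Spatial Pyramid Pooling block of the kind the feature-extraction module above incorporates: feature maps are pooled at several grid sizes and concatenated, so inputs of different spatial sizes yield a fixed-length output. The pyramid levels (1, 2, 4) are an assumption, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        # x: (N, C, H, W) feature map of any spatial size
        pooled = [F.adaptive_max_pool2d(x, output_size=l).flatten(1)
                  for l in self.levels]
        return torch.cat(pooled, dim=1)   # (N, C * sum(l*l for l in levels))

spp = SPP()
features_a = spp(torch.rand(2, 64, 32, 32))
features_b = spp(torch.rand(2, 64, 48, 48))
assert features_a.shape == features_b.shape   # fixed length regardless of input size
```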
