• Title/Summary/Keyword: Segmentation and feature extraction

An Intelligent Automatic Early Detection System of Forest Fire Smoke Signatures using Gaussian Mixture Model

  • Yoon, Seok-Hwan;Min, Joonyoung
    • Journal of Information Processing Systems
    • /
    • v.9 no.4
    • /
    • pp.621-632
    • /
    • 2013
  • The most important requirements for a forest fire detection system are the exact extraction of smoke from an image and the ability to clearly distinguish smoke from visually similar phenomena such as clouds and fog. This research presents an intelligent forest fire detection algorithm based on image processing with the Gaussian Mixture Model (GMM), which can be applied to detect smoke in a forest at the earliest possible time. The GMM is made adaptive, so that its parameters can track changing illumination, and made more complex, so that it can represent multimodal backgrounds more accurately for smoke plume segmentation in the forest. The paper also suggests a way to classify smoke plumes via feature extraction using HSL (Hue, Saturation, Lightness/Luminance) color space analysis.
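
The entry above combines adaptive GMM background modeling with HSL color analysis. Below is a minimal, hedged sketch of that general pipeline using OpenCV's MOG2 background subtractor and an HLS conversion; the video path and the saturation/lightness thresholds are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("forest.mp4")          # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg.apply(frame)                 # adaptive multimodal GMM foreground
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    h, l, s = cv2.split(hls)
    # Smoke tends to be low-saturation and mid-to-high lightness; these
    # thresholds are illustrative, not the paper's values.
    smoke_like = (s < 60) & (l > 100)
    candidate = fg_mask.astype(bool) & smoke_like
    if candidate.mean() > 0.01:               # crude alarm heuristic
        print("possible smoke plume in this frame")
cap.release()
```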

A Study on the Feature Region Segmentation for the Analysis of Eye-fundus Images (안저영상(眼低映像) 해석(解析)을 위한 특징영성(特徵領域)의 분할(分割)에 관한 연구(硏究))

  • Kang, Jeon-Kwun;Kim, Seung-Bum;Ku, Ja-Yl;Han, Young-Hwan;Hong, Hong-Seung
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1993 no.11
    • /
    • pp.27-30
    • /
    • 1993
  • Information about retinal blood vessels can be used in grading disease severity or as part of the process of automated diagnosis of diseases with ocular manifestations. In this paper, we address the problem of detecting retinal blood vessels and the optic disk (papilla) in eye-fundus images. We introduce an algorithm for feature extraction based on fuzzy c-means clustering (FCM). The results are compared to those obtained with other methods. The automatic detection of retinal blood vessels and the optic disk in eye-fundus images could help physicians in diagnosing ocular diseases.
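
As a rough illustration of the fuzzy c-means (FCM) clustering this paper builds on, here is a minimal NumPy sketch that clusters pixel intensities into fuzzy classes; the synthetic image stand-in and the number of clusters are assumptions, not the authors' setup.

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=50, eps=1e-5, seed=0):
    """Cluster 1-D samples x into c fuzzy clusters with fuzzifier m."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                  # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)             # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1))
        u_new /= u_new.sum(axis=0)                      # standard FCM membership update
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return centers, u

# Synthetic stand-in for a grayscale fundus image (hypothetical data).
img = np.clip(np.random.normal(128, 40, (64, 64)), 0, 255)
centers, u = fcm(img.ravel().astype(float), c=3)
labels = u.argmax(axis=0).reshape(img.shape)            # hard label per pixel
```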

Algorithm for Extract Region of Interest Using Fast Binary Image Processing (고속 이진화 영상처리를 이용한 관심영역 추출 알고리즘)

  • Cho, Young-bok;Woo, Sung-hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.4
    • /
    • pp.634-640
    • /
    • 2018
  • In this paper, we propose an algorithm for automatic extraction of a region of interest (ROI) from medical X-ray images. The proposed algorithm uses segmentation, feature extraction, and reference-image matching to detect lesion sites in the input image. The extracted region is matched against lesion images in a reference DB, and the matched results are refined automatically using Kalman-filter-based fitness feedback. To extract the growth plate, the algorithm extracts the contour of the hand from the left-hand X-ray input image and creates candidate regions using multi-scale Hessian-matrix-based segmentation. As a result, the proposed algorithm completed the ROI segmentation phase in 0.02 seconds and ROI extraction from the segmented image in 0.53 seconds, and the reinforcement phase performed very accurate image segmentation in 0.49 seconds.
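
Below is a hedged sketch of the kind of fast binarization and ROI-candidate extraction the abstract describes, using Otsu thresholding and contour analysis in OpenCV; the file name and the bounding-box ROI step are illustrative assumptions, and the Kalman-filter feedback and Hessian-based candidate generation are omitted.

```python
import cv2
import numpy as np

img = cv2.imread("hand_xray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical path
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The largest external contour approximates the hand silhouette; its
# bounding box serves as the ROI candidate in this sketch.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(hand)
roi = img[y:y + h, x:x + w]
```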

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.230-240
    • /
    • 2022
  • Sharing videos online is an emerging and important concept in applications such as surveillance and mobile video search. There is therefore a need for a personalized web video retrieval system that can explore relevant videos and help people searching for specific content in large video collections. To support this, features are computed from videos together with dimensionality reduction, exploring discriminative aspects of the scene based on shape, histogram, texture, object annotation, coordinates, color, and contour data. Dimensionality reduction mainly depends on feature extraction and feature selection in multi-label retrieval from multimedia data. Many researchers have implemented techniques to reduce dimensionality based on the visual features of video data, but each technique has advantages and disadvantages for video retrieval with advanced features. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the projected-space features. The proposed approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments performed on a synthetic data set demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methodologies.
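
The abstract describes learning a projection matrix to reduce the dimensionality of visual features before retrieval. As a loose, generic stand-in (not the NIDRSLA algorithm itself), the sketch below reduces synthetic frame descriptors with PCA and retrieves nearest neighbors in the projected space; all data and dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Synthetic per-frame visual descriptors (hypothetical data).
features = np.random.rand(500, 1024)

pca = PCA(n_components=64)                 # learned projection matrix: pca.components_
reduced = pca.fit_transform(features)

index = NearestNeighbors(n_neighbors=5).fit(reduced)
query = pca.transform(np.random.rand(1, 1024))
dist, idx = index.kneighbors(query)        # indices of the 5 closest frames
```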

Slab Region Localization for Text Extraction using SIFT Features (문자열 검출을 위한 슬라브 영역 추정)

  • Choi, Jong-Hyun;Choi, Sung-Hoo;Yun, Jong-Pil;Koo, Keun-Hwi;Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.5
    • /
    • pp.1025-1034
    • /
    • 2009
  • In a steel-making production line, each steel slab is given a unique identification number, the slab management number (SMN), which carries information about the use of the slab. Identification of the SMN has been done by humans for several years, but this is expensive, inaccurate, and a heavy burden on the workers, so an automatic recognition system is desirable to improve efficiency. Generally, a recognition system consists of text localization, text extraction, character segmentation, and character recognition, and for exact SMN identification every stage must succeed. Text localization in particular is a crucial and difficult stage: because of the many text-like patterns in a complex background and the high fuzziness between the slab and the background, directly extracting the text region is difficult. If the slab region including the SMN can be detected precisely, the text localization algorithm can be simpler and the processing time of the overall recognition system can be reduced. This paper describes slab region localization using SIFT (Scale Invariant Feature Transform) features. First, the SIFT algorithm is applied to the captured background and slab images, and the features of the two images are matched by a nearest neighbor (NN) algorithm. Because the correct matching rate can be low, the geometric locations of the matched feature points are used to remove incorrect matches. Finally, a search rectangle method is applied to the correct matches, and the top and side boundaries of the slab region are determined. This reduces the search region for extracting the SMN from the slab image. In most cases the search region for text extraction is fixed heuristically [1][2]; the proposed algorithm is more analytic because the search region is not fixed and the slab region is searched over the whole image. Experimental results show that the proposed algorithm has good performance.
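
Here is a minimal sketch of the SIFT matching stage the paper describes, assuming OpenCV: keypoints are extracted from a background image and a slab image, matched with a nearest-neighbor ratio test, and pruned with a RANSAC homography as a geometric-consistency filter (the paper's own filter uses the relative locations of matched points, and its search-rectangle step is omitted). File names are hypothetical.

```python
import cv2
import numpy as np

bg = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
slab = cv2.imread("slab.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(bg, None)
kp2, des2 = sift.detectAndCompute(slab, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # geometric filtering
inliers = [g for g, keep in zip(good, mask.ravel()) if keep]
```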

Detecting the Prostate Contour in TRUS Image using Support Vector Machine and Rotation-invariant Textures (SVM과 회전 불변 텍스처 특징을 이용한 TRUS 영상의 전립선 윤곽선 검출)

  • Park, Jae Heung;Seo, Yeong Geon
    • Journal of Digital Contents Society
    • /
    • v.15 no.6
    • /
    • pp.675-682
    • /
    • 2014
  • The prostate is an organ found only in men. To diagnose prostate disease, transrectal ultrasound (TRUS) images are generally used, but detecting the prostate boundary is a challenging task due to weak boundaries, speckle noise, and the narrow range of gray levels. In this paper, a method for automatic prostate segmentation in TRUS images using a Support Vector Machine (SVM) is presented. The method involves preprocessing, Gabor feature extraction, training, and prostate segmentation. Speckle reduction in the preprocessing step is achieved with a stick filter, and a top-hat transform is applied for smoothing. A Gabor filter bank is implemented for the extraction of rotation-invariant texture features, and the SVM is trained to characterize prostate and non-prostate regions. Finally, the prostate boundary is extracted. Experiments conducted to validate the method show that the proposed algorithm extracted the prostate boundary with less than 10% error relative to boundaries provided manually by doctors.
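
As an illustration of the Gabor-plus-SVM pipeline described above, here is a hedged sketch that builds a small multi-orientation Gabor filter bank and trains an SVM on per-pixel responses; the kernel parameters, file names, and label mask are placeholders rather than the paper's settings, and pooling the responses over orientations (e.g. taking a max) would give the approximate rotation invariance the paper targets.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(img, thetas=8):
    """Stack Gabor responses over several orientations for each pixel."""
    feats = []
    for t in np.linspace(0, np.pi, thetas, endpoint=False):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                                  lambd=10.0, gamma=0.5)
        feats.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return np.stack(feats, axis=-1)                       # H x W x thetas

img = cv2.imread("trus.png", cv2.IMREAD_GRAYSCALE)        # hypothetical path
X = gabor_features(img).reshape(-1, 8)
y = (cv2.imread("mask.png", 0) > 0).ravel()               # manual labels (hypothetical)

clf = SVC(kernel="rbf").fit(X[::50], y[::50])             # subsample for speed
pred_mask = clf.predict(X).reshape(img.shape)             # prostate vs. non-prostate
```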

Region Decision Using Modified ICM Method (변형된 ICM 방식에 의한 영역판별)

  • Hwang Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.5 s.311
    • /
    • pp.37-44
    • /
    • 2006
  • In this paper, a new version of the ICM method (MICM, modified ICM), in which contextual information is modeled by Markov random fields (MRF), is introduced. To extract features, a new local MRF model with a fitting block neighbourhood is proposed. This model selects contextual information not only from relative intensity levels but also from the geometrically directional positions of neighbouring cliques. Feature extraction depends on each block's contribution to the local variance, which discriminates the image into several regions, for example content and background, with distinctive boundaries between them. The proposed algorithm performs segmentation using a directional block-fitting procedure that confines merging to spatially adjacent elements and generates a partition in which the pixels of a unified cluster have a homogeneous intensity level. Experiments with ink-rubbed copy images (takbon, 拓本) show that this method is quite effective for feature identification. In particular, the new algorithm preserves the details of the images well, without the over- and under-smoothing problems that occur in general iterated conditional modes (ICM). The method is also applicable to handwriting recognition.
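
For readers unfamiliar with ICM, the sketch below shows a minimal, pixel-wise iterated conditional modes loop with a Potts smoothness term; it is not the paper's block-based, directional MICM variant, and the weight beta and synthetic input are assumptions.

```python
import numpy as np

def icm(img, k=2, beta=0.3, iters=5):
    """Relabel each pixel to minimize an intensity term plus a Potts term."""
    centers = np.linspace(img.min(), img.max(), k)
    labels = np.abs(img[..., None] - centers).argmin(-1)      # initial labels
    for _ in range(iters):
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                data = (img[i, j] - centers) ** 2              # data term per label
                nbrs = [labels[i - 1, j], labels[i + 1, j],
                        labels[i, j - 1], labels[i, j + 1]]
                smooth = np.array([sum(l != n for n in nbrs) for l in range(k)])
                labels[i, j] = np.argmin(data + beta * smooth) # conditional mode
    return labels

img = np.random.rand(64, 64)        # synthetic stand-in for an ink-rubbing image
seg = icm(img, k=2)
```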

Extraction of Attentive Objects Using Feature Maps (특징 지도를 이용한 중요 객체 추출)

  • Park Ki-Tae;Kim Jong-Hyeok;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.5 s.311
    • /
    • pp.12-21
    • /
    • 2006
  • In this paper, we propose a technique for extracting attentive objects in images using feature maps, regardless of the complexity of the image or the position of the objects. The proposed method uses feature maps with edge and color information to extract attentive objects, together with a reference map created by integrating the feature maps. To create the reference map, feature maps that represent visually attentive regions in the image are constructed: an edge map, a CbCr map, and an H map, which contain information about boundary regions in terms of differences in intensity or color. A combination map representing meaningful boundaries is then created by integrating the reference map and the feature maps. Since the combination map represents only object boundaries, candidate object regions containing these boundaries are extracted from it using the convex hull algorithm. A segmentation algorithm is then applied to the candidate regions to separate object regions from background regions, yielding the real object regions. Experimental results show that the proposed method extracts attentive regions and objects efficiently, with 84.3% precision and 81.3% recall.
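
A rough sketch of the feature-map idea follows: an edge map and a CbCr contrast map are combined, and the convex hull of the strongest responses gives a candidate object region. The thresholds, weights, and the use of a bounding box instead of a full segmentation step are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")                              # hypothetical path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

edge_map = cv2.Canny(gray, 100, 200)
cbcr = ycrcb[..., 1:].astype(np.float32)                   # Cr and Cb channels
cbcr_map = np.linalg.norm(cbcr - cbcr.mean(axis=(0, 1)), axis=-1)  # color contrast
cbcr_map = cv2.normalize(cbcr_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

combination = cv2.addWeighted(edge_map, 0.5, cbcr_map, 0.5, 0)
pts = cv2.findNonZero((combination > 128).astype(np.uint8))
hull = cv2.convexHull(pts)                 # candidate object region boundary
x, y, w, h = cv2.boundingRect(hull)
candidate = img[y:y + h, x:x + w]
```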

Deep Learning-based Keypoint Filtering for Remote Sensing Image Registration (원격 탐사 영상 정합을 위한 딥러닝 기반 특징점 필터링)

  • Sung, Jun-Young;Lee, Woo-Ju;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.26 no.1
    • /
    • pp.26-38
    • /
    • 2021
  • In this paper, DLKF (Deep Learning Keypoint Filtering), a deep-learning-based keypoint filtering method for speeding up image registration of remote sensing images, is proposed. The complexity of conventional feature-based image registration arises in the feature matching step. To reduce this complexity, this paper proposes to keep only the keypoints detected on artificial structures, so that feature matching is performed with keypoints lying on the artificial structures of the image. To reduce the number of keypoints while preserving essential ones, keypoints adjacent to the boundaries of artificial structures are preserved, reduced images are used, and overlapping image patches are cropped to eliminate the noise at patch boundaries produced by the image segmentation method. The proposed method improves both the speed and the accuracy of registration. To verify the performance of DLKF, its speed and accuracy were compared with conventional keypoint extraction methods using remote sensing images from the KOMPSAT-3 satellite. Relative to the widely used SIFT-based registration method, the SURF-based method, which improves on SIFT's speed, ran 2.6 times faster while reducing the number of keypoints by about 18%, but its accuracy measure degraded from 3.42 to 5.43. With the proposed DLKF, the number of keypoints was reduced by about 82% and the speed improved by about 20.5 times, with an accuracy measure of 4.51.
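
As a hedged illustration of keypoint filtering with a structure mask (not the DLKF network itself), the sketch below keeps only SIFT keypoints that fall on or near an assumed building-segmentation mask before matching; the file names and dilation size are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("kompsat_scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical path
mask = cv2.imread("structure_mask.png", cv2.IMREAD_GRAYSCALE) # 255 = artificial structure

# Dilate the mask so keypoints adjacent to structure boundaries survive.
mask = cv2.dilate(mask, np.ones((9, 9), np.uint8))

sift = cv2.SIFT_create()
kps, des = sift.detectAndCompute(img, None)
keep = [i for i, kp in enumerate(kps)
        if mask[int(kp.pt[1]), int(kp.pt[0])] > 0]
kps = [kps[i] for i in keep]
des = des[keep]                      # far fewer descriptors enter the matching step
```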

Research on damage detection and assessment of civil engineering structures based on DeepLabV3+ deep learning model

  • Chengyan Song
    • Structural Engineering and Mechanics
    • /
    • v.91 no.5
    • /
    • pp.443-457
    • /
    • 2024
  • At present, traditional concrete surface inspection methods based on human vision suffer from high cost and safety risks, while classical computer vision methods rely on hand-crafted features, are sensitive to environmental changes, and are difficult to generalize. To solve these problems, this paper applies deep learning technology from the field of computer vision to achieve automatic feature extraction of structural damage, with excellent detection speed and strong generalization ability. The main contributions of this study are as follows: (1) A method based on the DeepLabV3+ convolutional neural network is proposed for surface detection of post-earthquake structural damage, including concrete cracks, spalling, and exposed steel bars. Key semantic information is extracted by different backbone networks, and data sets containing various kinds of surface damage are used for training, testing, and evaluation. Intersection-over-union scores of 54.4%, 44.2%, and 89.9% on the test set demonstrate the network's ability to identify different types of structural surface damage at the pixel level across varied testing scenarios. (2) A semantic segmentation model based on the DeepLabV3+ network is proposed for the detection and evaluation of post-earthquake structural components. Using a dataset that includes building structural components and their damage degrees for training, testing, and evaluation, semantic segmentation accuracies of 98.5% and 56.9% were recorded. To provide an assessment that considers both false positives and false negatives, the Mean Intersection over Union (Mean IoU) was employed as the primary evaluation metric, ensuring that performance in detecting and evaluating pixel-level damage in post-earthquake structural components is evaluated uniformly across all experiments. By incorporating deep learning technology, this study offers a solution for accurately identifying post-earthquake damage in civil engineering structures and contributes to empirical research on automated detection and evaluation in structural health monitoring.
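
As a loose illustration, the sketch below runs torchvision's DeepLabV3 (a close relative of DeepLabV3+, used here only as a stand-in) on a random input tensor and computes a mean IoU of the kind used as the paper's primary metric; the class count and all inputs are assumptions.

```python
import torch
import numpy as np
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=4)      # background + 3 damage classes (assumed)
model.eval()
x = torch.rand(1, 3, 512, 512)                 # stand-in for a concrete surface image
with torch.no_grad():
    pred = model(x)["out"].argmax(1).squeeze(0).numpy()

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes present in the target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

target = np.random.randint(0, 4, pred.shape)   # synthetic ground truth
print(mean_iou(pred, target, num_classes=4))
```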