• Title/Summary/Keyword: Visual Feature Extraction

Feature Extraction Techniques from Micro Drill Bits Images (마이크로 드릴 비트 영상에서의 특징 추출 기법)

  • Oh, Se-Jun; Kim, Nak-Hyun
    • Proceedings of the IEEK Conference / 2008.06a / pp.919-920 / 2008
  • In this paper, we present early-stage processing techniques for the visual inspection of metallic parts. Since metallic surfaces give rise to specular reflections, it is difficult to extract object boundaries using elementary segmentation techniques such as edge detection or binary thresholding. We present two techniques for finding object boundaries in micro drill bit images. First, we explain a technique for detecting blade boundaries using a directional correlation mask. Second, a line and angle extraction technique based on the Harris corner detector and the Hough transform is described (see the sketch below). These techniques have proven effective for detecting blade boundaries, and a number of experimental results on real images are presented.
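
The directional correlation mask is not detailed in the abstract, but the second technique pairs two standard operators. Below is a hedged sketch of that combination using OpenCV; the input file name ("bit.png") and all thresholds are assumptions, not values from the paper.

```python
# Hedged sketch: Harris corners + Hough transform for line/angle extraction.
import cv2
import numpy as np

gray = cv2.imread("bit.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Harris response: strong corners mark candidate blade-boundary junctions.
harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(harris > 0.01 * harris.max())

# Hough transform on an edge map: each detected line is (rho, theta),
# so the boundary angle falls out of theta directly.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line at {np.degrees(theta):.1f} deg, offset {rho:.1f}")
print(f"{len(corners)} corner candidates")
```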

Application of RS and GIS in Extraction of Building Damage Caused by Earthquake

  • Wang, X.Q.; Ding, X.; Dou, A.X.
    • Proceedings of the KSRS Conference / 2003.11a / pp.1206-1208 / 2003
  • The extraction of earthquake damage from remotely sensed imagery requires high spatial resolution and timely image acquisition. Traditionally, analog photographs and visual interpretation were used. It is now possible to acquire damage information from many commercial high-resolution remote sensing satellites, where the key technical challenges are processing speed and precision. The authors developed automatic and semiautomatic image processing techniques, including feature enhancement and classification, and designed the emergency Earthquake Damage and Losses Evaluate System based on Remote Sensing (RSEDLES). The paper introduces the functions of RSEDLES as well as its application to recent earthquakes (a simple change-detection sketch follows below).
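
The abstract does not detail RSEDLES's enhancement and classification algorithms; as a generic stand-in only, the sketch below flags changed areas by differencing co-registered pre- and post-event images, a common semiautomatic starting point. File names and the threshold are placeholders.

```python
# Generic stand-in, not the RSEDLES algorithm: pre/post-event differencing.
import cv2

# Hypothetical co-registered grayscale satellite images.
pre = cv2.imread("pre_event.png", cv2.IMREAD_GRAYSCALE)
post = cv2.imread("post_event.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(pre, post)                          # per-pixel change magnitude
_, damage = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # assumed threshold
damage = cv2.medianBlur(damage, 5)                     # suppress speckle noise
print(f"changed pixels: {cv2.countNonZero(damage)}")
```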

Improved Feature Extraction of Hand Movement EEG Signals based on Independent Component Analysis and Spatial Filter

  • Nguyen, Thanh Ha; Park, Seung-Min; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.4 / pp.515-520 / 2012
  • In a brain-computer interface (BCI) system, the most important part is the classification of human thoughts so that they can be translated into commands: the more accurate the classification, the more effective the BCI system. To improve the quality of the BCI system, we propose reducing noise and artifacts in the recorded data before analysis. We use auditory stimuli instead of visual ones to eliminate eye movement, unwanted visual activation, and gaze control. We apply the independent component analysis (ICA) algorithm to purify the sources that constitute the raw signals. One of the best-known spatial filters in the BCI context is common spatial patterns (CSP), which uses covariance matrices to maximize the variance of one class while minimizing that of the other. ICA and CSP thus act as coarse and refining filters, respectively, and improve the classification results of linear discriminant analysis (LDA); a sketch of this pipeline follows below.
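
A minimal sketch of the CSP → LDA stage of this pipeline, assuming epoched two-class trials; the trial shapes, number of filter pairs, and the use of SciPy/scikit-learn are assumptions, and the ICA artifact-removal step is presumed to have been applied to the continuous data beforehand.

```python
# Hedged sketch: CSP spatial filtering + LDA for two-class EEG trials.
# `trials` has shape (n_trials, n_channels, n_samples); `labels` holds 0/1.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials, labels, n_pairs=2):
    """CSP filters from the generalized eigendecomposition of class covariances."""
    covs = []
    for c in (0, 1):
        X = trials[labels == c]
        covs.append(np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0))
    # Maximize variance for class 0 relative to the composite covariance.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both eigenvalue extremes
    return vecs[:, picks].T                           # (2*n_pairs, n_channels)

def log_var_features(trials, W):
    """Project each trial through the CSP filters; normalized log-variance."""
    Z = np.einsum("fc,ncs->nfs", W, trials)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

def train_csp_lda(trials, labels):
    W = csp_filters(trials, labels)
    lda = LinearDiscriminantAnalysis().fit(log_var_features(trials, W), labels)
    return W, lda
```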

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo; Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.141-150 / 2013
  • This study proposes a method that separates an object using a depth sensor and tracks it with an Active Shape Model (ASM). Unlike a common visual camera, a depth sensor is not affected by illumination intensity, so objects can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. In addition, morphology operations and labeling are applied for image correction and efficient object extraction (a sketch of this step follows below). By applying the ASM to the extracted object information, the object can be tracked more robustly, since the ASM is robust to object occlusion. Compared to visual camera-based object tracking algorithms, the proposed depth-based method is more efficient and more robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
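
A minimal sketch of the extraction step (depth slicing, morphology, labeling), assuming a 16-bit depth map in millimeters; the depth band and kernel size are placeholders, not the paper's values.

```python
# Hedged sketch: extract the largest object inside an assumed depth band.
import cv2
import numpy as np

def extract_object_mask(depth, near_mm=500, far_mm=1500):
    """Depth slice -> morphological cleanup -> keep the largest labeled blob."""
    mask = cv2.inRange(depth, near_mm, far_mm)               # binary depth slice
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:                                               # background only
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```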

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook; Kim, Tae-Hwan; Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.47-60 / 2012
  • Video data is unstructured and complex in form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features contained in video content have been conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic associations within video data. Recently, studies that use clustering methods to organize semantically associated video shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection use clustering algorithms based on similarity measures between video shots that depend mainly on color features. However, correctly identifying a video shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data are noisy and change abruptly when an unexpected object intervenes. In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots belonging to the same event based on visual features including the color histogram, the corner edge, and the object color histogram. SDCEO is notable in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual transitions as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames (a sketch of this step follows below). In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature, comparing it between the last frame of the previous shot boundary and the first frame of the next. In the Key-frame Extraction step, SDCEO compares each frame with all frames, measures similarity using the histogram Euclidean distance, and selects the frame most similar to all frames in the same shot boundary as the key-frame. The Video Scene Detector clusters associated shots belonging to the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. After detecting video scenes, SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shot boundaries is less than a threshold h.
In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the experimental results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
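
A minimal sketch of the color-histogram stage of the Shot Bound Identifier, hedged: the bin count, the similarity measure (Bhattacharyya distance here, as one reasonable choice), and the cut threshold are assumptions rather than the paper's settings.

```python
# Hedged sketch: histogram-based shot boundary detection.
import cv2

def shot_boundaries(video_path, bins=16, threshold=0.5):
    """Flag frames whose quantized color histogram differs sharply from the last."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, cuts, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Quantized color histogram over all three channels.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:          # large distance => candidate cut
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```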

Image Feature Point Selection Method Using Nearest Neighbor Distance Ratio Matching (최인접 거리 비율 정합을 이용한 영상 특징점 선택 방법)

  • Lee, Jun-Woo; Jeong, Jea-Hyup; Kang, Jong-Wook; Na, Sang-Il; Jeong, Dong-Seok
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.124-130 / 2012
  • In this paper, we propose a feature point selection method for MPEG CDVS CE-7, which is under development as an international standard. Among the large number of extracted feature points, the more important feature points used in image matching should be selected to keep the image descriptor compact. The proposed method removes, in the extraction phase, the feature points that would be filtered out by nearest neighbor distance ratio matching in the matching phase (see the sketch below). This avoids wasting feature points and allows additional feature points to be employed. The experimental results show that the proposed method obtains a true positive rate improvement of about 2.3% in the pair-wise matching test compared with the Test Model.
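
A minimal sketch of nearest neighbor distance ratio matching, the criterion the method applies back at the extraction phase; the 0.8 ratio and the brute-force L2 matcher are assumptions.

```python
# Hedged sketch: NNDR (ratio test) matching between two descriptor sets.
import cv2

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Keep a match only if its best distance clearly beats the second best."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(desc1, desc2, k=2):
        if m.distance < ratio * n.distance:
            good.append(m)
    return good
```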

Study on Structure Visual Inspection Technology using Drones and Image Analysis Techniques (드론과 이미지 분석기법을 활용한 구조물 외관점검 기술 연구)

  • Kim, Jong-Woo; Jung, Young-Woo; Rhim, Hong-Chul
    • Journal of the Korea Institute of Building Construction / v.17 no.6 / pp.545-557 / 2017
  • This study concerns an efficient alternative for the visual inspection of deteriorated concrete infrastructure. By combining industrial drones and deep learning-based image analysis with traditional visual inspection, we tried to reduce manpower, time, and cost, and to cope with tall and domed structures. The onboard device consists of a high-resolution camera for detecting cracks wider than 0.3 mm, a LiDAR sensor, and an embedded image processing module. It was mounted on an industrial drone, which took sample images of damage from site specimens through automatic flight navigation. In addition, the damaged parts of the site specimens were used to measure not only the width and length of cracks but also white rust, and these measurements were compared with the final image analysis results. Using the image analysis techniques, 54 sample images of damage were analyzed through a segmentation - feature extraction - decision making process (an illustrative sketch follows below), and the analysis parameters were extracted using the supervised mode of the deep learning platform. Image analysis of 60 newly added, unsupervised image samples was then performed based on the extracted parameters, yielding a damage detection rate of 90.5%.
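
As an illustration only (this is not the authors' deep learning pipeline), the sketch below shows a segmentation - feature extraction - decision flow on a crack image using classical thresholding; all parameter values are assumptions, and cv2.ximgproc requires opencv-contrib.

```python
# Illustrative sketch of segmentation -> feature extraction -> decision.
import cv2
import numpy as np

def crack_metrics(gray):
    """Segment dark crack pixels, estimate length and mean width in pixels."""
    # Segmentation: cracks are darker than the surrounding concrete.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # Feature extraction: skeleton length ~ crack length; area / length ~ width.
    skeleton = cv2.ximgproc.thinning(mask)      # needs opencv-contrib
    length = cv2.countNonZero(skeleton)
    width = cv2.countNonZero(mask) / max(length, 1)
    # Decision: report whether any crack-like structure was found.
    return {"crack": length > 0, "length_px": length, "mean_width_px": width}
```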

MLSE-Net: Multi-level Semantic Enriched Network for Medical Image Segmentation

  • Di Gai; Heng Luo; Jing He; Pengxiang Su; Zheng Huang; Song Zhang; Zhijun Tu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2458-2482 / 2023
  • Medical image segmentation techniques based on convolutional neural networks suffer from parameter redundancy and unsatisfactory target localization during feature extraction, which results in less accurate segmentations for assisting doctors in diagnosis. In this paper, we propose a multi-level semantic-rich encoding-decoding network, which consists of a Pooling-Conv-Former (PCFormer) module and a Cbam-Dilated-Transformer (CDT) module. The PCFormer module tackles the parameter explosion of the conventional transformer and compensates for the feature loss in the down-sampling process. In the CDT module, the Cbam attention module is adopted to highlight feature regions by implicitly blending the intersection of attention mechanisms, and the Dilated convolution-Concat (DCC) module is designed as a parallel concatenation of multiple atrous convolution blocks to explicitly enlarge the receptive field (a sketch of such a block follows below). In addition, a MultiHead Attention-DwConv-Transformer (MDTransformer) module is utilized to clearly distinguish the target region from the background region. Extensive experiments on medical image segmentation with the Glas, SIIM-ACR, ISIC and LGG datasets demonstrate that the proposed network outperforms existing advanced methods in terms of both objective evaluation and subjective visual quality.
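
A minimal sketch of a parallel atrous-convolution block in the spirit of the DCC module, written in PyTorch; the channel counts and dilation rates are assumptions, not the paper's configuration.

```python
# Hedged sketch: parallel dilated 3x3 convolutions, concatenated then fused.
import torch
import torch.nn as nn

class DilatedConcat(nn.Module):
    """Run several atrous convolutions in parallel and concatenate the outputs."""
    def __init__(self, in_ch, branch_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(branch_ch * len(rates), in_ch, 1)  # 1x1 fusion

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# The block preserves spatial size and channel count.
x = torch.randn(1, 64, 32, 32)
print(DilatedConcat(64, 32)(x).shape)  # torch.Size([1, 64, 32, 32])
```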

Effective Nonlinear Filters with Visual Perception Characteristics for Extracting Sketch Features (인간시각 인식특성을 지닌 효율적 비선형 스케치 특징추출 필터)

  • Cho, Sung-Mok; Cho, Ok-Lae
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.139-145 / 2006
  • Feature extraction in digital images has many applications, such as robot vision, medical diagnostic systems, and motion video transmission. Several methods exist for extracting features from digital images, for example the nonlinear gradient, the nonlinear Laplacian, and entropy convolutional filters. However, conventional convolutional filters are usually not efficient at extracting image features, because feature perception in the human eye is more sensitive in dark regions than in bright regions. This paper describes a few nonlinear filters for extracting sketch features that use the difference between the arithmetic mean and the harmonic mean within a window (see the sketch below). They have several advantages: simple computation, dependence on local intensities, and low sensitivity to small intensity changes in very dark regions. Experimental results demonstrate more successful feature extraction than conventional filters over a wide variety of intensity variations.
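
A minimal sketch of such a filter, assuming the arithmetic-minus-harmonic-mean form described above; the window size and output scaling are assumptions.

```python
# Hedged sketch: arithmetic mean minus harmonic mean in a local window.
import numpy as np
from scipy.ndimage import uniform_filter

def am_hm_sketch(image, size=3, eps=1e-6):
    """AM - HM per window; the AM-HM inequality keeps the response non-negative."""
    img = image.astype(np.float64) + eps          # avoid division by zero
    am = uniform_filter(img, size)                # arithmetic mean
    hm = 1.0 / uniform_filter(1.0 / img, size)    # harmonic mean
    out = am - hm
    return (255 * out / (out.max() + eps)).astype(np.uint8)
```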

User-Guidable Abstract Line Drawing of 2D Images (사용자 제어가 용이한 이차원 영상의 추상화된 라인 드로잉 생성)

  • Son, Min-Jung; Lee, Yun-Jin; Kang, Hen-Ry; Lee, Seung-Yong
    • Journal of KIISE: Computer Systems and Theory / v.37 no.2 / pp.110-125 / 2010
  • We present a novel scheme for generating line drawings from 2D images, aiming to facilitate effective visual communication. In contrast to conventional edge detectors, our technique imitates the human line drawing process to generate lines effectively and intuitively. The technique consists of three parts: line extraction, line rendering, and user guidance. In line extraction, we extract lines by estimating a likelihood function to effectively find genuine shape boundaries (a generic stand-in sketch follows below). In line rendering, we consider the feature scale and the blurriness of lines, with which the detail and focus level of the lines are controlled; we also employ stroke textures to provide a variety of illustration styles. User guidance allows the shapes and positions of lines to be modified interactively, with immediate response provided by a GPU implementation of most line extraction operations. Experimental results demonstrate that our technique generates various kinds of line drawings from 2D images, enabled by control over detail, focus, and style.
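
The paper's likelihood function for line extraction is not specified in this abstract; as a generic stand-in only, the sketch below builds a crude per-pixel line likelihood from smoothed gradient magnitudes. The smoothing sigma, percentile, and file names are assumptions.

```python
# Generic stand-in, NOT the authors' likelihood model: gradient-based lines.
import cv2
import numpy as np

gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

blur = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)      # suppress texture noise
gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1)
likelihood = np.hypot(gx, gy)
likelihood /= likelihood.max()                         # normalize to [0, 1]

# Keep only the most line-like pixels, drawn dark on white.
drawing = np.where(likelihood > np.percentile(likelihood, 95), 0, 255)
cv2.imwrite("drawing.png", drawing.astype(np.uint8))
```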