• Title/Summary/Keyword: Network Feature Extraction


A Thoracic Spine Segmentation Technique for Automatic Extraction of VHS and Cobb Angle from X-ray Images (X-ray 영상에서 VHS와 콥 각도 자동 추출을 위한 흉추 분할 기법)

  • Ye-Eun, Lee;Seung-Hwa, Han;Dong-Gyu, Lee;Ho-Joon, Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.51-58 / 2023
  • In this paper, we propose an organ segmentation technique for the automatic extraction of medical diagnostic indicators from X-ray images. To calculate diagnostic indicators of heart disease and spinal disease such as VHS (vertebral heart scale) and the Cobb angle, the thoracic spine, carina, and heart must be segmented accurately in a chest X-ray image. We adopted a deep neural network model in which a high-resolution representation of the image is maintained at each layer in parallel with branches converted to low-resolution feature maps. This structure allows the relative position information in the image to be reflected effectively in the segmentation process. We show that learning performance can be improved by combining an OCR module, in which pixel information and object information interact in a multi-step process, with a channel attention module, which lets each channel of the network be weighted differently. In addition, a method of augmenting the training data is presented to provide robust performance against changes in the position, shape, and size of the subject in the X-ray image. The effectiveness of the proposed method was evaluated through experiments using 145 human chest X-ray images and 118 animal X-ray images.
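The channel attention idea above (weighting each channel of a feature map differently) can be sketched as a squeeze-and-excitation-style block. This is a minimal illustration, not the paper's network: the two bottleneck weight matrices here are random stand-ins for what would be learned parameters.

```python
import numpy as np

def channel_attention(feature_map, reduction=4, rng=None):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    feature_map: array of shape (C, H, W). w1/w2 are random stand-ins
    for a learned two-layer bottleneck MLP.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feature_map.shape[0]
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP producing one weight per channel
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)            # ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid -> (0, 1)
    # Reweight each channel of the feature map
    return feature_map * weights[:, None, None], weights

fmap = np.random.default_rng(1).standard_normal((8, 4, 4))
out, w = channel_attention(fmap)
```

In training, the gradient flows through `weights`, so channels that help the segmentation loss are amplified and the rest are suppressed.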

Proposal of a Convolutional Neural Network Model for the Classification of Cardiomegaly in Chest X-ray Images (흉부 X-선 영상에서 심장비대증 분류를 위한 합성곱 신경망 모델 제안)

  • Kim, Min-Jeong;Kim, Jung-Hun
    • Journal of the Korean Society of Radiology / v.15 no.5 / pp.613-620 / 2021
  • The purpose of this study is to propose a convolutional neural network model that can classify chest X-ray images as normal or abnormal (cardiomegaly). The training and test data were chest X-ray images acquired from patients diagnosed as normal or with cardiomegaly. Using the proposed deep learning model, we classified normal and abnormal (cardiomegaly) images and verified the classification performance. The proposed model achieved a classification accuracy of 99.88% for normal versus abnormal (cardiomegaly). Validation on normal test images yielded 95% accuracy, 100% precision, 90% recall, and a 96% F1 score; validation on abnormal (cardiomegaly) test images yielded 95% accuracy, 92% precision, 100% recall, and a 96% F1 score. These results show that the proposed convolutional neural network performs very well at feature extraction and classification of chest X-ray images. The model is expected to prove useful for disease classification of chest X-ray images, and further study of CNN models focusing on the features of medical images is needed.
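The four metrics reported above follow directly from confusion-matrix counts. A minimal sketch (the counts below are hypothetical, chosen only to show how a model that misses no positives can still have precision below 100%):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: every cardiomegaly case is found (fn=0, recall=100%)
# but a few normals are flagged (fp>0), pulling precision down.
acc, prec, rec, f1 = classification_metrics(tp=46, fp=4, fn=0, tn=50)
```

Note how high recall with imperfect precision (or vice versa) is exactly the asymmetry the abstract reports between the normal and abnormal test sets.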

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount via selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. Thus, the application has to analyze the region of interest and specific character types to extract only the valuable information. We adopted CNN (convolutional neural network)-based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition, which analyzes only the region of interest for character extraction. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the sequential feature vectors into character strings via time-series analysis. In this research, the character strings of interest are the device ID and the gas usage amount: the device ID consists of 12 Arabic numerals and the gas usage amount of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (first in, first out) structure. The slave process consists of the three deep neural networks that conduct the character recognition and runs on the NVIDIA GPU. The slave process continually polls the input queue for recognition requests. When requests arrive from the master process, the slave process converts the queued image into a device ID string, a gas usage amount string, and the position information of those strings, returns this information to an output queue, and switches back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
Of these, 22,985 images were used for training and validation and 4,135 for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images; noise means images with noise; reflex means images with light reflection in the gasometer region; scale means images with small object size due to long-distance capture; and slant means images that are not horizontally level. Final character-string recognition accuracies on normal data were 0.960 for the device ID and 0.864 for the gas usage amount.
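The master-slave FIFO structure described above can be sketched with standard producer-consumer primitives. This is a toy stand-in, not the paper's AWS deployment: the recognition step is mocked, and the queue/thread names are illustrative.

```python
import queue
import threading

def slave_worker(input_q, output_q):
    """Slave stand-in: polls the input queue, 'recognizes' a request, and
    pushes the result to the output queue (recognition itself is mocked)."""
    while True:
        request = input_q.get()      # blocks until a request arrives (FIFO)
        if request is None:          # sentinel: shut down
            break
        image_id, image = request
        # The real system would run three neural networks here; we fake
        # a device-ID/usage reading instead.
        output_q.put((image_id, {"device_id": "0" * 12, "usage": "1234"}))
        input_q.task_done()

input_q, output_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=slave_worker, args=(input_q, output_q))
worker.start()

for i in range(3):                   # master: enqueue reading requests
    input_q.put((i, b"fake-image-bytes"))
input_q.put(None)                    # stop the worker
worker.join()

results = sorted(output_q.get() for _ in range(3))
```

In the real architecture the master and slave are separate processes on separate hardware (CPU vs. GPU), but the queue discipline, blocking get, FIFO order, result queue, is the same.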

Highly Reliable Fault Detection and Classification Algorithm for Induction Motors (유도전동기를 위한 고 신뢰성 고장 검출 및 분류 알고리즘 연구)

  • Hwang, Chul-Hee;Kang, Myeong-Su;Jung, Yong-Bum;Kim, Jong-Myon
    • The KIPS Transactions: Part B / v.18B no.3 / pp.147-156 / 2011
  • This paper proposes a three-stage (preprocessing, feature extraction, and classification) fault detection and classification algorithm for induction motors. In the first stage, a low-pass filter removes noise components from the fault signal. In the second stage, a discrete cosine transform (DCT) and a statistical method extract features of the fault signal. Finally, a back-propagation neural network (BPNN) classifies the fault signal. To evaluate the performance of the proposed algorithm, we used one-second normal/abnormal vibration signals of an induction motor sampled at 8 kHz. Experimental results showed that the proposed algorithm achieves about 100% accuracy in fault classification, a 50% improvement over an existing fault detection algorithm based on cross-covariance. In a real-world data acquisition environment, unwanted noise is usually mixed into the signal, so we conducted an additional simulation to evaluate how well the proposed algorithm classifies fault signals into which white Gaussian noise has been injected. The simulation results showed that the proposed algorithm achieves over 98% accuracy in fault classification. Moreover, we developed a testbed system based on a TI DSP (digital signal processor) to implement and verify the functionality of the proposed algorithm.
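The first two stages of such a pipeline can be sketched in a few lines. This is a minimal stand-in, assuming a moving-average low-pass filter and mean/standard-deviation statistics over the DCT magnitudes; the paper's exact filter and statistics are not specified here.

```python
import math

def moving_average(signal, window=5):
    """Simple low-pass filter: moving average over a sliding window."""
    half = window // 2
    return [
        sum(signal[max(0, i - half): i + half + 1])
        / len(signal[max(0, i - half): i + half + 1])
        for i in range(len(signal))
    ]

def dct_ii(signal):
    """Naive DCT-II (O(N^2)); fine for a short illustrative signal."""
    n = len(signal)
    return [
        sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
            for i, x in enumerate(signal))
        for k in range(n)
    ]

def statistical_features(coeffs):
    """Mean and standard deviation of DCT magnitudes as a feature vector."""
    mags = [abs(c) for c in coeffs]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return [mean, math.sqrt(var)]

# Toy vibration signal: a 60 Hz-like tone sampled at 8 kHz
sig = [math.sin(2 * math.pi * 60 * t / 8000) for t in range(256)]
features = statistical_features(dct_ii(moving_average(sig)))
```

The resulting small feature vector is what would be fed to the BPNN classifier in the third stage.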

Classification of Handwritten and Machine-printed Korean Address Image based on Connected Component Analysis (연결요소 분석에 기반한 인쇄체 한글 주소와 필기체 한글 주소의 구분)

  • 장승익;정선화;임길택;남윤석
    • Journal of KIISE: Software and Applications / v.30 no.10 / pp.904-911 / 2003
  • In this paper, we propose an effective method for distinguishing between machine-printed and handwritten Korean address images. It is important to know whether an input image is handwritten or machine-printed, because methods for handwritten images differ considerably from those for machine-printed images in applications such as address reading, form processing, and FAX routing. Our method consists of three blocks: valid connected-component grouping, feature extraction, and classification. Features related to the width and position of groups of valid connected components are fed to a neural network for classification. An experiment with live Korean address images demonstrated the superiority of the proposed method: the correct classification rate on 3,147 test images was about 98.85%.
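The connected-component step underlying this method can be sketched with a standard BFS labelling pass followed by simple width/position features. This is an illustrative sketch of generic connected-component analysis, not the paper's specific grouping or validity criteria.

```python
from collections import deque

def connected_components(bitmap):
    """4-connected component labelling on a binary grid via BFS.

    Returns a list of components, each a list of (row, col) pixels."""
    rows, cols = len(bitmap), len(bitmap[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] and not seen[r][c]:
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and bitmap[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                components.append(comp)
    return components

def width_features(components):
    """Width and leftmost position of each component's bounding box."""
    feats = []
    for comp in components:
        xs = [x for _, x in comp]
        feats.append((max(xs) - min(xs) + 1, min(xs)))
    return feats

grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
]
comps = connected_components(grid)
```

Intuitively, machine-printed text produces components whose widths and baselines are far more regular than handwriting, which is what makes such features discriminative for the classifier.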

Prostate Object Extraction in Ultrasound Volume Using Wavelet Transform (초음파 볼륨에서 웨이브렛 변환을 이용한 전립선 객체 추출)

  • Oh Jong-Hwan;Kim Sang-Hyun;Kim Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.67-77 / 2006
  • This thesis proposes an efficient method for extracting a prostate volume from a 3D ultrasound image using the wavelet transform and SVM classification. In the proposed method, a modulus image for each 2D slice is generated by averaging the detail images of horizontal and vertical orientations over several scales, which yields the sharpest local maxima and the lowest noise power compared with any single scale. Prostate contour vertices are then located accurately by an SVM classifier whose feature vectors consist of intensity and texture moments measured along radial lines. Experimental results show that the proposed method yields a mean absolute distance of 1.89 pixels on average when contours drawn manually by an expert are used as reference data.
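The multiscale modulus idea, averaging horizontal and vertical detail magnitudes over several scales so edges reinforce while noise averages out, can be sketched as follows. This uses crude finite differences as a stand-in for true wavelet detail bands, so it illustrates the averaging logic only, not the thesis's transform.

```python
def detail_images(img, scale):
    """Horizontal/vertical detail at a given scale via finite differences
    over a `scale`-pixel step (a crude stand-in for wavelet detail bands)."""
    rows, cols = len(img), len(img[0])
    horiz = [[abs(img[r][c + scale] - img[r][c]) if c + scale < cols else 0.0
              for c in range(cols)] for r in range(rows)]
    vert = [[abs(img[r + scale][c] - img[r][c]) if r + scale < rows else 0.0
             for c in range(cols)] for r in range(rows)]
    return horiz, vert

def modulus_image(img, scales=(1, 2, 4)):
    """Average the detail magnitudes of both orientations over several scales."""
    rows, cols = len(img), len(img[0])
    acc = [[0.0] * cols for _ in range(rows)]
    for s in scales:
        h, v = detail_images(img, s)
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += (h[r][c] + v[r][c]) / 2.0
    n = len(scales)
    return [[acc[r][c] / n for c in range(cols)] for r in range(rows)]

# A toy image with a vertical edge: the modulus peaks near the boundary
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
mod = modulus_image(img)
```

The boundary column dominates the averaged modulus at every scale, whereas isolated noise would contribute at only one scale and be diluted by the average.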

Extraction of Classification Boundary for Fuzzy Partitions and Its Application to Pattern Classification (퍼지 분할을 위한 분류 경계의 추출과 패턴 분류에의 응용)

  • Son, Chang-S.;Seo, Suk-T.;Chung, Hwan-M.;Kwon, Soon-H.
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.685-691 / 2008
  • Selecting classification boundaries in fuzzy rule-based classification systems is an important and difficult problem, so various methods based on learning processes such as neural networks and genetic algorithms have been proposed for it. In a previous study, we pointed out the limitations of those methods and discussed a method for fuzzy partitioning of the overlapped region in feature space, in order to avoid the time-consuming tuning required when additional parameters for the fuzzy membership functions are necessary. In this paper, extending that method, we propose a way to determine three types of classification boundary (non-overlapping, overlapping, and a single boundary point) from the statistical information of a given dataset, without learning. Finally, we show the effectiveness of the proposed method through experiments on pattern classification problems using the modified IRIS and standard IRIS datasets.
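The three boundary types can be distinguished on a single feature axis from per-class ranges alone. A minimal sketch, assuming the "statistical information" is each class's (min, max) interval; the actual paper may use richer statistics:

```python
def boundary_type(range_a, range_b):
    """Classify the relation between two 1-D class intervals (min, max)
    and propose a crisp boundary where one exists.

    Returns one of:
      ("non-overlapping", midpoint)  -- a gap separates the classes
      ("boundary point",  point)     -- the intervals touch at one value
      ("overlapping",     (lo, hi))  -- the overlapped region itself
    """
    (a_min, a_max), (b_min, b_max) = sorted([range_a, range_b])
    if a_max < b_min:
        return "non-overlapping", (a_max + b_min) / 2.0
    if a_max == b_min:
        return "boundary point", a_max
    return "overlapping", (b_min, min(a_max, b_max))

# Petal-length-style statistics for two classes (illustrative numbers)
kind, boundary = boundary_type((1.0, 1.9), (3.0, 5.1))
```

In the non-overlapping and boundary-point cases a crisp rule suffices; only the overlapping case requires fuzzy partitioning of the shared region, which is exactly where the proposed method focuses.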

Design for Automatic Building of a Device Database and Device Identification Algorithm in Power Management System (전력 관리 시스템의 장치 데이터베이스 자동 구축 및 장치 식별 알고리즘 설계)

  • Hong, Sukil;Choi, Kwang-Soon;Hong, Jiman
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.4 / pp.403-411 / 2014
  • In this paper, an algorithm for extracting the features of home appliances and automatically building a database to identify them is designed and presented. For verification, a software library supporting this algorithm was implemented and added to a power management system server, which had already been implemented to support real-time monitoring of home appliances' power consumption and control of their power. The implemented system consists of a server and clients, each of which measures the power consumed by the home appliance plugged into it and transmits the information to the server in real time over a wireless network. Experiments verified that any home appliance connected to a specific client can be identified.
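One plausible core of such a device-identification step is nearest-neighbor matching of a measured power signature against the database. The abstract does not specify the matching rule, so everything below, the feature layout, the distance metric, and the threshold, is a hypothetical sketch:

```python
import math

def identify(signature, database, threshold=5.0):
    """Match a measured power signature to the nearest entry in the device
    database by Euclidean distance; return None if nothing is close enough.

    Hypothetical feature layout: [mean watts, peak watts, duty cycle]."""
    best_name, best_dist = None, float("inf")
    for name, ref in database.items():
        dist = math.dist(signature, ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

db = {
    "refrigerator": [150.0, 600.0, 0.35],
    "television":   [120.0, 140.0, 1.00],
}
guess = identify([152.0, 598.0, 0.34], db)
```

The threshold keeps unknown appliances from being forced onto the nearest database entry; a new device falling outside it would instead trigger the automatic database-building path.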

A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

  • Xu, Yi;Chen, Quansheng;Liu, Yan;Sun, Xin;Huang, Qiping;Ouyang, Qin;Zhao, Jiewen
    • Food Science of Animal Resources / v.38 no.2 / pp.362-375 / 2018
  • This study proposes a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and their freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired with the HMI system and processed as follows. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray-level co-occurrence matrix (GLCM). Next, the feature data were reduced in dimensionality by Fisher discriminant analysis (FDA) before building the classification models. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, the back-propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system, combined with multivariate algorithms, can evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control.
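The GLCM texture step in this pipeline can be sketched directly. This is a minimal illustration of a co-occurrence matrix for one pixel offset with two classic statistics (contrast and energy); the paper's offsets, quantization levels, and chosen statistics may differ.

```python
def glcm(image, levels=4, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one offset, normalized to sum 1."""
    dy, dx = offset
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dy, c + dx
            if 0 <= nr < rows and 0 <= nc < cols:
                counts[image[r][c]][image[nr][nc]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def texture_features(p):
    """Contrast and energy, two classic GLCM statistics."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return contrast, energy

# Toy 4-level image: four uniform quadrants of different gray levels
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
contrast, energy = texture_features(glcm(img))
```

Such GLCM statistics computed on the PCA-selected characteristic images form the feature vectors that FDA then compresses for the classifiers.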

Flower Recognition System Using OpenCV on Android Platform (OpenCV를 이용한 안드로이드 플랫폼 기반 꽃 인식 시스템)

  • Kim, Kangchul;Yu, Cao
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.123-129 / 2017
  • Mobile phones with high-tech cameras and large memories have recently been launched, and people upload pictures of beautiful scenes or unknown flowers to social networks. This paper develops a flower recognition system that can provide information on flowers even in places where mobile communication is unavailable. It consists of a registration part for reference flowers and a recognition part based on OpenCV for the Android platform. A new color classification method using the RGB color channels and K-means clustering is proposed to reduce the recognition processing time, and ORB is used for feature extraction with the Brute-Force Hamming algorithm for matching. We use 12 kinds of flowers in four color groups, with 60 images for the reference DB design and 60 images for testing. Results show a success rate of 83.3% and an average recognition time of 2.58 s on a Huawei ALEUL00, so the proposed system is suitable for a mobile phone without network access.
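The K-means color-grouping step can be sketched in plain Python. This is a generic Lloyd-style k-means on RGB triples with a fixed iteration budget, an illustration of the technique, not the paper's OpenCV implementation or its exact color-space handling.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on RGB triples: assign each point to the nearest
    centroid, then recompute centroids, for a fixed iteration count."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster goes empty
                centroids[i] = tuple(
                    sum(ch) / len(cluster) for ch in zip(*cluster)
                )
    return centroids, clusters

# Toy pixels: reds and blues should fall into two separate clusters
pixels = [(250, 10, 10), (240, 20, 15), (10, 10, 250), (20, 5, 240)]
centroids, clusters = kmeans(pixels, k=2)
```

Restricting ORB matching to the reference flowers in the same color cluster as the query is what cuts the recognition time: the Brute-Force Hamming matcher then compares against a fraction of the DB instead of all of it.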