• Title/Summary/Keyword: Deep Features


Improvement of Classification Accuracy of Different Finger Movements Using Surface Electromyography Based on Long Short-Term Memory (LSTM을 이용한 표면 근전도 분석을 통한 서로 다른 손가락 움직임 분류 정확도 향상)

  • Shin, Jaeyoung;Kim, Seong-Uk;Lee, Yun-Sung;Lee, Hyung-Tak;Hwang, Han-Jeong
    • Journal of Biomedical Engineering Research / v.40 no.6 / pp.242-249 / 2019
  • Forearm electromyography (EMG) generated by wrist movements has been widely used to develop electrical prosthetic hands, but EMG generated by finger movements has rarely been used even though 20% of amputees lose fingers. The goal of this study is to improve the classification performance of different finger movements using a deep learning algorithm, thereby contributing to the development of a high-performance finger-based prosthetic hand. Ten participants took part in this study, and they performed seven different finger movements (thumb, index, middle, ring, little, fist, and rest) forty times each while EMG was measured from the back of the right hand using four bipolar electrodes. We extracted the mean absolute value (MAV), root mean square (RMS), and mean (MEAN) from the measured EMG of each trial as features, and 5x5-fold cross-validation was performed to estimate the classification performance for the seven different finger movements. A long short-term memory (LSTM) model was used as the classifier, and linear discriminant analysis (LDA), a classifier widely used in previous studies, was also used for comparison. The best performance of the LSTM model (sensitivity: 91.46 ± 6.72%; specificity: 91.27 ± 4.18%; accuracy: 91.26 ± 4.09%) was significantly better than that of LDA (sensitivity: 84.55 ± 9.61%; specificity: 84.02 ± 6.00%; accuracy: 84.00 ± 5.87%). Our result demonstrates the feasibility of a deep learning algorithm (LSTM) for improving the classification of different finger movements using EMG.
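
A minimal sketch (in PyTorch) of an LSTM classifier over windowed EMG feature sequences, loosely following the setup described above; the channel count (4 electrodes), feature count (MAV/RMS/MEAN per electrode), class count (7), and sequence length are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class EMGLSTMClassifier(nn.Module):
    def __init__(self, n_features=3 * 4, hidden_size=64, n_classes=7):
        super().__init__()
        # One feature vector (MAV, RMS, MEAN per electrode) per time step.
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):           # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)  # h_n: (num_layers, batch, hidden_size)
        return self.fc(h_n[-1])     # class logits: (batch, n_classes)

# Example: a batch of 8 trials, each with 50 time steps of 12 features.
logits = EMGLSTMClassifier()(torch.randn(8, 50, 12))
print(logits.shape)  # torch.Size([8, 7])
```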

Deep Learning for Herbal Medicine Image Recognition: Case Study on Four-herb Product

  • Shin, Kyungseop;Lee, Taegyeom;Kim, Jinseong;Jun, Jaesung;Kim, Kyeong-Geun;Kim, Dongyeon;Kim, Dongwoo;Kim, Se Hee;Lee, Eun Jun;Hyun, Okpyung;Leem, Kang-Hyun;Kim, Wonnam
    • Proceedings of the Plant Resources Society of Korea Conference / 2019.10a / pp.87-87 / 2019
  • The consumption of herbal medicine and related products (herbal products) has increased in South Korea, and at the same time concerns about the quality, safety, and efficacy of herbal products are being raised. Currently, herbal products are standardized and controlled according to the requirements of the Korean Pharmacopoeia, the National Institute of Health, and the Ministry of Public Health and Social Affairs. The validation of herbal products and their medicinal components is important, since many of these products are composed of two or more medicinal plants; however, there are no tools to support the validation process. Interest in deep learning has exploded over the past decade, and algorithms for herbal medicine have been reported for herb recognition, symptom-related target prediction, and drug repositioning. In this study, individual images of four herbs (Panax ginseng C.A. Meyer, Atractylodes macrocephala Koidz, Poria cocos Wolf, Glycyrrhiza uralensis Fischer), actually sold in the market, were acquired. Image preprocessing steps such as noise reduction and resizing were applied. After the features were optimized, we applied the GoogLeNet Inception v4 model for herb image recognition. Experimental results show that our method achieved a test accuracy of 95%. However, there are two limitations in the current study. First, due to the relatively small data collection (100 images), the training loss is much lower than the validation loss, which indicates an overfitting problem. Second, herbal products are mostly mixtures, and the applied method cannot reliably detect a single herb within a mixture. Thus, larger data collection and improved object detection are needed for better classification.
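
A minimal transfer-learning sketch for four-class herb image recognition. The paper names GoogLeNet/Inception v4; here the backbone comes from the `timm` library, and the dataset path, image size, and training settings are placeholders, not the paper's configuration.

```python
import timm
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Simple stand-in for the paper's preprocessing: resize and convert to tensors.
tfm = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("herb_images/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Pre-trained Inception-v4 backbone with a new 4-class head.
model = timm.create_model("inception_v4", pretrained=True, num_classes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```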


Shadow Removal based on the Deep Neural Network Using Self Attention Distillation (자기 주의 증류를 이용한 심층 신경망 기반의 그림자 제거)

  • Kim, Jinhee;Kim, Wonjun
    • Journal of Broadcast Engineering / v.26 no.4 / pp.419-428 / 2021
  • Shadow removal plays a key role in pre-processing for image processing techniques such as object tracking and detection. With the advances of image recognition based on deep convolutional neural networks, research on shadow removal has been actively conducted. In this paper, we propose a novel method for shadow removal that utilizes self attention distillation to extract semantic features. The proposed method gradually refines the results of shadow detection, which are extracted from each layer of the proposed network, via top-down distillation. Specifically, the training procedure can be performed efficiently by learning the contextual information for shadow removal without shadow masks. Experimental results on various datasets show the effectiveness of the proposed method for shadow removal under real-world environments.
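
A rough sketch of the self attention distillation idea: spatial attention maps from deeper layers supervise those of shallower layers, so the network can learn contextual cues without extra mask supervision. The attention-map definition and layer pairing here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse channels into a spatial attention map and normalise it to [0, 1].
    att = feat.pow(2).sum(dim=1, keepdim=True)      # (B, 1, H, W)
    return att / (att.flatten(2).max(dim=2).values.view(-1, 1, 1, 1) + 1e-6)

def sad_loss(features):
    # `features` is a list of intermediate feature maps, ordered shallow -> deep.
    loss = 0.0
    for shallow, deep in zip(features[:-1], features[1:]):
        target = F.interpolate(attention_map(deep), size=shallow.shape[2:],
                               mode="bilinear", align_corners=False)
        # The shallower layer mimics the (detached) deeper attention map.
        loss = loss + F.mse_loss(attention_map(shallow), target.detach())
    return loss
```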

A Study on the Pipe Position Estimation in GPR Images Using Deep Learning Based Convolutional Neural Network (GPR 영상에서 딥러닝 기반 CNN을 이용한 배관 위치 추정 연구)

  • Chae, Jihun;Ko, Hyoung-yong;Lee, Byoung-gil;Kim, Namgi
    • Journal of Internet Computing and Services / v.20 no.4 / pp.39-46 / 2019
  • In recent years, it has become important to detect underground objects of various materials, including metals, such as locating sinkholes and pipes. For this reason, ground penetrating radar (GPR) technology is attracting attention in the field of underground detection. GPR irradiates radar waves to find the position of objects buried underground and expresses the waves reflected from those objects as images. However, it is not easy to interpret GPR images because the features reflected from different underground objects look similar to each other. To solve this problem, in this paper we estimate the pipe position in GPR images according to a threshold value using a convolutional neural network (CNN) model based on deep learning, which is widely used in the field of image recognition. Experimental results show that the pipe position is detected most reliably when the threshold value is 7 or 8.
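
A minimal CNN sketch for deciding whether a GPR image patch contains a pipe. The input size, layer widths, and binary patch-level formulation are assumptions for illustration; the paper's threshold-based position estimation is not reproduced here.

```python
import torch
import torch.nn as nn

class GPRPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # pipe / no pipe

    def forward(self, x):               # x: (batch, 1, 64, 64) grayscale patch
        x = self.features(x)
        return self.classifier(x.flatten(1))

scores = GPRPatchCNN()(torch.randn(4, 1, 64, 64))
print(scores.shape)  # torch.Size([4, 2])
```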

Video Stabilization Algorithm of Shaking image using Deep Learning (딥러닝을 활용한 흔들림 영상 안정화 알고리즘)

  • Lee, Kyung Min;Lin, Chi Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.1 / pp.145-152 / 2019
  • In this paper, we propose a video stabilization algorithm for shaky images using deep learning. Unlike existing 2D, 2.5D, and 3D based stabilization techniques, the proposed algorithm utilizes deep learning. It extracts and compares features of shaky images through a CNN and an LSTM network structure, and transforms the images in the reverse order of the movement size and direction of the feature points, based on the difference of feature points between the previous frame and the current frame. Feature extraction and comparison of each frame are implemented with the CNN and LSTM structure using TensorFlow, and image stabilization is implemented using the OpenCV open source library. Experimental results show that the proposed algorithm can stabilize camera shake in images shaking up, down, left, and right.
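
A classical OpenCV sketch of the compensation step: track feature points between consecutive frames and warp the current frame by the inverse of the estimated motion. The paper's CNN/LSTM feature comparison is replaced here by Lucas-Kanade optical flow purely for illustration; frame variables are assumed to be grayscale and BGR numpy arrays.

```python
import cv2

def stabilize_frame(prev_gray, curr_gray, curr_frame):
    # Detect corners in the previous frame and track them into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # Estimate the frame-to-frame motion and apply its inverse to compensate.
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    m_inv = cv2.invertAffineTransform(m)
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, m_inv, (w, h))
```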

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the lofty performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a region-based convolutional neural network and state-of-the-art detector). We broaden a semi-supervised learning method combined with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiment, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, performance increased to 95.6% accuracy, and in a complex environment, performance reached 81% accuracy. Our method reduces data-labeling time for the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
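
A sketch of the detection stage only: a pre-trained Faster R-CNN locates the persons, whose crops could then be fed to an interaction classifier. The interaction classifier and the active semi-supervised loop are omitted, and using torchvision's off-the-shelf detector is an assumption, not the paper's exact pipeline.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Newer torchvision versions prefer the `weights=` argument over `pretrained=True`.
detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def detect_persons(image, score_thresh=0.8):
    # image: float tensor (3, H, W) with values in [0, 1]
    with torch.no_grad():
        out = detector([image])[0]
    keep = (out["labels"] == 1) & (out["scores"] > score_thresh)  # COCO label 1 = person
    return out["boxes"][keep]  # (N, 4) person bounding boxes

boxes = detect_persons(torch.rand(3, 480, 640))
```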

Effect of deep-sea mineral water on growth performance, water intake, blood characteristics and serum immunoglobulins in the growing-finishing pigs

  • Lee, Sang-Hee;Park, Choon-Keun
    • Journal of Animal Science and Technology / v.63 no.5 / pp.998-1007 / 2021
  • Brine mineral water (BMW) is groundwater near the deep sea, and its mineral composition differs from that of deep-sea water because the components of BMW are derived from the unique geographical features surrounding it. Recently, BMW has attracted attention due to the unique health-related minerals it possesses; however, the influence of BMW on physiological function has not yet been determined in domestic animals. Therefore, this experiment investigated the influence of BMW on growth performance, water intake, blood properties, and serum immunoglobulin (Ig) levels in growing-finishing pigs. A total of 64 barrows (Landrace × Yorkshire × Duroc) with an average initial weight of 40.56 ± 0.17 kg were used in the experiment, and 0%, 2%, 3%, and 5% dilutions of BMW in freshwater were provided to the animals over 56 days. We found that the gain/feed ratio in the 3% BMW group was significantly higher than that in the 5% BMW group (p < 0.05). Water intake was significantly higher in the 5% BMW group than in the other groups (p < 0.05). Additionally, the concentrations of red blood cells (RBCs), hemoglobin (HGB), and hematocrit (HCT) were significantly higher in the 3% BMW group than in the control group. The level of high-density lipoprotein cholesterol was higher in the 3% BMW group than in the 5% BMW group (p < 0.05). Furthermore, serum IgG and IgM levels were significantly higher in the 2% and 3% BMW groups than in the control group (p < 0.05). These results suggest that a 3% dilution of BMW in the drinking water could improve the levels of RBCs and serum Igs in growing-finishing pigs.

Deep Learning-based Super Resolution Method Using Combination of Channel Attention and Spatial Attention (채널 강조와 공간 강조의 결합을 이용한 딥 러닝 기반의 초해상도 방법)

  • Lee, Dong-Woo;Lee, Sang-Hun;Han, Hyun Ho
    • Journal of the Korea Convergence Society / v.11 no.12 / pp.15-22 / 2020
  • In this paper, we propose a deep learning based super-resolution method that combines Channel Attention and Spatial Attention feature enhancement. In super-resolution processing, it is important to restore high-frequency components, such as texture, that change sharply relative to surrounding pixels. Existing CNN (Convolutional Neural Network) based super-resolution methods have difficulty training deep networks and lack emphasis on high-frequency components, resulting in blurry contours and distortion. To solve this problem, we used an emphasis block that combines Channel Attention and Spatial Attention with a Skip Connection, together with a Residual Block. The emphasized feature map extracted in this way was expanded through Sub-pixel Convolution to obtain the super-resolution image. As a result, PSNR improved by about 5% and SSIM by about 3% compared with the conventional SRCNN, and by about 2% and 1%, respectively, compared with VDSR.
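
A compact sketch of an emphasis block that combines channel attention and spatial attention (CBAM-style) followed by sub-pixel upscaling. Channel counts, the reduction ratio, and the surrounding residual structure are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)       # emphasise informative channels
        return x * self.spatial(x)    # emphasise informative locations

# Sub-pixel convolution: 64 -> 3*(2^2) channels, then rearrange to 2x spatial size.
upscale = nn.Sequential(nn.Conv2d(64, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
feat = ChannelSpatialAttention()(torch.randn(1, 64, 32, 32))
sr = upscale(feat)                    # (1, 3, 64, 64)
```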

A Study on Residual U-Net for Semantic Segmentation based on Deep Learning (딥러닝 기반의 Semantic Segmentation을 위한 Residual U-Net에 관한 연구)

  • Shin, Seokyong;Lee, SangHun;Han, HyunHo
    • Journal of Digital Convergence / v.19 no.6 / pp.251-258 / 2021
  • In this paper, we propose an encoder-decoder model utilizing residual learning to improve the accuracy of the U-Net-based semantic segmentation method. U-Net is a deep learning-based semantic segmentation method mainly used in applications such as autonomous vehicles and medical image analysis. The conventional U-Net loses features during the compression process because of the shallow structure of its encoder. This loss of features causes a lack of the context information necessary for classifying objects and reduces segmentation accuracy. To improve this, the proposed method efficiently extracts context information through an encoder that uses residual learning, which is effective in preventing the feature loss and gradient vanishing problems of the conventional U-Net. Furthermore, we reduce the number of down-sampling operations in the encoder to reduce the loss of spatial information contained in the feature maps. In experiments on the Cityscapes dataset, the proposed method showed a segmentation result improved by about 12% compared to the conventional U-Net.
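
A minimal residual block of the kind that could replace U-Net's plain encoder convolutions; the channel sizes and the 1x1 projection shortcut are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

out = ResidualBlock(64, 128)(torch.randn(1, 64, 56, 56))
print(out.shape)  # torch.Size([1, 128, 56, 56])
```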

Optimization of 1D CNN Model Factors for ECG Signal Classification

  • Lee, Hyun-Ji;Kang, Hyeon-Ah;Lee, Seung-Hyun;Lee, Chang-Hyun;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.29-36 / 2021
  • In this paper, we classify ECG signal data for mobile devices using deep learning models. To classify abnormal heartbeats with high accuracy, three factors of the deep learning model are selected, and classification accuracy is compared as the conditions of these factors change. We apply a CNN model that can extract features of ECG data by itself and compare the performance of a total of 48 combinations of model depth, optimization method, and activation function. Deriving the combination of conditions with the highest accuracy, we obtained the highest classification accuracy of 97.88% when we applied 19 convolutional layers, the SGD optimizer, and the Mish activation function. In this experiment, we confirmed the suitability of feature extraction and abnormal beat detection for 1-channel ECG signals using a CNN.
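
A shallow 1D-CNN sketch for single-channel ECG beat classification with the Mish activation and SGD, echoing the factors compared above; the depth shown (far fewer than 19 convolutional layers), window length, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.Mish(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.Mish(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, samples)
        return self.fc(self.features(x).squeeze(-1))

model = ECG1DCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
logits = model(torch.randn(8, 1, 360))   # e.g. one-second windows at 360 Hz
```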