• Title/Summary/Keyword: 컨볼루션 혼합 (convolutive mixing)

Blind Noise Separation Method of Convolutive Mixed Signals (컨볼루션 혼합신호의 암묵 잡음분리방법)

  • Lee, Haeng-Woo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.3 / pp.409-416 / 2022
  • This paper addresses a blind noise separation method for time-delayed convolutive mixed signals. Since the mixing model of acoustic signals in a closed space is multi-channel, a convolutive blind signal separation method is applied, and time-delayed data samples of the two microphone input signals are used. For signal separation, the mixing coefficients are estimated with an inverse model rather than computing the separation coefficients directly, and the coefficient update is performed by iterative calculations based on second-order statistical properties to estimate the speech signal. Extensive simulations were performed to verify the performance of the proposed blind signal separation. The results show that noise separation with this method operates stably regardless of the convolutive mixing, and the PESQ score is improved by 0.3 points compared to a conventional adaptive FIR filter structure. (An illustrative sketch of this feedback separation structure is given after this entry.)

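The feedback (inverse-model) structure described above can be illustrated with a small sketch. The code below is a generic two-channel adaptive-decorrelation separator, not the authors' exact algorithm: the mixing filters `h12` and `h21`, the tap count, and the step size `mu` are illustrative assumptions, and the cross-filters are adapted so that the two outputs become mutually decorrelated (a second-order-statistics criterion).

```python
import numpy as np

def convolutive_mix(s1, s2, h12, h21):
    """Two-channel convolutive mixing: each microphone picks up its own source
    plus a filtered, time-delayed version of the other source."""
    x1 = s1 + np.convolve(s2, h12)[:len(s1)]
    x2 = s2 + np.convolve(s1, h21)[:len(s2)]
    return x1, x2

def separate(x1, x2, taps=8, mu=1e-3, sweeps=5):
    """Feedback (inverse-model) separation: instead of estimating separation
    coefficients directly, cross-filters w12, w21 model the mixing paths and
    are adapted so that the two outputs become decorrelated."""
    w12 = np.zeros(taps)   # models the path from source 2 into microphone 1
    w21 = np.zeros(taps)   # models the path from source 1 into microphone 2
    n_samples = len(x1)
    y1 = np.zeros(n_samples)
    y2 = np.zeros(n_samples)
    for _ in range(sweeps):
        for n in range(n_samples):
            # most recent past output samples seen by the cross-filters
            p2 = y2[max(0, n - taps):n][::-1]
            p1 = y1[max(0, n - taps):n][::-1]
            y1[n] = x1[n] - np.dot(w12[:len(p2)], p2)
            y2[n] = x2[n] - np.dot(w21[:len(p1)], p1)
            # stochastic decorrelation update: push E[y1(n) y2(n-k)] toward zero
            w12[:len(p2)] += mu * y1[n] * p2
            w21[:len(p1)] += mu * y2[n] * p1
    return y1, y2

# tiny usage example with assumed cross-coupling filters
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(4000), rng.standard_normal(4000)
x1, x2 = convolutive_mix(s1, s2, h12=[0.0, 0.5, 0.2], h21=[0.0, 0.4, 0.3])
y1, y2 = separate(x1, x2)
```
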
An Algorithm for Computing the Weight Enumerating Function of Concatenated Convolutional Codes (연쇄 컨볼루션 부호의 가중치 열거함수 계산 알고리듬)

  • 강성진;권성락;이영조;강창언
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.7A / pp.1080-1089 / 1999
  • The union upper bounds on the bit error probability of maximum-likelihood (ML) soft decoding of parallel concatenated convolutional codes (PCCC) and serially concatenated convolutional codes (SCCC) can be evaluated through the weight enumerating function (WEF). These union upper bounds become lower bounds on the BER achievable when iterative decoding is used. In this paper, to compute the WEF, an efficient error-event search algorithm that combines a stack algorithm with a bidirectional search algorithm is proposed. Computer simulations show that the union bounds obtained with the proposed algorithm become lower bounds on the BER of concatenated convolutional codes with iterative decoding. (A minimal error-event search sketch is given after this entry.)

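As a rough illustration of error-event search for weight enumeration, the sketch below counts error events by output Hamming weight for a single rate-1/2, memory-2 convolutional code (generators 7, 5 in octal) using a plain stack-based search. It is not the paper's combined stack/bidirectional algorithm for concatenated codes; the generator choice and the weight cap are assumptions.

```python
# Generators of a rate-1/2, memory-2 convolutional code: g1 = 111, g2 = 101 (octal 7, 5)
G = [(1, 1, 1), (1, 0, 1)]

def step(state, bit):
    """Advance the shift register by one input bit; return (next_state, output_weight)."""
    reg = (bit,) + state                                       # new input followed by memory
    out_w = sum(sum(g[i] * reg[i] for i in range(3)) % 2 for g in G)
    return reg[:-1], out_w                                      # drop the oldest bit

def error_event_spectrum(max_weight):
    """Count error events (paths leaving and re-entering the all-zero state)
    by output Hamming weight, using a stack-based search with weight pruning."""
    spectrum = {}
    s0, w0 = step((0, 0), 1)           # every event starts by diverging with input '1'
    stack = [(s0, w0)]
    while stack:
        state, w = stack.pop()
        if w > max_weight:
            continue                   # prune paths that are already too heavy
        if state == (0, 0):
            spectrum[w] = spectrum.get(w, 0) + 1   # the path has re-merged
            continue
        for bit in (0, 1):
            ns, dw = step(state, bit)
            stack.append((ns, w + dw))
    return dict(sorted(spectrum.items()))

# For this code the free distance is 5; the expected low-weight spectrum is 1, 2, 4, 8, ...
print(error_event_spectrum(10))
```
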

COVID-19 Lung CT Image Recognition (COVID-19 폐 CT 이미지 인식)

  • Su, Jingjie;Kim, Kang-Chul
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.3 / pp.529-536 / 2022
  • Over the past two years, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has affected more and more people. This paper proposes a novel U-Net convolutional neural network to classify and segment COVID-19 lung CT images; it contains a Sub Coding Block (SCB), Atrous Spatial Pyramid Pooling (ASPP), and an Attention Gate (AG). Three other models, FCN, U-Net, and U-Net-SCB, are designed for comparison, and the best optimizer and atrous rate are chosen for the proposed model. The simulation results show that the proposed U-Net-MMFE achieves the best Dice segmentation coefficient of 94.79% on the COVID-19 CT scan image dataset, compared with the other segmentation models, when the atrous rate is 12 and the optimizer is Adam. (Illustrative ASPP and attention-gate modules are sketched after this entry.)

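The ASPP and attention-gate components named in the abstract can be sketched as standalone PyTorch modules. The sketch below shows generic versions of those two blocks only (the Sub Coding Block and the full U-Net-MMFE wiring are not specified in the abstract); channel sizes and dilation rates are illustrative, with rate 12 included as in the reported best setting.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions at
    several rates, concatenated and fused with a 1x1 convolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class AttentionGate(nn.Module):
    """Attention gate on a skip connection: a gating (decoder) feature map
    weights the encoder feature map before it is passed on.
    The gate is assumed to be upsampled to the encoder's spatial size."""
    def __init__(self, enc_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(enc_ch, inter_ch, 1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, enc, gate):
        a = torch.sigmoid(self.psi(torch.relu(self.theta(enc) + self.phi(gate))))
        return enc * a
```
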
Hybrid Word-Character Neural Network Model for the Improvement of Document Classification (문서 분류의 개선을 위한 단어-문자 혼합 신경망 모델)

  • Hong, Daeyoung;Shim, Kyuseok
    • Journal of KIISE / v.44 no.12 / pp.1290-1295 / 2017
  • Document classification, the task of assigning a category to each document based on its text, is one of the fundamental problems in natural language processing, with applications such as topic classification and sentiment classification. Neural network models for document classification can be divided into two categories: word-level models and character-level models, which treat words and characters as their basic units, respectively. In this study, we propose a neural network model that combines character-level and word-level models to improve document classification performance. The proposed model extracts the feature vector of each word by combining information obtained from a word embedding matrix with information encoded by a character-level neural network. Based on these word feature vectors, the model classifies documents with a hierarchical structure in which recurrent neural networks with attention mechanisms are used at both the word and sentence levels. Experiments on real-life datasets demonstrate the effectiveness of the proposed model. (A minimal sketch of such a hybrid word-character hierarchical architecture is given after this entry.)

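A minimal sketch of such a hybrid word-character hierarchical model is given below, assuming PyTorch: each word vector is the concatenation of a word-embedding lookup and a character-level CNN feature, and word- and sentence-level GRUs with additive attention build the document representation. The dimensions, the char-CNN kernel size, and the choice of GRUs are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Character-level CNN: embeds the characters of one word and max-pools
    over positions to get a fixed-size character-derived word feature."""
    def __init__(self, n_chars, char_dim=16, out_dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel_size=3, padding=1)

    def forward(self, chars):                 # chars: (words, max_chars)
        h = self.emb(chars).transpose(1, 2)   # (words, char_dim, max_chars)
        return torch.relu(self.conv(h)).max(dim=2).values  # (words, out_dim)

class Attention(nn.Module):
    """Additive attention that pools a sequence of vectors into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, h):                     # h: (seq, dim)
        a = torch.softmax(self.score(h), dim=0)
        return (a * h).sum(dim=0)

class HybridHAN(nn.Module):
    """Word- and sentence-level GRUs with attention; each word vector is the
    concatenation of a word embedding and a char-CNN feature."""
    def __init__(self, n_words, n_chars, n_classes, word_dim=100, char_out=32, hid=64):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        self.char_enc = CharEncoder(n_chars, out_dim=char_out)
        self.word_rnn = nn.GRU(word_dim + char_out, hid, bidirectional=True)
        self.word_att = Attention(2 * hid)
        self.sent_rnn = nn.GRU(2 * hid, hid, bidirectional=True)
        self.sent_att = Attention(2 * hid)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, doc_words, doc_chars):  # lists of per-sentence tensors
        sent_vecs = []
        for words, chars in zip(doc_words, doc_chars):
            w = torch.cat([self.word_emb(words), self.char_enc(chars)], dim=1)
            h, _ = self.word_rnn(w.unsqueeze(1))       # (words, 1, 2*hid)
            sent_vecs.append(self.word_att(h.squeeze(1)))
        s = torch.stack(sent_vecs)                     # (sentences, 2*hid)
        hs, _ = self.sent_rnn(s.unsqueeze(1))
        return self.out(self.sent_att(hs.squeeze(1)))  # class logits
```
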
BERT & Hierarchical Graph Convolution Neural Network based Emotion Analysis Model (BERT 및 계층 그래프 컨볼루션 신경망 기반 감성분석 모델)

  • Zhang, Junjun;Shin, Jongho;An, Suvin;Park, Taeyoung;Noh, Giseop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.34-36 / 2022
  • Existing text sentiment analysis models usually model the entire text directly as a whole and give little consideration to the hierarchical relationships between its contents. In practice, however, many texts mix multiple emotions. If semantic modeling is performed directly on the whole text, judging the sentiment may become harder for the model, making it difficult to apply to the classification of mixed-sentiment sentences. Therefore, this paper proposes BHGCN, a sentiment analysis model that considers the text hierarchy. In this model, the hidden-state output of each BERT layer is used as a node, and directed connections are made between upper and lower layers to construct a graph network with a semantic hierarchy. The model attends not only to layer-by-layer semantics but also to hierarchical relationships, which makes it suitable for mixed-sentiment classification tasks. Comparative experimental results show that the BHGCN model exhibits clear competitive advantages. (A minimal layer-graph sketch is given after this entry.)

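A minimal sketch of the layer-graph idea is shown below, assuming the HuggingFace transformers package is available: the [CLS] hidden state of every BERT layer becomes a graph node, adjacent layers are connected by directed edges, and a single graph-convolution step mixes the node features before classification. The edge direction, one-layer GCN, and mean pooling are simplifying assumptions rather than the BHGCN specification.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class LayerGraphGCN(nn.Module):
    """Builds a small directed graph whose nodes are the [CLS] hidden states of
    each BERT layer (edges between adjacent layers) and applies a simple graph
    convolution before classification."""
    def __init__(self, n_classes, hidden=768, gcn_dim=256,
                 bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name, output_hidden_states=True)
        self.gcn_w = nn.Linear(hidden, gcn_dim)
        self.cls = nn.Linear(gcn_dim, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of (n_layers + 1) tensors, each (batch, seq, hidden)
        nodes = torch.stack([h[:, 0] for h in out.hidden_states], dim=1)  # (B, L, H)
        n = nodes.size(1)
        adj = torch.eye(n, device=nodes.device)
        adj[torch.arange(n - 1), torch.arange(1, n)] = 1.0   # connect layer i and i+1
        adj = adj / adj.sum(dim=1, keepdim=True)             # row-normalise
        h = torch.relu(adj @ self.gcn_w(nodes))               # (B, L, gcn_dim)
        return self.cls(h.mean(dim=1))                        # pool nodes, classify
```
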

Automated Classification of Ground-glass Nodules using GGN-Net based on Intensity, Texture, and Shape-Enhanced Images in Chest CT Images (흉부 CT 영상에서 결절의 밝기값, 재질 및 형상 증강 영상 기반의 GGN-Net을 이용한 간유리음영 결절 자동 분류)

  • Byun, So Hyun;Jung, Julip;Hong, Helen;Song, Yong Sub;Kim, Hyungjin;Park, Chang Min
    • Journal of the Korea Computer Graphics Society / v.24 no.5 / pp.31-39 / 2018
  • In this paper, we propose an automated method for ground-glass nodule (GGN) classification using GGN-Net, based on intensity-, texture-, and shape-enhanced images in chest CT. First, we propose using images that enhance intensity, texture, and shape information so that the input includes the presence and size information of the solid component in the GGN. Second, we propose GGN-Net, which integrates and trains feature maps obtained from the various input images through multiple convolution modules inside the network. To evaluate the classification accuracy of the proposed method, we used 90 pure GGNs, 38 part-solid GGNs with solid components smaller than 5 mm, and 23 part-solid GGNs with solid components larger than 5 mm. To evaluate the effect of the input images, various input image sets were composed and their classification results were compared. The proposed method, using the combination of intensity-, texture-, and shape-enhanced images, showed the best result with 82.75% accuracy. (A multi-input branch sketch is given after this entry.)

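The multi-input idea can be sketched as parallel convolution branches whose feature maps are concatenated before a shared classification head, as below. The branch depths, channel counts, and three-class output (pure, part-solid with a small solid component, part-solid with a large solid component) follow the abstract only loosely; the exact GGN-Net layout is an assumption.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 convolution, batch norm, ReLU, and 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        nn.ReLU(), nn.MaxPool2d(2))

class MultiInputGGN(nn.Module):
    """Three parallel convolution branches (intensity, texture-enhanced, and
    shape-enhanced images); their feature maps are concatenated and passed to
    a shared classifier head."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(conv_block(1, 16), conv_block(16, 32)) for _ in range(3)])
        self.head = nn.Sequential(conv_block(96, 64),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, n_classes))

    def forward(self, intensity, texture, shape):
        feats = [b(x) for b, x in zip(self.branches, (intensity, texture, shape))]
        return self.head(torch.cat(feats, dim=1))   # concatenate feature maps
```
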
A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing a user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks that recognize interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data in a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a redefinition of part of the user's interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced that consists of time synchronization of multimodal data from different physical sensors, data normalization, and sequence-data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors, normalization is performed for each x, y, z axis value of the sensor data, and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, to preserve the temporal information of the sequence data. Next, LSTM recurrent networks, consisting of two layers with 128 cells each, receive the feature maps and learn long-term dependencies from them. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the model weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate is 0.001 and decreases exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. (A sketch of this CNN-LSTM classifier and its training setup is given after this entry.)
    Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, transfer learning methods will be studied that allow models trained on the training data to be transferred to evaluation data that follows a different distribution, with the goal of obtaining a model whose recognition performance is robust to changes in the data that were not considered during training.
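
A sketch of the described classifier and training setup, assuming PyTorch, is given below: three 1-D convolution layers without pooling, a two-layer LSTM with 128 cells, dropout on the LSTM inputs, a cross-entropy (softmax) loss, normal(0, 0.1) weight initialization, Adam with learning rate 0.001, and an exponential decay of 0.99 per epoch, as stated in the abstract. The convolution kernel sizes, channel counts, input-channel count, and dropout rate are not given there and are assumptions.

```python
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    """1-D CNN (three conv layers, no pooling, to keep temporal resolution)
    followed by a two-layer LSTM with 128 cells and a softmax classifier."""
    def __init__(self, n_channels=9, n_classes=2):   # e.g. 3 sensors x 3 axes
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU())
        self.drop = nn.Dropout(0.5)                   # dropout on the LSTM inputs
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                             # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)               # (batch, time, 64)
        out, _ = self.lstm(self.drop(h))
        return self.fc(out[:, -1])                    # logits; softmax lives in the loss

model = AccompanyNet()
for p in model.parameters():
    nn.init.normal_(p, mean=0.0, std=0.1)             # normal(0, 0.1) init as stated
opt = torch.optim.Adam(model.parameters(), lr=1e-3)    # Adam, lr 0.001, batch size 128
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)  # 0.99 decay per epoch
loss_fn = nn.CrossEntropyLoss()                         # cross-entropy loss
```
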