• Title/Abstract/Keyword: Deep Neural Networks (DNNs)

34 search results (processing time: 0.024 s)

Fast speaker adaptation using extended diagonal linear transformation for deep neural networks

  • Kim, Donghyun;Kim, Sanghun
    • ETRI Journal / Vol. 41, No. 1 / pp. 109-116 / 2019
  • This paper explores new techniques based on a hidden-layer linear transformation for fast speaker adaptation in deep neural networks (DNNs). Conventional methods that use affine transformations are ineffective because they require a relatively large number of adaptation parameters. Methods that employ singular-value decomposition (SVD) are used instead because they effectively reduce the number of adaptive parameters; however, matrix decomposition is computationally expensive for online services. We propose an extended diagonal linear transformation method that minimizes the number of adaptation parameters without SVD, improving performance on tasks that require smaller degrees of adaptation. In Korean large vocabulary continuous speech recognition (LVCSR) tasks, the proposed method shows significant improvements, with error-reduction rates of 8.4% and 17.1% for five and 50 conversational adaptation sentences, respectively. Compared with SVD-based adaptation methods, it achieves higher recognition performance with fewer parameters.
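The parameter saving behind this idea can be illustrated with a minimal sketch. The paper's exact "extended" diagonal form is not specified in the abstract, so the code below shows only the plain diagonal case: adapting a hidden layer with a per-unit scale and bias instead of a full affine matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Speaker-independent hidden activation (stand-in for a DNN hidden layer).
h = rng.standard_normal(dim)

# Full affine adaptation: dim*dim + dim speaker-specific parameters.
W_full = rng.standard_normal((dim, dim))
b_full = rng.standard_normal(dim)
full_params = W_full.size + b_full.size

# Diagonal linear adaptation: only a per-unit scale and bias, 2*dim parameters.
scale = rng.standard_normal(dim)
bias = rng.standard_normal(dim)
diag_params = scale.size + bias.size

# Elementwise transform: no matrix decomposition needed at adaptation time.
h_adapted = scale * h + bias

print(full_params, diag_params)  # 72 vs 16 adaptation parameters
```

Even at this toy size the diagonal variant needs 16 parameters instead of 72; the gap grows quadratically with layer width, which is what makes per-speaker adaptation with a few sentences feasible.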

Faults detection and identification for gas turbine using DNN and LLM

  • Oliaee, Seyyed Mohammad Emad;Teshnehlab, Mohammad;Shoorehdeli, Mahdi Aliyari
    • Smart Structures and Systems / Vol. 23, No. 4 / pp. 393-403 / 2019
  • Applying more features gives better modeling accuracy; however, increasing the number of inputs causes the curse of dimensionality. In this paper, a new structure is proposed for fault detection and identification (FDI) of high-dimensional systems. This structure consists of two parts. The first part uses Auto-Encoders (AE) as Deep Neural Networks (DNNs) to perform feature engineering and summarize the features. The second part consists of Local Model Networks (LMNs) trained with the LOcally LInear MOdel Tree (LOLIMOT) algorithm to model the outputs (multiple models). Fault detection is based on these multiple models: the residuals generated by comparing the system output with the multiple-model outputs are used to raise fault alarms. To show its effectiveness, the proposed structure is tested on a single-shaft industrial gas turbine prototype model. Finally, a brief comparison between the simulated results and several related works is presented, illustrating the good performance of the proposed structure.
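The residual-based alarm logic described above can be sketched in a few lines. The threshold value and the signals here are illustrative, not from the paper; in the actual system the model output would come from the trained LMN/LOLIMOT models.

```python
import numpy as np

def detect_fault(y_system, y_model, threshold):
    """Flag time steps where the residual between the plant output and
    the model prediction exceeds a fixed threshold."""
    residual = np.abs(np.asarray(y_system) - np.asarray(y_model))
    return residual > threshold

# Healthy behaviour: the model tracks the system closely.
y_model = [1.0, 1.1, 1.2, 1.3]
y_system = [1.02, 1.08, 1.9, 1.31]  # third sample drifts (simulated fault)

alarms = detect_fault(y_system, y_model, threshold=0.2)
print(alarms.tolist())  # [False, False, True, False]
```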

심층 신경망 기반 딥 드로잉 공정 블랭크 두께 변화율 예측 (Prediction of Blank Thickness Variation in a Deep Drawing Process Using Deep Neural Network)

  • 박근태;박지우;곽민준;강범수
    • 소성∙가공 (Transactions of Materials Processing) / Vol. 29, No. 2 / pp. 89-96 / 2020
  • The finite element method has been widely applied to sheet metal forming processes. However, it is computationally expensive and time consuming. To tackle this problem, surrogate modeling methods have been proposed. The artificial neural network (ANN) is one such surrogate model and has been well studied over the past decades. However, when it comes to ANNs with two or more hidden layers, so-called deep neural networks (DNNs), there is a distinct lack of research. We chose a DNN as our surrogate model to predict the behavior of sheet metal in the deep drawing process. Thickness variation is selected as the output of the DNN in order to evaluate workpiece feasibility. The input variables of the DNN are the die radius, die corner radius, and blank holder force. Finite element analysis was conducted to obtain data for surrogate model construction and testing. Sampling points were determined by full factorial, Latin hypercube, and Monte Carlo methods. We investigated the performance of the DNN according to its structure (number of nodes and number of layers) and compared it with a radial basis function surrogate model using various sampling methods and sample sizes. The results show that our DNN can be used as an efficient surrogate model for the deep drawing process.
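Of the three sampling schemes mentioned, full factorial sampling is the simplest to sketch. The variable names follow the abstract, but the level values below are hypothetical; each sampled point would then be run through finite element analysis to produce one DNN training sample.

```python
from itertools import product

# Hypothetical design-variable levels (the actual values used in the
# paper are not given in the abstract).
die_radius = [4.0, 6.0, 8.0]        # mm
corner_radius = [2.0, 3.0, 4.0]     # mm
holder_force = [10.0, 20.0, 30.0]   # kN

# Full factorial design: every combination of the three variables.
samples = list(product(die_radius, corner_radius, holder_force))
print(len(samples))  # 27 = 3 levels ** 3 variables
```

Full factorial grows exponentially with the number of variables, which is why Latin hypercube and Monte Carlo sampling are attractive alternatives when the design space is larger.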

딥 뉴럴 네트워크 지원을 위한 뉴로모픽 소프트웨어 플랫폼 기술 동향 (Trends in Neuromorphic Software Platform for Deep Neural Network)

  • 유미선;하영목;김태호
    • 전자통신동향분석 (Electronics and Telecommunications Trends) / Vol. 33, No. 4 / pp. 14-22 / 2018
  • Deep neural networks (DNNs) are widely used in various domains such as speech and image recognition. DNN software frameworks such as TensorFlow and Caffe have contributed to the popularity of DNNs because of their easy programming environments. In addition, many companies are developing neuromorphic processing units (NPUs), as well as Tensor Processing Units (TPUs) and Graphics Processing Units (GPUs), to improve the performance of DNN processing. However, there is a large gap between NPUs and DNN software frameworks due to the lack of framework support for various NPUs. A bridge for this gap is a DNN software platform comprising DNN-optimized compilers and DNN libraries. In this paper, we review the technical trends of DNN software platforms.

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering / Vol. 2, No. 2 / pp. 130-135 / 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that autonomous systems, such as robots, can perform human-like behaviors and operations. Unlike recognition tasks, real-time operation is essential in action control, and remote learning on a server over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested with the proposed processor. It supports variable weight bit precision from 1 b to 16 b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1-b and 16-b weight precision, respectively. In addition, the near-zero skipper removes 36% of MAC operations and achieves 28% lower energy consumption in facial emotion recognition tasks. Implemented in a 65-nm CMOS process, the proposed processor occupies a 1784 × 1784 µm² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps in facial emotion recognition tasks.
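The near-zero skipping idea can be illustrated in software, assuming the skipper simply suppresses multiply-accumulate operations whose weight magnitude falls below a small threshold. The `eps` value and signals below are illustrative; in the processor this happens in hardware, per MAC lane.

```python
def mac_with_near_zero_skip(inputs, weights, eps=1e-3):
    """Multiply-accumulate that skips operands whose weight magnitude
    is below eps; returns the sum and the number of MACs skipped."""
    acc, skipped = 0.0, 0
    for x, w in zip(inputs, weights):
        if abs(w) < eps:
            skipped += 1      # near-zero weight: no multiply issued
            continue
        acc += x * w
    return acc, skipped

acc, skipped = mac_with_near_zero_skip([1.0, 2.0, 3.0, 4.0],
                                       [0.5, 0.0000001, -0.25, 0.0])
print(acc, skipped)  # -0.25, with 2 of 4 MACs skipped
```

Because DNN weights after training tend to cluster around zero, skipping such operands trades a negligible accuracy loss for a substantial reduction in switching energy.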

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System / Vol. 9, No. 3 / pp. 177-182 / 2022
  • A deep neural network (DNN) includes variables whose values keep changing during training until the network converges. These variables are the coefficients of a polynomial expression related to the feature extraction process. In general, DNNs work in multiple 'dimensions' depending on the number of channels and batches used for training. However, after feature extraction and before the SoftMax (or other) classifier, the features are converted from N dimensions to a single vector, where 'N' represents the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D feature is the subject of our analysis. We use the FCL, so the trained weights of this FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. Each model's weights are used both for fine-tuning (with all trained weights initially transferred) and for training from scratch (with no weights transferred). The comparison is then made by plotting the feature distributions and the final FCL weights.
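One plausible reading of "weight-class correlation" is correlating the FCL weight vectors of different classes, since the FCL weight matrix has one row per class. The sketch below assumes that reading; random weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained fully connected layer: rows are classes,
# columns are the flattened feature channels feeding the classifier.
n_classes, n_features = 3, 16
fc_weights = rng.standard_normal((n_classes, n_features))

# Weight-class correlation: how similar are the weight vectors that
# different classes attach to the same features?
corr = np.corrcoef(fc_weights)
print(corr.shape)  # (3, 3) symmetric matrix with a unit diagonal
```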

Multi-band Approach to Deep Learning-Based Artificial Stereo Extension

  • Jeon, Kwang Myung;Park, Su Yeon;Chun, Chan Jun;Park, Nam In;Kim, Hong Kook
    • ETRI Journal / Vol. 39, No. 3 / pp. 398-405 / 2017
  • In this paper, an artificial stereo extension method that creates stereophonic sound from a mono sound source is proposed. The proposed method first trains deep neural networks (DNNs) that model the nonlinear relationship between the dominant and residual signals of the stereo channels. In the training stage, the band-wise log spectral magnitude and unwrapped phase of both the dominant and residual signals are used to model the nonlinearities of each sub-band through a deep architecture. Stereo extension is then conducted by estimating the residual signal that corresponds to the input mono-channel signal with the trained DNN model in the sub-band domain. The performance of the proposed method was evaluated using a log spectral distortion (LSD) measure and a multiple stimuli with hidden reference and anchor (MUSHRA) test. The results showed that the proposed method provides a lower LSD and a higher MUSHRA score than conventional methods that use hidden Markov models or full-band DNN processing.
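The role of the estimated residual can be shown with a simplified time-domain sketch. This assumes a mid/side-style reconstruction (left = dominant + residual, right = dominant − residual), which is a common scheme but not stated in the abstract; the actual method works per sub-band on log spectra, and the lambda below merely stands in for the trained DNN.

```python
import numpy as np

def extend_to_stereo(mono, estimate_residual):
    """Create a stereo pair from a mono (dominant) signal by adding and
    subtracting an estimated residual signal."""
    residual = estimate_residual(mono)
    left = mono + residual
    right = mono - residual
    return left, right

mono = np.array([0.5, -0.2, 0.1])
# Toy residual estimator (a real system would use the trained DNN).
left, right = extend_to_stereo(mono, lambda m: 0.1 * m)

# The mid (average) channel recovers the original mono signal exactly.
print(np.allclose((left + right) / 2, mono))
```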

Cycle-accurate NPU 시뮬레이터 및 데이터 접근 방식에 따른 NPU 성능평가 (Cycle-accurate NPU Simulator and Performance Evaluation According to Data Access Strategies)

  • 권구윤;박상우;서태원
    • 대한임베디드공학회논문지 (IEMEK Journal of Embedded Systems and Applications) / Vol. 17, No. 4 / pp. 217-228 / 2022
  • Currently, there are increasing demands for applying deep neural networks (DNNs) in embedded domains such as classification and object detection. DNN processing in the embedded domain often requires custom hardware, such as an NPU, for acceleration due to constraints on power, performance, and area. Processing DNN models requires a large amount of data, and its seamless transfer to the NPU is crucial for performance. In this paper, we developed a cycle-accurate NPU simulator to evaluate diverse NPU microarchitectures. In addition, we propose a novel technique for reducing the number of memory accesses when processing the convolutional layers of convolutional neural networks (CNNs) on the NPU. The main idea is to reuse data with memory interleaving, which recycles the data overlapping between the previous and current input windows. Data memory interleaving makes it possible to quickly read consecutive data at unaligned locations. We implemented the proposed technique in the cycle-accurate NPU simulator and measured the performance with LeNet-5, VGGNet-16, and ResNet-50. The experiments show up to a 2.08x speedup in processing one convolutional layer compared to the baseline.
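The window-overlap reuse at the heart of this technique can be shown with a 1-D sketch (real convolutions slide 2-D windows, and the hardware fetches via interleaved memory banks, but the counting argument is the same): each new window shares k−1 elements with the previous one, so only one new element needs to be read.

```python
def sliding_windows_with_reuse(row, k):
    """Yield k-wide input windows along one row, counting how many
    elements each window fetches from memory when the overlap with the
    previous window is reused instead of re-read."""
    window, fetches = list(row[:k]), k     # first window: k reads
    yield list(window), fetches
    for x in row[k:]:
        window = window[1:] + [x]          # reuse k-1 overlapping values
        yield list(window), 1              # only one new element fetched

row = [1, 2, 3, 4, 5]
total = sum(f for _, f in sliding_windows_with_reuse(row, 3))
print(total)  # 5 fetches, versus 9 when every window is read in full
```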

Anomaly Sewing Pattern Detection for AIoT System using Deep Learning and Decision Tree

  • Nguyen Quoc Toan;Seongwon Cho
    • 스마트미디어저널 (Smart Media Journal) / Vol. 13, No. 2 / pp. 85-94 / 2024
  • Artificial Intelligence of Things (AIoT), which combines AI and the Internet of Things (IoT), has recently gained popularity. Deep neural networks (DNNs) have achieved great success in many applications. Nevertheless, deploying complex AI models on embedded boards may be challenging due to computational limitations or model complexity. This paper focuses on an AIoT-based system for smart sewing automation using edge devices. Our technique includes a detection model and a decision tree for a sufficient testing scenario. YOLOv5 serves as the basis of our defective-sewing-stitch detection model, which detects anomalies and classifies sewing patterns. In the experimental tests, the proposed approach achieved a perfect score, with an accuracy and F1-score of 1.0, a False Positive Rate (FPR) and False Negative Rate (FNR) of 0, a speed of 0.07 seconds, and a file size of 2.43 MB.
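The detector-plus-decision-tree pipeline can be sketched as a rule over detector outputs. The class names, confidence threshold, and rule structure below are illustrative stand-ins, not the paper's actual tree; in the real system the detections would come from the trained YOLOv5 model.

```python
def classify_stitch(detections):
    """Toy decision rule over detector outputs: each detection is a dict
    with a class label and a confidence score."""
    if any(d["label"] == "skipped_stitch" and d["conf"] > 0.5
           for d in detections):
        return "defective"      # a confident anomaly detection wins
    if len(detections) == 0:
        return "no_seam"        # nothing detected at all
    return "normal"

print(classify_stitch([{"label": "stitch", "conf": 0.9}]))          # normal
print(classify_stitch([{"label": "skipped_stitch", "conf": 0.8}]))  # defective
print(classify_stitch([]))                                          # no_seam
```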

다중 심층신경망을 이용한 심전도 파라미터의 획득 및 분류 (Acquisition and Classification of ECG Parameters with Multiple Deep Neural Networks)

  • 김지운;박성민;최성욱
    • 대한의용생체공학회지 (Journal of Biomedical Engineering Research) / Vol. 43, No. 6 / pp. 424-433 / 2022
  • As the proportion of non-contact telemedicine grows and the amount of electrocardiogram (ECG) data measured with portable ECG monitors increases, the demand for automatic algorithms that can precisely analyze vast amounts of ECG data is rising. Since the P, QRS, and T waves of the ECG have different shapes depending on the electrode locations or individual characteristics, and often have similar frequency components or amplitudes, it is difficult to distinguish them and measure each parameter. To measure the widths, intervals, and areas of the P, QRS, and T waves, a new algorithm is required that recognizes the start and end points of each wave and automatically measures the time differences and amplitudes between those points. In this study, the start and end points of the P, QRS, and T waves were measured using six deep neural networks (DNNs), one for each boundary. By combining the results of all the DNNs, 12 ECG parameters were obtained for each heartbeat. For the ECG waveforms of 10 subjects provided by PhysioNet, the 12 parameters were measured for each of 660 heartbeats; the parameters measured for each heartbeat represented the ECG characteristics well, making it possible to distinguish one subject's parameters from another's. When the ECG data of the 10 subjects were combined into one file and analyzed with the suggested algorithm, 10 types of ECG waveforms were observed; two types were observed simultaneously in 5 subjects, but no subject showed more than two types.
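Turning the six boundary detections into interval parameters is simple arithmetic, sketched below. The parameter names, the subset shown (five of the twelve), the sampling rate, and the sample indices are all illustrative assumptions; in the actual system the indices would come from the six trained DNNs.

```python
def ecg_parameters(points, fs=250):
    """Derive interval parameters (in ms) from the start/end sample
    indices produced by the per-wave boundary detectors."""
    ms = 1000.0 / fs  # milliseconds per sample
    return {
        "P_width":     (points["P_end"] - points["P_start"]) * ms,
        "QRS_width":   (points["QRS_end"] - points["QRS_start"]) * ms,
        "T_width":     (points["T_end"] - points["T_start"]) * ms,
        "PR_interval": (points["QRS_start"] - points["P_start"]) * ms,
        "QT_interval": (points["T_end"] - points["QRS_start"]) * ms,
    }

# Hypothetical boundary indices for one heartbeat at fs = 250 Hz.
pts = {"P_start": 10, "P_end": 35, "QRS_start": 50, "QRS_end": 75,
       "T_start": 110, "T_end": 160}
params = ecg_parameters(pts)
print(params["QRS_width"])  # 100.0 ms
```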