Title/Summary/Keyword: Optimization Learning Model Configuration

Speech Recognition Optimization Learning Model using HMM Feature Extraction in the Bhattacharyya Algorithm (바타차랴 알고리즘에서 HMM 특징 추출을 이용한 음성 인식 최적 학습 모델)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence / v.11 no.6 / pp.199-204 / 2013
  • A speech recognition system must build its learning models from inaccurate input speech, and similar phoneme models are easily confused during recognition, which lowers the recognition rate. Therefore, in this paper, we propose a method for configuring an optimal speech recognition learning model using the Bhattacharyya algorithm. Based on phoneme features, HMM feature extraction was applied to the phonemes in the training data, and similar learning models were resolved into exact learning models using the Bhattacharyya algorithm, which was then used to configure the optimal learning model. Recognition performance was evaluated, and the proposed system achieved a recognition rate of 98.7% in speech recognition.
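
As a rough illustration of the core measure, the sketch below computes the Bhattacharyya distance between two Gaussian state distributions, the standard way to quantify how confusable two HMM phoneme models are; the means, covariances, and threshold usage are illustrative, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): Bhattacharyya distance
# between two Gaussian state distributions, as commonly used to measure
# how confusable two HMM phoneme models are. All values are illustrative.
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

# Two phoneme models summarized by the Gaussian of one HMM state each.
mu_a, cov_a = np.array([1.0, 0.5]), np.eye(2)
mu_b, cov_b = np.array([1.2, 0.4]), 1.5 * np.eye(2)
d = bhattacharyya_gaussian(mu_a, cov_a, mu_b, cov_b)
# A small distance flags the two models as similar, i.e. candidates for
# the model refinement described in the abstract.
print(f"Bhattacharyya distance: {d:.4f}")
```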

Failure estimation of the composite laminates using machine learning techniques

  • Serban, Alexandru
    • Steel and Composite Structures / v.25 no.6 / pp.663-670 / 2017
  • The problem of layup optimization of composite laminates involves a very complex multidimensional solution space which is usually non-exhaustively explored using heuristic computational methods such as genetic algorithms (GA). To ensure convergence of the applied heuristic to the global optimum, a large number of layup configurations must be evaluated during the optimization process. As a consequence, the analysis of an individual layup configuration should be fast enough to keep the convergence time within an acceptable range. On the other hand, the mechanical behavior analysis of composite laminates for arbitrary geometry and boundary conditions is very involved and is performed by computationally expensive numerical tools such as finite element analysis (FEA). In this respect, some studies propose very fast FEA models for use in layup optimization. However, the lower bound on the execution time of FEA models is determined by the global linear system solve, which in some complex applications can be unacceptable. Moreover, in some situations it may be highly preferable to decrease the optimization time at the cost of a small reduction in analysis accuracy. In this paper we explore several machine learning techniques for estimating the failure of a layup configuration. The estimated response can be qualitative (the configuration fails or not) or quantitative (the value of the failure factor). The procedure consists of generating a population of random observations (configurations) spread across the solution space and evaluating them with an FEA model. The machine learning method is trained on this population, and the trained model is then used to estimate failure during the optimization process. The results obtained are very promising, as illustrated with an example where the misclassification rate of the qualitative response is smaller than 2%.
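
A minimal sketch of the qualitative branch described above: random layup configurations are labeled by a failure oracle and used to train a classifier that can stand in for FEA inside the optimizer. The `fea_fails` rule here is a placeholder for a real FEA call, and all sizes are illustrative.

```python
# Sketch of the surrogate-training procedure: sample random layups,
# label them with a (here faked) FEA oracle, train a classifier, and
# check the misclassification rate on held-out configurations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_plies, n_samples = 8, 2000
angles = rng.choice([0.0, 45.0, -45.0, 90.0], size=(n_samples, n_plies))

def fea_fails(layup):
    # Placeholder failure rule; a real study would call an FEA solver here.
    return float(np.abs(layup).mean() > 40.0)

y = np.array([fea_fails(l) for l in angles])
X_tr, X_te, y_tr, y_te = train_test_split(angles, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
misclassification = 1.0 - clf.score(X_te, y_te)
print(f"misclassification rate: {misclassification:.3f}")
# Inside a GA loop, clf.predict replaces the FEA call for candidate layups.
```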

Comparison and optimization of deep learning-based radiosensitivity prediction models using gene expression profiling in National Cancer Institute-60 cancer cell line

  • Kim, Euidam;Chung, Yoonsun
    • Nuclear Engineering and Technology / v.54 no.8 / pp.3027-3033 / 2022
  • Background: In this study, various types of deep-learning models for predicting in vitro radiosensitivity from gene-expression profiling were compared. Methods: The clonogenic surviving fractions at 2 Gy from previous publications and microarray gene-expression data from the National Cancer Institute-60 cell lines were used to measure radiosensitivity. Seven different prediction models, comprising three distinct multi-layer perceptrons (MLP) and four different convolutional neural networks (CNN), were compared. K-fold cross-validation was applied to train the models and evaluate their performance. The criterion for a correct prediction was an absolute error < 0.02 or a relative error < 10%. The models were compared in terms of prediction accuracy, training time per epoch, training fluctuations, and required computational resources. Results: The strength of the MLP-based models was their fast initial convergence and short training time per epoch, but their prediction accuracy differed significantly depending on the model configuration. The CNN-based models showed relatively high prediction accuracy, low training fluctuations, and a relatively small increase in memory requirement as the models deepen. Conclusion: Our findings suggest that a CNN-based model of moderate depth is appropriate when prediction accuracy is important, while a shallow MLP-based model can be recommended when either training resources or time are limited.
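
The paper's correctness criterion (absolute error < 0.02 or relative error < 10%) translates directly into a metric; a small sketch follows, with illustrative surviving-fraction values.

```python
# Sketch of the stated correctness criterion: a prediction counts as
# correct when |error| < 0.02 or the relative error is below 10%.
# The arrays below are illustrative SF2 values, not the paper's data.
import numpy as np

def prediction_accuracy(y_true, y_pred, abs_tol=0.02, rel_tol=0.10):
    err = np.abs(y_pred - y_true)
    correct = (err < abs_tol) | (err / np.abs(y_true) < rel_tol)
    return correct.mean()

y_true = np.array([0.31, 0.52, 0.78, 0.45])
y_pred = np.array([0.30, 0.58, 0.77, 0.52])
print(f"accuracy: {prediction_accuracy(y_true, y_pred):.2f}")
```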

An inverse approach based on uniform load surface for damage detection in structures

  • Mirzabeigy, Alborz;Madoliat, Reza
    • Smart Structures and Systems / v.24 no.2 / pp.233-242 / 2019
  • In this paper, an inverse approach based on the uniform load surface (ULS) is presented for structural damage localization and quantification. The ULS is an excellent approximation of the deformed configuration of a structure under a distributed unit force applied at all degrees of freedom. The ULS makes use of the natural frequencies and mode shapes of the structure and, from a mathematical point of view, is a weighted average of the mode shapes. The objective function used for damage detection is the discrepancy between the ULS of the monitored structure and that of the numerical model; minimizing this objective function yields the damage parameters. The teaching-learning-based optimization algorithm is employed to solve the inverse problem. The efficiency of the present damage detection method is demonstrated through three numerical examples. A comparison between the proposed objective function and another objective function that uses natural frequencies and mode shapes directly reveals that the present objective function converges faster and is more sensitive to damage. The method is robust against measurement noise and can detect damage using only the first few mode shapes. The results indicate that the proposed method is a reliable technique for damage detection in structures.
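
Assuming mass-normalized mode shapes, the ULS can be computed as the truncated modal flexibility applied to a unit load on every degree of freedom, u = Σᵢ (φᵢφᵢᵀ1)/ωᵢ²; the sketch below implements that formula and the discrepancy objective, with illustrative data.

```python
# Minimal sketch, assuming mass-normalized mode shapes: the ULS is the
# flexibility matrix (built from the first few modes) applied to a unit
# load on every DOF, u = sum_i (phi_i phi_i^T 1) / omega_i^2.
import numpy as np

def uniform_load_surface(omegas, Phi):
    """omegas: (m,) natural frequencies [rad/s]; Phi: (n_dof, m) mode shapes."""
    flex = Phi @ np.diag(1.0 / omegas**2) @ Phi.T   # truncated flexibility
    return flex @ np.ones(Phi.shape[0])             # distributed unit force

def objective(uls_measured, uls_model):
    # Discrepancy between the monitored structure's ULS and the ULS of the
    # parameterized numerical model; the optimizer minimizes this value.
    return np.linalg.norm(uls_measured - uls_model)

# Illustrative data: 10 DOFs, first three modes.
omegas = np.array([12.0, 33.0, 61.0])
Phi = np.random.default_rng(1).normal(size=(10, 3))
print(uniform_load_surface(omegas, Phi))
```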

The Effect of Hyperparameter Choice on ReLU and SELU Activation Function

  • Kevin, Pratama;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.6 no.4 / pp.73-79 / 2017
  • The Convolutional Neural Network (CNN) has shown excellent performance in computer vision tasks. Applications of CNNs include image classification, object detection in images, autonomous driving, etc. This paper evaluates the performance of a CNN model with ReLU and SELU as activation functions. The evaluation is performed over four different hyperparameter choices: initialization method, network configuration, optimization technique, and regularization. We ran experiments on each hyperparameter choice and show how it influences network convergence and test accuracy. In these experiments, we also observe a performance improvement when using SELU as the activation function instead of ReLU.
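
As a hedged sketch of such a comparison (not the paper's CNN or dataset), the snippet below trains two otherwise identical small networks that differ only in their activation function.

```python
# Sketch: compare ReLU vs SELU in otherwise identical networks on toy
# data (PyTorch). Layer sizes, learning rate, and data are illustrative.
import torch
import torch.nn as nn

def make_net(act):
    return nn.Sequential(nn.Linear(20, 64), act(),
                         nn.Linear(64, 64), act(),
                         nn.Linear(64, 2))

X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()                  # toy labels
for name, act in [("ReLU", nn.ReLU), ("SELU", nn.SELU)]:
    torch.manual_seed(0)                  # identical weight initialization
    net = make_net(act)
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()
    acc = (net(X).argmax(dim=1) == y).float().mean().item()
    print(f"{name}: final loss {loss.item():.4f}, train acc {acc:.3f}")
```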

Cyber Threat Intelligence Traffic Through Black Widow Optimisation by Applying RNN-BiLSTM Recognition Model

  • Kanti Singh Sangher;Archana Singh;Hari Mohan Pandey
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.99-109 / 2023
  • The darknet is frequently referred to as the hub of illicit online activity. In order to keep track of real-time applications and activities taking place on the darknet, traffic on that network must be analysed. It is without doubt important to recognise network traffic tied to unused Internet addresses in order to spot and investigate malicious online activity. Because there are no genuine devices or hosts in an unused address block, any traffic observed there is the result of misconfiguration, spoofed source addresses, or other methods that probe the unused address space. Thanks to recent advances in artificial intelligence, digital systems can now detect and identify darknet activity on their own. In this paper, we offer a generalised method for deep-learning-based detection and classification of darknet traffic. Furthermore, we analyse a cutting-edge, complex dataset that contains extensive information about darknet traffic. We then examine various feature selection strategies to choose the best attributes for detecting and classifying darknet traffic. To identify threats using the network properties extracted from darknet traffic, we devised a hybrid deep learning (DL) approach that combines a Recurrent Neural Network (RNN) and a Bidirectional LSTM (BiLSTM). This probing technique can tell malicious traffic from legitimate traffic. The results show that the suggested strategy outperforms existing approaches, producing the highest accuracy for categorising darknet traffic when using the Black Widow Optimization algorithm for feature selection and RNN-BiLSTM as the recognition model.
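
A minimal PyTorch sketch of the hybrid recognizer's shape, an RNN layer feeding a bidirectional LSTM and a classifier head, is given below; feature counts, hidden sizes, and the two traffic classes are assumptions, and the Black Widow Optimisation feature-selection step is omitted.

```python
# Illustrative sketch of the hybrid RNN + BiLSTM recognizer over per-flow
# feature sequences. All dimensions are assumptions, not the paper's.
import torch
import torch.nn as nn

class RNNBiLSTM(nn.Module):
    def __init__(self, n_features=20, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: both directions

    def forward(self, x):                  # x: (batch, time, n_features)
        x, _ = self.rnn(x)
        x, _ = self.bilstm(x)
        return self.head(x[:, -1, :])      # logits from the final time step

model = RNNBiLSTM()
flows = torch.randn(8, 30, 20)             # 8 flows, 30 steps, 20 features
print(model(flows).shape)                   # torch.Size([8, 2])
```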

Predicting concrete's compressive strength through three hybrid swarm intelligent methods

  • Zhang Chengquan;Hamidreza Aghajanirefah;Kseniya I. Zykova;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete / v.32 no.2 / pp.149-163 / 2023
  • The uniaxial compressive strength is one of the main design parameters traditionally utilized in geotechnical engineering projects. The present paper employed three artificial intelligence methods, i.e., the stochastic fractal search (SFS), multi-verse optimization (MVO), and the vortex search algorithm (VSA), to determine the compressive strength of concrete (CSC). To this end, 1030 concrete specimens were subjected to compressive strength tests. Based on the obtained laboratory results, the fly ash, cement, water, slag, coarse aggregate, fine aggregate, and superplasticizer (SP) contents were used as the input parameters of the model in order to determine the optimum input configuration for estimating the compressive strength. Performance was evaluated using three criteria, i.e., the root mean square error (RMSE), the mean absolute error (MAE), and the determination coefficient (R2). The evaluation of the error criteria and the determination coefficients obtained from the three techniques indicates that the SFS-MLP technique outperformed the MVO-MLP and VSA-MLP methods. The artificial neural network models on their own exhibited larger errors and lower correlation coefficients than the hybrid models. Nonetheless, the use of the stochastic fractal search algorithm resulted in a considerable enhancement in the precision and accuracy of the artificial neural network's evaluations and improved its performance. According to the results, the SFS-MLP technique showed the best performance in estimating the compressive strength of concrete (R2 = 0.99932 and 0.99942, and RMSE = 0.32611 and 0.24922). The novelty of our study lies in the use of a large dataset of 1030 entries and the optimization of the learning scheme of the neural prediction model via a 20:80 testing-to-training data split.
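
The evaluation protocol, a 20:80 testing-to-training split scored by RMSE, MAE, and R2, can be sketched as below on synthetic data; the swarm-based weight tuning (SFS/MVO/VSA) is replaced here by scikit-learn's default optimizer as a stand-in.

```python
# Sketch of the evaluation protocol only: an MLP regressor on seven
# mix-design inputs, 20:80 test-to-train split, scored by RMSE/MAE/R2.
# Data are synthetic; the swarm-based weight tuning is not reproduced.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(1030, 7))   # fly ash, cement, water, slag,
                                  # coarse agg, fine agg, SP (synthetic)
y = X @ rng.uniform(size=7) + 0.05 * rng.normal(size=1030)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
pred = mlp.predict(X_te)
print(f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.4f}  "
      f"MAE={mean_absolute_error(y_te, pred):.4f}  "
      f"R2={r2_score(y_te, pred):.4f}")
```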

CNN-based Fast Split Mode Decision Algorithm for Versatile Video Coding (VVC) Inter Prediction

  • Yeo, Woon-Ha;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.8 no.3 / pp.147-158 / 2021
  • Versatile Video Coding (VVC) is the latest video coding standard developed by the Joint Video Exploration Team (JVET). In VVC, the quadtree plus multi-type tree (QT+MTT) structure is adopted for coding unit (CU) partitioning, and its computational complexity is considerably high due to the brute-force search for recursive rate-distortion (RD) optimization. In this paper, we aim to reduce the time complexity of inter-picture prediction, since inter prediction accounts for a large portion of the total encoding time. The problem can be defined as classifying the split mode of each CU. To classify the split mode effectively, a novel convolutional neural network (CNN) architecture called multi-level tree CNN (MLT-CNN) is introduced. To boost classification performance, we utilize additional information, including inter-picture information, while training the CNN. The overall algorithm, including the MLT-CNN inference process, is implemented on VVC Test Model (VTM) 11.0. CUs of size 128×128 serve as the inputs of the CNN. The sequences are encoded in the random access (RA) configuration with five QP values {22, 27, 32, 37, 42}. The experimental results show that the proposed algorithm reduces computational complexity by 11.53% on average, and by up to 26.14%, with an average 1.01% increase in Bjøntegaard delta bit rate (BDBR). In particular, the proposed method performs better on the class A and B sequences, reducing encoding time by 9.81%~26.14% with a BDBR increase of 0.95%~3.28%.
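
To make the classification task concrete, here is a hedged sketch of a small CNN mapping a 128×128 luma CU to one of the six QT+MTT split modes; the architecture and input channels are assumptions, not the MLT-CNN described in the paper.

```python
# Hedged sketch of the task, not the MLT-CNN itself: a small CNN mapping
# a 128x128 luma CU to one of six QT+MTT split modes (no split, QT,
# BT-H, BT-V, TT-H, TT-V). Extra inter-picture inputs are omitted.
import torch
import torch.nn as nn

split_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 128 -> 64
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 64 -> 32
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 6),                                       # six split modes
)

cu = torch.randn(1, 1, 128, 128)            # one luma CU
split_logits = split_cnn(cu)
print(split_logits.argmax(dim=1))           # predicted split mode index
```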

Performance Optimization Strategies for Fully Utilizing Apache Spark (아파치 스파크 활용 극대화를 위한 성능 최적화 기법)

  • Myung, Rohyoung;Yu, Heonchang;Choi, Sukyong
    • KIPS Transactions on Computer and Communication Systems / v.7 no.1 / pp.9-18 / 2018
  • Enhancing the performance of big data analytics in distributed environments has become an important issue because most big-data applications, such as machine learning techniques and streaming services, run on distributed computing frameworks. Thus, optimizing the performance of such applications on Spark has been actively researched. Optimizing application performance in a distributed environment is challenging because it requires not only optimizing the applications themselves but also tuning the configuration parameters of the distributed system. Although prior research has made great efforts to improve execution performance, most of it focused on only one of three performance optimization aspects: application design, system tuning, or hardware utilization, and thus could not orchestrate all of them together. In this paper, we deeply analyze and model the application processing procedure of Spark. Based on the analysis, we propose performance optimization schemes for each step of the procedure: the inner stage and the outer stage. We also propose an appropriate partitioning mechanism by analyzing the relationship between partitioning parallelism and application performance. We applied these three performance optimization schemes to WordCount, PageRank, and K-means, which are basic big data analytics workloads, and observed nearly 50% performance improvement when all of the schemes were applied.
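
As an illustration of the partitioning lever discussed above, a PySpark WordCount with explicitly chosen shuffle parallelism might look like the following; the input path and the partition count are assumptions, not the paper's values.

```python
# Illustrative PySpark sketch of tuning partitioning parallelism on
# WordCount. The path and the partition count (roughly 2-3x the total
# cores) are assumptions, not values taken from the paper.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("wordcount-tuned")
         .config("spark.default.parallelism", "64")     # RDD shuffle width
         .config("spark.sql.shuffle.partitions", "64")  # DataFrame shuffles
         .getOrCreate())
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/corpus.txt")          # hypothetical input
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b, numPartitions=64))
counts.saveAsTextFile("hdfs:///out/wordcount")          # hypothetical output
spark.stop()
```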

Teachers' Recognition on the Optimization of the Educational Contents of Clothing and Textiles in Practical Arts or Technology·Home Economics (실과 및 기술·가정 교과에서 의생활 교육내용의 적정성에 대한 교사의 인식)

  • Baek Seung-Hee;Han Young-Sook;Lee Hye-Ja
    • Journal of Korean Home Economics Education Association / v.18 no.3 s.41 / pp.97-117 / 2006
  • The purpose of this study was to investigate teachers' recognition of the optimization of the educational contents of Clothing & Textiles in the subjects of Practical Arts or Technology & Home Economics in elementary, middle, and high schools. The statistical data for this research were collected from questionnaires completed by 203 teachers working at elementary, middle, and high schools. Means, standard deviations, and percentages were calculated using the SPSS/WIN 12.0 program, and the data were verified by t-test, one-way ANOVA, and Duncan's post-hoc test. The results were as follows. First, the equipment ratio of practice laboratories was only about 24% in elementary schools, which is very poor, whereas in middle and high schools it was 97% and 78%, respectively. Second, more than 50% of teachers recognized the amount of learning as 'proper'. Elementary school teachers in particular found the amount of learning in 'operating sewing machines' too heavy, as did middle school teachers for 'making shorts' and high school teachers for 'making tablecloth and curtain' and 'making pillow cover or bag'. Third, elementary, middle, and high school teachers all rated the difficulty of the overall clothing and textiles contents as 'average'. About 80% of elementary school teachers found 'operating sewing machines' and 'making cushions' difficult in particular, as did middle school teachers for 'hand knitting a handbag with a crochet hook', 'the various kinds of cloth', and 'making short pants', and high school teachers for 'making tablecloth or curtain'. Fourth, elementary school teachers rated 'practicing basic hand needlework' and 'making a pouch using hand needlework' as important educational contents, middle school teachers considered 'making short pants' unimportant, and high school teachers considered practice-focused contents such as 'making tablecloth and curtain' and 'making pillow cover or bags' unimportant. The suggestions are as follows: laboratories and facilities for practice should be provided to make clothing and textiles lessons effective in Practical Arts in elementary schools; 'operating sewing machines', which was considered difficult, should be dealt with in upper grades, made easier, or omitted; the practical contents should be changed to be student-activity-oriented and recomposed so as to be familiar from students' daily lives; and varied, sufficient support is needed to increase teachers' practical abilities.
