• Title/Summary/Keyword: MultiTask Learning

Employing TLBO and SCE for optimal prediction of the compressive strength of concrete

  • Zhao, Yinghao; Moayedi, Hossein; Bahiraei, Mehdi; Foong, Loke Kok
    • Smart Structures and Systems / v.26 no.6 / pp.753-763 / 2020
  • Early prediction of the Compressive Strength of Concrete (CSC) is a significant task in civil engineering construction projects. This study is therefore dedicated to introducing two novel hybrids of neural computing, namely Shuffled Complex Evolution (SCE) and Teaching-Learning-Based Optimization (TLBO), for predicting the CSC. The algorithms are applied to a Multi-Layer Perceptron (MLP) network to create the SCE-MLP and TLBO-MLP ensembles. The results revealed that, first, intelligent models can properly analyze and generalize the non-linear relationship between the CSC and its influential parameters. For example, the smallest and largest values of the CSC were 17.19 and 58.53 MPa, and the outputs of the MLP, SCE-MLP, and TLBO-MLP ranged within [17.61, 54.36], [17.69, 55.55] and [18.07, 53.83] MPa, respectively. Second, applying the SCE and TLBO optimizers increased the correlation of the MLP outputs from 93.58% to 97.32% and 97.22%, respectively. The prediction error was also reduced by around 34% and 31%, which indicates the high efficiency of these algorithms. Moreover, regarding the computation time needed to implement the SCE-MLP and TLBO-MLP models, the SCE is a considerably more time-efficient optimizer. Nevertheless, both suggested models can be promising substitutes for laboratory-based and destructive CSC evaluation methods.
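
A minimal sketch of the general idea (not the authors' code): a population-based metaheuristic tunes the weights of a small MLP regressor by minimizing prediction error, here using only a simplified TLBO-style "teacher phase" and hypothetical mix-design features.

```python
# Sketch: metaheuristic-tuned MLP regression, assuming synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                       # hypothetical mix-design features
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=100)

def mlp_predict(w, X, n_hidden=6):
    """Unpack a flat weight vector into a one-hidden-layer MLP and predict."""
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    W2 = w[n_in * n_hidden + n_hidden:-1]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((mlp_predict(w, X) - y) ** 2))

dim = 8 * 6 + 6 + 6 + 1                             # total number of MLP parameters
pop = rng.normal(size=(30, dim))                    # population of candidate weight vectors

for _ in range(200):                                # simplified TLBO teacher phase only
    costs = np.array([rmse(w) for w in pop])
    teacher = pop[costs.argmin()]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                         # teaching factor in {1, 2}
    candidates = pop + rng.random((30, dim)) * (teacher - tf * mean)
    cand_costs = np.array([rmse(w) for w in candidates])
    improved = cand_costs < costs
    pop[improved] = candidates[improved]

print("best RMSE:", min(rmse(w) for w in pop))
```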

Differentiation among stability regimes of alumina-water nanofluids using smart classifiers

  • Daryayehsalameh, Bahador; Ayari, Mohamed Arselene; Tounsi, Abdelouahed; Khandakar, Amith; Vaferi, Behzad
    • Advances in nano research / v.12 no.5 / pp.489-499 / 2022
  • Nanofluids have recently triggered substantial scientific interest as cooling media. However, their stability is challenging for successful engagement in industrial applications. Different factors, including temperature, nanoparticle and base fluid characteristics, pH, ultrasonic power and frequency, agitation time, and surfactant type and concentration, determine the nanofluid stability regime. Indeed, it is often too complicated, and even impossible, to accurately find the conditions resulting in a stabilized nanofluid. Furthermore, there are no empirical, semi-empirical, or even intelligent scenarios for anticipating the stability of nanofluids. Therefore, this study introduces a straightforward and reliable intelligent classifier for discriminating among the stability regimes of alumina-water nanofluids based on Zeta potential margins. In this regard, various intelligent classifiers (i.e., deep learning, multilayer perceptron neural network, decision tree, GoogleNet, and multi-output least squares support vector regression) were designed and their classification accuracy compared. This comparison confirmed that the multilayer perceptron neural network (MLPNN) with the SoftMax activation function, trained by the Bayesian regularization algorithm, is the best classifier for the considered task. This intelligent classifier accurately detects the stability regimes of more than 90% of 345 different nanofluid samples. An overall classification accuracy of 90.1% and a misclassification rate of 9.9% were achieved by this model. This research is the first attempt toward anticipating the stability of alumina-water nanofluids from some easily measured independent variables.
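
An illustrative sketch only (not the paper's code): a multilayer perceptron classifier for three stability regimes on tabular features. The features and labels below are synthetic placeholders, and since scikit-learn has no Bayesian-regularization trainer, plain L2 regularization (alpha) stands in for it here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(345, 6))             # 345 hypothetical nanofluid samples, 6 features
y = rng.integers(0, 3, size=345)          # 0 = unstable, 1 = semi-stable, 2 = stable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), alpha=1e-3,
                  max_iter=2000, random_state=1),   # softmax output for 3 classes
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```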

Data anomaly detection for structural health monitoring using a combination network of GANomaly and CNN

  • Liu, Gaoyang; Niu, Yanbo; Zhao, Weijian; Duan, Yuanfeng; Shu, Jiangpeng
    • Smart Structures and Systems / v.29 no.1 / pp.53-62 / 2022
  • Advanced structural health monitoring (SHM) systems deployed in large-scale civil structures collect large amounts of data. Note that these data may contain multiple types of anomalies (e.g., missing, minor, outlier, etc.) caused by harsh environments, sensor faults, transfer omission and other factors. These anomalies seriously affect the evaluation of structural performance. Therefore, the effective analysis and mining of SHM data is an extremely important task. Inspired by the deep learning paradigm, this study develops a novel generative adversarial network (GAN) and convolutional neural network (CNN)-based data anomaly detection approach for SHM. The framework of the proposed approach includes three modules: (a) a three-channel input is established based on the fast Fourier transform (FFT) and Gramian angular field (GAF) methods; (b) a GANomaly is introduced and trained to extract features from normal samples alone to handle the class-imbalance problem; (c) based on the output of GANomaly, a CNN is employed to distinguish the types of anomalies. In addition, a dataset-oriented method (i.e., multistage sampling) is adopted to obtain the optimal sampling ratios between all different samples. The proposed approach is tested with acceleration data from an SHM system of a long-span bridge. The results show that the proposed approach has a higher accuracy in detecting the multi-pattern anomalies of SHM data.
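
A rough sketch, under assumptions rather than the authors' exact pipeline, of turning a 1-D acceleration window into image-like channels: an FFT magnitude channel and a Gramian Angular Summation Field (GASF) channel.

```python
import numpy as np

def gasf(x):
    """Gramian Angular Summation Field of a 1-D series."""
    x = (2 * (x - x.min()) / (x.max() - x.min())) - 1.0   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                # angular encoding
    return np.cos(phi[:, None] + phi[None, :])            # N x N image

rng = np.random.default_rng(2)
window = rng.normal(size=128)                   # hypothetical acceleration window
spectrum = np.abs(np.fft.rfft(window))          # FFT magnitude (frequency channel)
image = gasf(window)                            # 128 x 128 GASF channel
print(spectrum.shape, image.shape)
```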

Jointly Learning of Heavy Rain Removal and Super-Resolution in Single Images

  • Vu, Dac Tung; Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.113-117 / 2020
  • Images taken under adverse weather such as rain, haze, and snow often show low visibility, which can dramatically decrease the accuracy of computer vision tasks such as object detection and segmentation. In addition, previous image-enhancement work usually downsamples the image to obtain consistent features but lacks a good upsampling algorithm to recover the original size. In this research, we therefore jointly perform rain-streak removal for heavy-rain images and super-resolution using a deep network. We put forth a two-stage network: a multi-model network followed by a refinement network. The first stage uses a rain formation model for the single image and two operation layers (addition, multiplication) to remove rain streaks and noise and obtain a clean image at low resolution. The second stage uses a refinement network to recover damaged background information and to upsample, producing a high-resolution image. Our method improves visual image quality and gains accuracy on a human action recognition task across datasets. Extensive experiments show that our network outperforms state-of-the-art (SoTA) methods.
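
A hedged sketch of the kind of rain observation model the first stage is assumed to invert, with an additive rain-streak term and a multiplicative transmission term; the exact formulation in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.random((64, 64))          # clean background (hypothetical)
S = 0.3 * rng.random((64, 64))    # rain-streak layer (additive term)
T = 0.8 * np.ones((64, 64))       # transmission map (multiplicative term)
A = 0.9                           # global atmospheric light

O = T * (B + S) + (1.0 - T) * A   # observed rainy image

# If a network predicts S, T and A, the clean image is recovered by
# inverting the same two operations:
B_hat = (O - (1.0 - T) * A) / T - S
print(np.allclose(B, B_hat))
```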

Multi-task Learning Approach for Deep Neural Networks Using Temporal Relations (시간적 관계정보를 활용한 멀티태스크 심층신경망 모델 학습 기법)

  • Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin
    • Annual Conference on Human and Language Technology / 2021.10a / pp.211-214 / 2021
  • In natural language understanding (NLU) research aimed at building models that can handle multiple tasks while offering generalized performance, a variety of multi-task learning techniques have been explored. In addition, documents written in natural language generally contain time-related information, and recognizing this information accurately is important for understanding the overall content and context of a document. To perform NLU tasks more accurately, temporal information needs to be reflected inside the model, and temporal relation information can be extracted and exploited as an additional task during multi-task learning. In this paper, we propose a multi-task learning technique that adds a temporal relation extraction task to the training of NLU tasks so that the temporal context of Korean input sentences can be exploited. To leverage the characteristics of multi-task learning, we design a task that extracts temporal relation information and configure the model so that it is trained in combination with existing NLU tasks. In the experiments, we analyze performance differences across various combinations of training tasks and examine how the added temporal relation information affects performance compared with using only the existing NLU tasks. The results show that, overall, multi-task combinations perform better than individual tasks, and in particular, named entity recognition improves significantly when temporal relations are incorporated.
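
A minimal multi-task learning sketch under stated assumptions (not the paper's model): a shared encoder feeds two heads, one for an NLU task and one for temporal-relation extraction, and the two cross-entropy losses are simply summed during training. The encoder, label counts, and pooling are illustrative stand-ins.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_nlu=5, n_temporal=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)            # stand-in shared encoder
        self.nlu_head = nn.Linear(dim, n_nlu)             # NLU task head
        self.temporal_head = nn.Linear(dim, n_temporal)   # temporal-relation head

    def forward(self, tokens):
        h = self.embed(tokens).mean(dim=1)                # pooled sentence vector
        return self.nlu_head(h), self.temporal_head(h)

model = MultiTaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (8, 20))                  # dummy batch of sentences
nlu_y = torch.randint(0, 5, (8,))
temporal_y = torch.randint(0, 4, (8,))

nlu_logits, temporal_logits = model(tokens)
loss = nn.functional.cross_entropy(nlu_logits, nlu_y) \
     + nn.functional.cross_entropy(temporal_logits, temporal_y)
loss.backward()
opt.step()
print(float(loss))
```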

Multi-task learning for entity-centric fact correction on machine summaries (기계 요약의 개체명 사실 수정을 위한 다중 작업 학습 방법 제안)

  • Shin, JeongWan; Noh, Yunseok; Park, SangHeon; O, YoungSun; Park, Seyoung
    • Annual Conference on Human and Language Technology / 2021.10a / pp.124-130 / 2021
  • Factual inconsistency in machine summarization is the phenomenon in which a generated summary conveys factual information different from the source document; in particular, when named entities are used incorrectly, the reliability of a machine summary is severely damaged. Correcting named entities requires two tasks: first, determining whether each entity in the summary is used correctly, and then replacing incorrect entities with the correct ones. In this paper, we assume that both tasks can be solved by understanding each entity in context, and accordingly propose a multi-task learning method for the two tasks. A model trained with the proposed method can perform post-hoc correction of generated machine summaries. To evaluate the proposed model, we measured its performance on summary data whose entities were deliberately corrupted and on machine summary data, and compared it with other entity correction models. The proposed model achieved a correction accuracy of 92.9% at the entity level and improved the factual accuracy of machine summaries produced by a KoBART summarization model by 4.88 percentage points.
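
A hypothetical sketch of the two sub-tasks described above (detection and correction of entity mentions) sharing one entity representation; the layer names, sizes, and scoring scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

dim = 64
detector = nn.Linear(dim, 2)        # is this summary entity used correctly?
scorer = nn.Bilinear(dim, dim, 1)   # compatibility of summary entity vs. source entity

entity_vec = torch.randn(1, dim)            # contextual vector of one summary entity
source_entities = torch.randn(5, dim)       # vectors of candidate entities in the source

is_correct_logits = detector(entity_vec)                        # detection sub-task
scores = scorer(entity_vec.expand(5, -1), source_entities)      # correction sub-task
replacement = scores.squeeze(-1).argmax().item()                # best source entity

print(is_correct_logits.shape, replacement)
```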

Comparative Analysis of Recent Studies on Aspect-Based Sentiment Analysis

  • Faiz Ghifari Haznitrama; Ho-Jin Choi
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.647-649 / 2023
  • Sentiment analysis, as part of natural language processing (NLP), has received much attention following the demand to understand people's opinions. Aspect-based sentiment analysis (ABSA) is a fine-grained subtask of sentiment analysis that aims to classify sentiment at the aspect level. Throughout the years, researchers have formulated ABSA into various tasks for different scenarios. Unlike the early works, current ABSA research utilizes many elements to improve performance and provide more details to produce informative results. These ABSA formulations have posed greater challenges for researchers, and it is difficult to navigate ABSA work because of the many different formulations, terms, and results. In this paper, we conduct a comparative analysis of recent studies on ABSA. We describe some key elements, problem formulations, and datasets currently utilized by most of the ABSA community. We also conduct a short review of the latest papers to find the current state-of-the-art model. From our observations, we found that span-level representation is an important feature in solving the ABSA problem, while multi-task learning and generative approaches look promising. Finally, we review some open challenges and further directions for ABSA research in the future.

Scheduling of Artificial Intelligence Workloads in Cloud Environments Using Genetic Algorithms (유전 알고리즘을 이용한 클라우드 환경의 인공지능 워크로드 스케줄링)

  • Seokmin Kwon; Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.63-67 / 2024
  • Recently, artificial intelligence (AI) workloads spanning various industries such as smart logistics, FinTech, and entertainment are being executed on the cloud. In this paper, we address the scheduling of various AI workloads on a multi-tenant cloud system composed of heterogeneous GPU clusters. Traditional scheduling decreases GPU utilization in such environments, significantly degrading system performance. To resolve these issues, we present a new scheduling approach that uses genetic algorithm-based optimization, implemented within a process-based event simulation framework. Trace-driven simulations with diverse AI workload traces collected from Alibaba's MLaaS cluster demonstrate that the proposed scheduling significantly improves GPU utilization compared with conventional scheduling.
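
A simplified sketch of the idea (hypothetical data, not the paper's simulator): a genetic algorithm searches over assignments of AI jobs to heterogeneous GPU nodes, with makespan as the fitness to minimize.

```python
import numpy as np

rng = np.random.default_rng(4)
n_jobs, n_gpus, pop_size = 40, 4, 30
base_time = rng.uniform(1.0, 10.0, size=n_jobs)      # job runtimes on a reference GPU
speed = np.array([1.0, 1.5, 0.7, 1.2])                # relative speed of each GPU type

def makespan(assign):
    """Completion time of the most loaded GPU under a job-to-GPU assignment."""
    loads = np.zeros(n_gpus)
    for job, gpu in enumerate(assign):
        loads[gpu] += base_time[job] / speed[gpu]
    return loads.max()

pop = rng.integers(0, n_gpus, size=(pop_size, n_jobs))   # chromosomes = assignments
for _ in range(300):
    fitness = np.array([makespan(ind) for ind in pop])
    order = fitness.argsort()
    parents = pop[order[:pop_size // 2]]                  # selection (elitism)
    cut = rng.integers(1, n_jobs)
    children = np.concatenate([parents[::-1][:, :cut], parents[:, cut:]], axis=1)  # crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] = rng.integers(0, n_gpus, size=mutate.sum())                   # mutation
    pop = np.concatenate([parents, children])

print("best makespan:", min(makespan(ind) for ind in pop))
```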

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun; Kim, Min Yong; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales sites, and SNS, and dataset characteristics are correspondingly diverse. To secure corporate competitiveness, decision-making capacity must be improved using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining the appropriate classification algorithm for the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms is not yet fully understood. Moreover, there has been little research on meta-features that reflect the characteristics of multi-class datasets. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among them, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, as a meta-feature to replace the Imbalance Ratio (IR), and we developed a new index called the Reverse ReLU Silhouette Score and added it to the meta-feature set. Six representative datasets from the UCI Machine Learning Repository were selected (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice). Each dataset was classified with the algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM) using 10-fold cross validation. For each fold, oversampling from 10% to 100% was applied and the meta-features of the dataset were measured. The selected meta-features are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score; the F1-score was selected as the dependent variable. The results showed that the six meta-features including the Reverse ReLU Silhouette Score and HHI proposed in this study have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant for classification performance; (2) unlike the number of classes, the number of features has a significant positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by classification algorithm were also consistent, except that in the per-algorithm regression analysis the number of features was not significant for the Naïve Bayes algorithm, unlike the other classifiers.
This study makes two theoretical contributions: (1) two new meta-features (HHI and Reverse ReLU Silhouette Score) were shown to be significant; (2) the effects of data characteristics on classification performance were investigated using meta-features. As practical contributions, (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics; (2) because data characteristics differ, data scientists often search for the optimal algorithm by repeatedly adjusting algorithm parameters, a process that wastes hardware, cost, time, and manpower, and the findings here can reduce such waste. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion with discussion.
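
A small worked example of my reading of HHI as a class-concentration meta-feature (not code from the paper): HHI is the sum of squared class shares, so a balanced multiclass dataset scores low and a highly imbalanced one scores high.

```python
import numpy as np

def hhi(labels):
    """Herfindahl-Hirschman Index of the class distribution (0..1 scale)."""
    _, counts = np.unique(labels, return_counts=True)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

balanced = np.repeat([0, 1, 2, 3], 25)            # 4 classes, 25 samples each
imbalanced = np.array([0] * 85 + [1] * 10 + [2] * 5)

print(hhi(balanced))     # 0.25 (= 1 / number of classes when perfectly balanced)
print(hhi(imbalanced))   # 0.735, reflecting heavy class concentration
```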

Monitoring of Chemical Processes Using Modified Scale Space Filtering and Functional-Link-Associative Neural Network (개선된 스케일 스페이스 필터링과 함수연결연상 신경망을 이용한 화학공정 감시)

  • Park, Jung-Hwan; Kim, Yoon-Sik; Chang, Tae-Suk; Yoon, En-Sup
    • Journal of Institute of Control, Robotics and Systems / v.6 no.12 / pp.1113-1119 / 2000
  • To operate a process plant safely and economically, process monitoring is very important. Process monitoring is the task of identifying the state of the system from sensor data, and it includes data acquisition, regulatory control, data reconciliation, fault detection, etc. This research focuses on data reconciliation using scale-space filtering and on fault detection using functional-link associative neural networks. Scale-space filtering is a multi-resolution signal analysis method that can effectively extract the highest-frequency components (noise). However, scale-space filtering has high computational cost and end-effect problems. This research reduces the computational cost of scale-space filtering by applying a minimum limit to the Gaussian kernel, and the end effect that occurs at the end of the signal is overcome by extrapolation combined with a clustering-based change detection method. Nonlinear principal component analysis methods using neural networks are reviewed, and the separately expanded functional-link associative neural network is proposed for chemical process monitoring. The separately expanded functional-link associative neural network has better learning capability, better generalization ability and shorter learning time than existing neural networks, and it can express a statistical model similar to the real process by expanding the input data separately. Combining the proposed methods (modified scale-space filtering and fault detection using the separately expanded functional-link associative neural network), a process monitoring system is proposed in this research. The usefulness of the proposed method is demonstrated by its application to a boiler water supply unit.
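
A hedged illustration of the scale-space idea, not the authors' modified filter: the same noisy sensor signal is smoothed with Gaussian kernels of increasing scale, and the residual at the smallest scale approximates the high-frequency noise that data reconciliation removes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 500)
signal = np.sin(t) + 0.2 * rng.normal(size=t.size)    # noisy process measurement

scales = [1, 4, 16]                                    # Gaussian standard deviations
smoothed = {s: gaussian_filter1d(signal, sigma=s, mode="nearest") for s in scales}

# The residual at the smallest scale approximates the high-frequency noise.
noise_estimate = signal - smoothed[1]
print({s: float(np.std(signal - smoothed[s])) for s in scales})
```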
