• Title/Summary/Keyword: feature model validation

Search Results: 111

Rule-based Feature Model Validation Tool (규칙 기반 특성 모델 검증 도구)

  • Choi, Seung-Hoon
    • Journal of Internet Computing and Services, v.10 no.4, pp.105-113, 2009
  • Feature models are widely used to model the commonalities and variabilities among products in the domain engineering phase of software product line development. Finding and correcting errors or inconsistencies in feature models is essential to successful software product line engineering, and automated tool support is needed to perform this validation effectively. This paper describes an approach based on the JESS rule-based system for validating feature models and proposes a feature model validation tool using this approach. The tool validates feature models in real time as they are being modeled, and then reports the existence of errors together with explanations of their causes, allowing the feature modeler to create error-free feature models. To the best of the authors' knowledge, this is the first attempt to validate feature models using a rule-based system. (An illustrative sketch follows this entry.)
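
The abstract contains no rule code. As a purely illustrative sketch of the rule-based idea (the actual tool encodes its rules in JESS, not Python, and the dictionary-based feature-model representation below is hypothetical), each validation rule can be a small function that fires with an explanation of the error it detects:

```python
# Minimal, hypothetical sketch of rule-based feature model validation.
# The actual tool encodes such rules in JESS; here each rule is a plain
# Python function that returns explanation strings when it fires.

def rule_mandatory_excluded(model):
    """A mandatory feature should not appear in an excludes constraint."""
    errors = []
    for child, kind in model["decomposition"].items():
        for a, b in model["excludes"]:
            if kind == "mandatory" and child in (a, b):
                errors.append(
                    f"Mandatory feature '{child}' appears in an excludes "
                    f"constraint ({a} excludes {b})."
                )
    return errors

def rule_requires_excluded(model):
    """A feature must not require a feature that it also excludes."""
    errors = []
    excludes = {frozenset(pair) for pair in model["excludes"]}
    return [f"'{a}' both requires and excludes '{b}'."
            for a, b in model["requires"] if frozenset((a, b)) in excludes]

RULES = [rule_mandatory_excluded, rule_requires_excluded]

def validate(model):
    """Run every rule and collect explanations, as a modeling tool might
    do after each editing step to approximate real-time feedback."""
    return [msg for rule in RULES for msg in rule(model)]

if __name__ == "__main__":
    phone = {
        "decomposition": {"Calls": "mandatory", "Camera": "optional"},
        "requires": [("Camera", "HighResScreen")],
        "excludes": [("Calls", "Camera")],
    }
    for explanation in validate(phone):
        print("ERROR:", explanation)
```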


Feature Configuration Validation using Semantic Web Technology (시맨틱 웹 기술을 이용한 특성 구성 검증)

  • Choi, Seung-Hoon
    • Journal of Internet Computing and Services, v.11 no.4, pp.107-117, 2010
  • Feature models, which represent the common and variable concepts among software products, and feature configurations, which are generated by selecting the features to be included in a target product, are essential components of the software product line methodology. Although research on the formal semantics and reasoning of feature models and feature configurations is in progress, research on feature model ontologies and on feature configuration validation using semantic web technologies is still insufficient. This paper defines the formal semantics of feature models and proposes a feature configuration validation technique based on ontology and semantic web technologies. OWL (Web Ontology Language), a semantic web standard language, is used to represent the knowledge in the feature models and feature configurations, and SWRL (Semantic Web Rule Language), a semantic web rule language, is used to define the rules that validate the feature configurations. The approach provides formal semantics for feature models, automates the validation of feature configurations, and enables the application of various semantic web technologies such as SQWRL. (A simplified sketch follows this entry.)
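
No OWL or SWRL listings appear in the abstract. The sketch below is a simplified stand-in for the described approach, assuming the rdflib Python library and a SPARQL query in place of the paper's OWL ontology and SWRL rules; the namespace and feature names are hypothetical:

```python
# Simplified stand-in: encode a feature configuration as RDF triples and
# use a SPARQL query (instead of SWRL) to flag an invalid configuration
# in which two mutually exclusive features are both selected.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/fm#")
g = Graph()
g.add((EX.Camera, RDF.type, EX.SelectedFeature))
g.add((EX.BasicScreen, RDF.type, EX.SelectedFeature))
g.add((EX.Camera, EX.excludes, EX.BasicScreen))

violations = g.query("""
    PREFIX ex: <http://example.org/fm#>
    SELECT ?a ?b WHERE {
        ?a a ex:SelectedFeature .
        ?b a ex:SelectedFeature .
        ?a ex:excludes ?b .
    }
""")

for row in violations:
    print(f"Invalid configuration: {row.a} excludes {row.b}, "
          f"but both are selected.")
```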

Feature selection in the semivarying coefficient LS-SVR

  • Hwang, Changha; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society, v.28 no.2, pp.461-471, 2017
  • In this paper we propose a feature selection method for identifying important features in the semivarying coefficient model. One important issue in the semivarying coefficient model is how to estimate the parametric and nonparametric components; another is how to identify important features in the varying and the constant effects. We propose a feature selection method that addresses this using generalized cross validation functions of the varying coefficient least squares support vector regression (LS-SVR) and the linear LS-SVR. Numerical studies indicate that the proposed method is quite effective in identifying important features in the varying and the constant effects of the semivarying coefficient model. (A worked GCV sketch follows this entry.)
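
The paper's semivarying coefficient estimator is not reproduced here; as a hedged illustration of the generalized cross validation (GCV) criterion it relies on, the NumPy sketch below fits a plain RBF-kernel LS-SVR in its dual form and scores hyperparameters by GCV. The data and hyperparameter grid are invented for the example:

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_gcv(X, y, gamma, sigma):
    """Fit an LS-SVR (dual form) and return its GCV score.

    The LS-SVR dual solves  [[K + I/gamma, 1], [1^T, 0]] [alpha; b] = [y; 0],
    giving fitted values  y_hat = K @ alpha + b = H @ y  for a hat matrix H,
    and  GCV = n * ||y - y_hat||^2 / (n - trace(H))^2.
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K + np.eye(n) / gamma
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    sol = np.linalg.solve(A, np.append(y, 0.0))
    alpha, b = sol[:n], sol[n]
    y_hat = K @ alpha + b
    # Hat matrix: y_hat = [K 1] A^{-1}[:, :n] y (last rhs entry is zero).
    H = np.hstack([K, np.ones((n, 1))]) @ np.linalg.inv(A)[:, :n]
    return n * np.sum((y - y_hat) ** 2) / (n - np.trace(H)) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(80, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(80)
    grid = [(g, s) for g in (1.0, 10.0, 100.0) for s in (0.3, 0.7, 1.5)]
    best = min(grid, key=lambda p: lssvr_gcv(X, y, *p))
    print("GCV-selected (gamma, sigma):", best)
```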

A Formal Specification and Checking Technique of Feature model using Z language (휘처 모델의 Z 정형 명세와 검사 기법)

  • Song, Chee-Yang; Cho, Eun-Sook; Kim, Chul-Jin
    • Journal of the Korea Society of Computer and Information, v.18 no.1, pp.123-136, 2013
  • Because the feature model is expressed in a graphical and informal notation, its syntactic correctness cannot be guaranteed and is difficult to validate with automated tools. There is therefore a need to formalize and check the feature model, precisely defining the syntax of its constructs. This paper presents a Z formal specification and a model checking mechanism for the feature model to guarantee the correctness of the model. It first defines translation rules between the feature model and Z, and then converts the syntax of the feature model into a Z schema specification by applying these rules. Finally, the Z schema specification is checked for syntax, type, and domain errors using the Z/Eves validation tool to assure its correctness. With the proposed method, the constructs of the feature model can be expressed more precisely, and domain analysts can verify errors in the generated feature model. (An illustrative Z fragment follows this entry.)
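
The paper's actual translation rules and schemas are not given in the abstract. As an illustrative (not authoritative) example of the kind of Z schema such a translation might produce, the fragment below specifies a well-formed feature tree, assuming the fuzz/zed-csp LaTeX style and hypothetical names such as FEATURE and FeatureTree:

```latex
% Illustrative only: a Z schema for a well-formed feature tree, not the
% paper's translation rules. Requires the fuzz (or zed-csp) LaTeX style.
\begin{zed}
  [FEATURE]
\end{zed}

\begin{schema}{FeatureTree}
  features : \power FEATURE \\
  root : FEATURE \\
  parent : FEATURE \pfun FEATURE \\
  mandatory, optional : \power FEATURE
\where
  root \in features \\
  \dom parent = features \setminus \{ root \} \\
  \ran parent \subseteq features \\
  mandatory \cup optional = features \setminus \{ root \} \\
  mandatory \cap optional = \emptyset
\end{schema}
```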

Traceability Validation of Structured Behavioral Feature-Based Embedded SW Architecture Design Method (Structured Behavioral Feature기반 임베디드 SW 아키텍처 설계 방법의 추적성 검증)

  • Lee, Jung Tae; Jeong, Soyoung
    • Proceedings of the Korean Society of Computer Information Conference, 2017.07a, pp.281-284, 2017
  • As embedded system development has recently shifted toward Model Driven Engineering, guaranteeing traceability between requirements and models has become very important. This paper redefines the feature concept used in the existing FDD (Feature Driven Development) and FOSE (Feature Oriented Software Engineering) methodologies, presents a method for applying it to the AUTOSAR platform, and verifies traceability from requirements through model to code. (A small illustrative check follows this entry.)
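
The proceedings abstract gives no implementation detail; the following hypothetical Python sketch only illustrates what a forward traceability check from requirements through model elements to code might look like, with invented identifiers:

```python
# Hypothetical illustration: check that every requirement traces to at least
# one model element, and every traced model element to some code unit.
requirements_to_model = {
    "REQ-001": ["SWC_EngineControl"],
    "REQ-002": ["SWC_Diagnostics", "SWC_Logger"],
    "REQ-003": [],                      # no model element yet -> gap
}
model_to_code = {
    "SWC_EngineControl": ["engine_control.c"],
    "SWC_Diagnostics": ["diag.c"],
    "SWC_Logger": [],                   # no generated code yet -> gap
}

def trace_gaps(req_to_model, model_to_code):
    gaps = []
    for req, elements in req_to_model.items():
        if not elements:
            gaps.append(f"{req}: not traced to any model element")
        for element in elements:
            if not model_to_code.get(element):
                gaps.append(f"{req} -> {element}: not traced to any code unit")
    return gaps

for gap in trace_gaps(requirements_to_model, model_to_code):
    print("TRACE GAP:", gap)
```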


Deep Learning Model Validation Method Based on Image Data Feature Coverage (영상 데이터 특징 커버리지 기반 딥러닝 모델 검증 기법)

  • Lim, Chang-Nam; Park, Ye-Seul; Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering, v.10 no.9, pp.375-384, 2021
  • Deep learning techniques have been proven to deliver high performance in image processing and are applied in various fields. The most widely used methods for validating a deep learning model include holdout validation, k-fold cross-validation, and the bootstrap method. These legacy methods consider the balance of the ratio between classes when dividing the data set, but do not consider the ratio of the various features that exist within the same class. If these features are not considered, validation results may be biased toward some features. Therefore, we propose a deep learning model validation method for image classification based on data feature coverage that improves on the legacy methods. The proposed technique defines a data feature coverage metric that numerically measures how well the training and evaluation data sets reflect the features of the entire data set. With this method, the data set can be divided while ensuring coverage of all features of the entire data set, and the model's evaluation results can be analyzed per feature cluster. As a result, by providing feature cluster information for the evaluation results of the trained model, the method reveals which data features affect the trained model. (An illustrative sketch follows this entry.)
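
The abstract does not define the coverage metric precisely; the scikit-learn sketch below illustrates one plausible reading, in which feature vectors within a class are clustered, the split is stratified by cluster so every feature cluster remains represented, and coverage is the fraction of clusters present in a subset. All data and parameters are invented:

```python
# Sketch (not the paper's exact metric): cluster feature vectors within a
# class, split so every cluster is represented, and evaluate per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 128))        # e.g. CNN embeddings of one class
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

# Stratifying on cluster labels keeps every feature cluster in both splits,
# which is the "coverage" idea: no cluster of the full data set is missing.
idx_train, idx_eval = train_test_split(
    np.arange(len(features)), test_size=0.2, stratify=clusters, random_state=0
)

def cluster_coverage(indices, clusters, n_clusters):
    """Fraction of feature clusters represented in a subset."""
    return len(set(clusters[indices])) / n_clusters

print("train coverage:", cluster_coverage(idx_train, clusters, 5))
print("eval  coverage:", cluster_coverage(idx_eval, clusters, 5))

# Per-cluster evaluation of a trained model could then be reported as
# accuracy on each mask = (clusters[idx_eval] == c), for every cluster c.
```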

Development of Machine Learning Ensemble Model using Artificial Intelligence (인공지능을 활용한 기계학습 앙상블 모델 개발)

  • Lee, K.W.; Won, Y.J.; Song, Y.B.; Cho, K.S.
    • Journal of the Korean Society for Heat Treatment, v.34 no.5, pp.211-217, 2021
  • To predict the mechanical properties of secondary hardening martensitic steels, a machine learning ensemble model was established. Based on an ANN (Artificial Neural Network) architecture, several methods were considered to optimize the model. In particular, interaction features, which reflect the interactions between the chemical compositions and processing conditions of a real alloy system, were introduced by means of feature engineering, and K-fold cross-validation coupled with a bagging ensemble was investigated to reduce the R2_score and a factor indicating average learning errors owing to the biased experimental database. (A hedged pipeline sketch follows this entry.)
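
As a hedged analogue of the pipeline described above (the paper's own network and data are not available here), the scikit-learn sketch below combines interaction features, a bagged neural-network ensemble, and K-fold cross-validation on invented composition/processing data; an MLPRegressor stands in for the ANN:

```python
# Hedged analogue of the described pipeline: interaction features between
# composition/processing variables, a bagged neural-network ensemble, and
# K-fold cross-validation. Column meanings and data are hypothetical.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 6))          # e.g. alloy contents + process temps
y = 1200 + 300 * X[:, 0] * X[:, 4] + 50 * X[:, 2] + rng.normal(0, 10, 120)

model = make_pipeline(
    # interaction_only=True adds pairwise products of the input variables
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    BaggingRegressor(
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
        n_estimators=10, random_state=0,
    ),
)

scores = cross_val_score(
    model, X, y, scoring="r2",
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)
print("5-fold R2:", scores.round(3), "mean:", scores.mean().round(3))
```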

Development of an Optimal Convolutional Neural Network Backbone Model for Personalized Rice Consumption Monitoring in Institutional Food Service using Feature Extraction

  • Young Hoon Park; Eun Young Choi
    • The Korean Journal of Food And Nutrition, v.37 no.4, pp.197-210, 2024
  • This study aims to develop a deep learning model to monitor rice serving amounts in institutional foodservice, enhancing personalized nutrition management. The goal is to identify the best convolutional neural network (CNN) for detecting rice quantities on serving trays, addressing the challenge of balanced dietary intake. Both a vanilla CNN and 12 pre-trained CNNs were tested, using features extracted from images of varying rice quantities on white trays. Configurations included optimizers, image generation, dropout, feature extraction, and fine-tuning, with top-1 validation accuracy as the evaluation metric. The vanilla CNN achieved 60% top-1 validation accuracy, while pre-trained CNNs significantly improved performance, reaching up to 90% accuracy. MobileNetV2, suitable for mobile devices, achieved a minimum of 76% accuracy. These results suggest the model can effectively monitor rice servings, with potential for improvement through ongoing data collection and training. This development represents a significant advancement in personalized nutrition management, with high validation accuracy indicating its potential utility in dietary management. Continuous improvement based on expanding datasets promises enhanced precision and reliability, contributing to better health outcomes. (A minimal transfer-learning sketch follows this entry.)
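
A minimal Keras sketch of the feature-extraction configuration the abstract describes is shown below, assuming a MobileNetV2 backbone with ImageNet weights and an invented number of rice-amount classes; it is illustrative, not the study's actual model:

```python
# Hedged sketch of feature extraction with a pre-trained backbone in Keras.
# The number of rice-amount classes (5) and the image size are assumptions.
import tensorflow as tf

NUM_CLASSES = 5
IMG_SIZE = (224, 224, 3)

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE
)
base.trainable = False                      # feature extraction: freeze backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])         # tracks top-1 validation accuracy

# Fine-tuning, as mentioned in the abstract, would then unfreeze the top of
# `base` and recompile with a much lower learning rate before more training.
model.summary()
```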

Model based Facial Expression Recognition using New Feature Space (새로운 얼굴 특징공간을 이용한 모델 기반 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B, v.17B no.4, pp.309-316, 2010
  • This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as its feature space. In order to recognize the six main facial expressions, the proposed method uses a grid approach and establishes a new feature space based on the angles formed by each grid's edges and vertices. The approach is robust against several affine transformations, such as translation, rotation, and scaling, which in other approaches are considered very harmful to the overall accuracy of a facial expression recognition algorithm. The paper also demonstrates how the feature space is created from these angles and how a feature-subset selection process is applied within this space using a wrapper approach. Selected features are classified by SVM and 3-NN classifiers, and the classification results are validated with two-tier cross validation. The proposed method achieves a 94% classification result, and the feature selection algorithm improves results by up to 10% over the full feature set. (A wrapper-selection sketch follows this entry.)
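
The grid-angle extraction itself depends on the face model, so the sketch below only illustrates the wrapper-style feature selection step with an SVM, using scikit-learn's SequentialFeatureSelector as a stand-in for the paper's wrapper procedure and random stand-in angle features:

```python
# Hedged sketch: wrapper-style feature selection over (hypothetical) grid-angle
# features, with an SVM classifier and cross-validated evaluation.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0, 180, size=(300, 40))     # 40 grid angles per face (degrees)
y = rng.integers(0, 6, size=300)            # 6 basic expressions

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Wrapper approach: greedily add the angle features that most improve the
# classifier's cross-validated accuracy.
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=3)
selector.fit(X, y)
X_sel = selector.transform(X)

full = cross_val_score(svm, X, y, cv=5).mean()
sel = cross_val_score(svm, X_sel, y, cv=5).mean()
print(f"accuracy, all 40 angles: {full:.3f}; selected 10 angles: {sel:.3f}")
```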

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.139-157, 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting; the prediction accuracy on the latter portion was used as the fitness value. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and the random subspace ensemble model. (A compact sketch of a GA-optimized KNN ensemble follows this entry.)
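
As a hedged sketch of the core idea (not the paper's exact genetic algorithm, data, or settings), the code below builds a KNN random-subspace ensemble in which a simple GA jointly tunes each base classifier's k value and feature subset against a held-out fitness split:

```python
# Hedged sketch: KNN random-subspace ensemble whose k values and feature
# subsets are tuned by a simple genetic algorithm. Data, GA settings, and
# the fitness split are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

N_MEMBERS, N_FEAT_SUB, POP, GENERATIONS = 10, 8, 20, 15
rng = np.random.default_rng(0)

X, y = make_classification(n_samples=600, n_features=24, n_informative=10,
                           random_state=0)
X_tr, X_fit, y_tr, y_fit = train_test_split(X, y, test_size=0.3, random_state=0)

def random_member():
    """One base classifier = (k value, random feature subset)."""
    return (int(rng.integers(1, 16)),
            rng.choice(X.shape[1], N_FEAT_SUB, replace=False))

def predict(members, X_train, y_train, X_test):
    """Majority vote of the KNN base classifiers, each on its own subspace."""
    votes = np.array([
        KNeighborsClassifier(n_neighbors=k).fit(X_train[:, f], y_train)
        .predict(X_test[:, f]) for k, f in members])
    return (votes.mean(axis=0) > 0.5).astype(int)

def fitness(members):
    return (predict(members, X_tr, y_tr, X_fit) == y_fit).mean()

def mutate(members):
    out = list(members)
    out[rng.integers(len(out))] = random_member()   # replace one member
    return out

population = [[random_member() for _ in range(N_MEMBERS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    children = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        cut = rng.integers(1, N_MEMBERS)            # one-point crossover
        children += [mutate(p1[:cut] + p2[cut:]), mutate(p2[:cut] + p1[cut:])]
    population = parents + children

best = max(population, key=fitness)
print("best ensemble fitness on held-out split:", round(fitness(best), 3))
```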