• Title/Abstract/Keyword: inference model

1,158 search results

Development of a Rule-Based Inference Model for Human Sensibility Engineering System

  • Yang Sun-Mo;Ahn Beumjun;Seo Kwang-Kyu
    • Journal of Mechanical Science and Technology / Vol. 19, No. 3 / pp.743-755 / 2005
  • The Human Sensibility Engineering System (HSES) has been applied to product development to improve customer satisfaction based on ergonomic technology. The system is composed of three parts: human sensibility analysis, an inference mechanism, and presentation technologies. The inference mechanism, which translates human sensibility into design elements, plays an important role in the HSES. In this paper, we propose a rule-based inference model for the HSES. The model is composed of five rules and two inference approaches. Each rule reasons about the design elements for a selected human sensibility word using decision variables obtained from regression analysis (forward inference). These results are then evaluated by backward inference. By comparing the evaluation results, the inference model selects the product design elements that are closest to the customer's feelings and emotions. Finally, the simulation results are tested statistically to ascertain the validity of the model.
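The forward/backward evaluation loop described in this abstract can be illustrated with a small sketch. This is not the authors' implementation; the sensibility words, design elements, and coefficients below are hypothetical placeholders used only to show the control flow of reasoning forward from a sensibility word to candidate design elements and then scoring those candidates by backward inference.

```python
# Minimal sketch of forward/backward rule-based inference (hypothetical data).
import numpy as np

# Hypothetical forward rules: sensibility word -> weights over design elements.
FORWARD_RULES = {
    "luxurious": {"rounded_edge": 0.8, "metallic_finish": 0.6, "dark_color": 0.3},
    "sporty":    {"rounded_edge": 0.2, "metallic_finish": 0.4, "dark_color": 0.7},
}

# Hypothetical backward rules: design element -> contribution to each sensibility word.
BACKWARD_RULES = {
    "rounded_edge":    {"luxurious": 0.7, "sporty": 0.3},
    "metallic_finish": {"luxurious": 0.6, "sporty": 0.5},
    "dark_color":      {"luxurious": 0.2, "sporty": 0.8},
}

def forward_inference(word, top_k=2):
    """Pick the top-k design elements suggested by the forward rule for a word."""
    weights = FORWARD_RULES[word]
    return sorted(weights, key=weights.get, reverse=True)[:top_k]

def backward_inference(elements, word):
    """Score how strongly the chosen elements reproduce the target word."""
    return float(np.mean([BACKWARD_RULES[e][word] for e in elements]))

if __name__ == "__main__":
    word = "luxurious"
    candidates = forward_inference(word)
    score = backward_inference(candidates, word)
    print(f"{word}: elements={candidates}, backward score={score:.2f}")
```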

다중 퍼지 추론 모델에 의한 비선형 시스템의 최적 동정 (The optimal identification of nonlinear systems by means of Multi-Fuzzy Inference model)

  • 정회열;오성권
    • 대한전기학회:학술대회논문집 / 대한전기학회 2001년도 하계학술대회 논문집 D / pp.2669-2671 / 2001
  • In this paper, we propose the design of a Multi-Fuzzy Inference model structure. To determine the structure of the proposed model, the HCM clustering method is used, and the membership function parameters are identified by genetic algorithms. An aggregate performance index with a weighting factor is used to achieve a sound balance between the approximation and generalization abilities of the model. Simplified inference and linear inference are used as the inference methods of the proposed Multi-Fuzzy model, and the standard least squares method is used to estimate its consequence parameters. Finally, numerical data are used to evaluate the proposed Multi-Fuzzy model and to discuss its usefulness.
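As a rough illustration of the simplified-inference and least-squares steps mentioned in this abstract (not the paper's code; the membership parameters and data below are made up), the sketch computes normalized rule firing strengths from Gaussian membership functions and fits constant consequences by ordinary least squares.

```python
# Sketch: simplified fuzzy inference with least-squares consequence estimation.
import numpy as np

def memberships(x, centers, sigma=0.3):
    """Gaussian membership of scalar x in each fuzzy set (hypothetical parameters)."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

def firing_strengths(X, centers, sigma=0.3):
    """Normalized firing strength of each rule for each 1-D input sample."""
    W = np.array([memberships(x, centers, sigma) for x in X])
    return W / W.sum(axis=1, keepdims=True)

# Hypothetical data: y = sin(pi * x) sampled on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=200)
y = np.sin(np.pi * X)

centers = np.array([0.0, 0.5, 1.0])                   # rule centers (e.g., from HCM clustering)
W = firing_strengths(X, centers)                       # premise part
consequents, *_ = np.linalg.lstsq(W, y, rcond=None)    # constant consequences (simplified inference)
y_hat = W @ consequents
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```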


Fuzzy Petri Nets를 이용한 퍼지 추론 시스템의 모델링 및 추론기관의 구현 (A Model with an Inference Engine for a Fuzzy Production System Using Fuzzy Petri Nets)

  • 전명근
    • 전자공학회논문지B / Vol. 29B, No. 7 / pp.30-41 / 1992
  • As a general model of rule-based systems, we propose a model for a fuzzy production system with chaining rules, together with an inference engine associated with the model. The concept of so-called 'fuzzy Petri nets' is used to model the fuzzy production system, and the inference engine is designed to handle inexact knowledge. Fuzzy logic is adopted to represent vagueness in the rules, and a certainty factor is used to express the uncertainty of each rule given by a human expert. Parallel inference schemes are devised by transforming the fuzzy Petri nets into matrix formulas. Further, the inference engine mechanism under Mamdani's implication method can be described by a simple algebraic formula, which makes real-time inference possible.
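The matrix-style parallel inference mentioned in this abstract can be sketched as firing all enabled rules at once against a vector of place truth degrees. The two-rule net and certainty factors below are invented for illustration, and the firing rule (minimum antecedent truth scaled by the rule's certainty factor) is a commonly used fuzzy-Petri-net convention rather than the exact formulation of the paper.

```python
# Sketch: one parallel inference step on a fuzzy Petri net via a rule (transition) matrix.
import numpy as np

# Rows = rules (transitions), columns = antecedent places; entries are certainty factors.
RULES = np.array([
    [0.9, 0.8, 0.0],    # rule 1 uses places p0, p1
    [0.0, 0.7, 0.95],   # rule 2 uses places p1, p2
])

truth = np.array([0.6, 0.8, 0.4])  # current truth degrees of places p0..p2

def fire(rules, truth):
    """Fire all enabled rules in parallel.

    Conclusion degree of each rule = min(truth of its antecedent places) * certainty factor.
    """
    masked = np.where(rules > 0, truth, 1.0)   # ignore places a rule does not use
    antecedent = masked.min(axis=1)
    return antecedent * rules.max(axis=1)

print("conclusion degrees:", fire(RULES, truth))
```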


유전자 알고리즘과 하중값을 이용한 퍼지 시스템의 최적화 (Optimization of Fuzzy Systems by Means of GA and Weighting Factor)

  • 박병준;오성권;안태천;김현기
    • 대한전기학회논문지:전력기술부문A / Vol. 48, No. 6 / pp.789-799 / 1999
  • In this paper, the optimization of fuzzy inference systems is proposed for the fuzzy modeling of nonlinear systems. A fuzzy model needs to be identified and optimized by definite and systematic methods, because a fuzzy model is primarily acquired from expert experience. The proposed rule-based fuzzy model carries out system structure and parameter identification using the HCM (Hard C-Means) clustering method, genetic algorithms, and a fuzzy inference method. Two types of inference methods are used: simplified inference and linear inference. Nonlinear systems are expressed through the identification of structure, such as the input variables and the division of fuzzy input subspaces, and through the identification of the parameters of the fuzzy model. Genetic algorithms are used to identify the premise parameters of the fuzzy model, and the standard least squares method with Gaussian elimination is used to identify the optimal consequence parameters. Also, a performance index with a weighting factor is proposed to achieve a balance between the performance of the fuzzy model on the training and testing data sets, which enhances the approximation and predictive performance of the fuzzy system. Time series data from a gas furnace and a sewage treatment process are used to evaluate the performance of the proposed model.
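The weighted performance index described in this abstract amounts to a one-line fitness function combining training and testing error. The sketch below (with made-up error values; theta, the weighting factor, is the tunable balance) shows how two candidate models would be compared under such an index.

```python
# Sketch: aggregate performance index with a weighting factor theta.
import numpy as np

def performance_index(y_true, y_pred):
    """Mean squared error as the basic performance measure."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def aggregate_index(pi_train, pi_test, theta=0.5):
    """Weighted combination of training and testing errors (0 <= theta <= 1)."""
    return theta * pi_train + (1.0 - theta) * pi_test

# Hypothetical use: compare two candidate models by their aggregate index.
pi_a = aggregate_index(pi_train=0.012, pi_test=0.045, theta=0.5)
pi_b = aggregate_index(pi_train=0.020, pi_test=0.025, theta=0.5)
print("model A:", pi_a, "model B:", pi_b)  # a GA would keep the model with the smaller index
```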


Reject Inference of Incomplete Data Using a Normal Mixture Model

  • Song, Ju-Won
    • 응용통계연구 / Vol. 24, No. 2 / pp.425-433 / 2011
  • Reject inference in credit scoring is a statistical approach to adjust for the nonrandom sample bias due to rejected applicants. Function estimation approaches are based on the assumption that rejected applicants need not be included in the estimation when the missing-data mechanism is missing at random. On the other hand, the density estimation approach using mixture models indicates that reject inference should include rejected applicants in the model. When mixture models are chosen for reject inference, it is often assumed that the data follow a normal distribution. If the data include missing values, applying the normal mixture model only to fully observed cases may cause another sample bias due to the missing values. We extend reject inference with a multivariate normal mixture model to handle incomplete characteristic variables. A simulation study shows that including incomplete characteristic variables outperforms the function estimation approaches.
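The density-estimation view of reject inference can be sketched as fitting a two-component normal mixture to all applicants, accepted and rejected alike, and reading the posterior component probabilities as scores for the rejects. The data below are simulated, and this complete-case sketch does not implement the paper's actual contribution, namely the EM extension to applicants with missing characteristic variables.

```python
# Sketch: density-estimation reject inference with a two-component normal mixture
# (complete cases only; handling missing characteristic variables is not shown).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
accepted = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(300, 2))    # mostly "good" risks
rejected = rng.normal(loc=[-1.0, -1.0], scale=1.0, size=(150, 2))  # unlabeled rejects

# Fit the mixture on ALL applicants, not just the accepted ones.
X = np.vstack([accepted, rejected])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Posterior component probabilities act as (unsupervised) good/bad scores for the rejects.
scores = gmm.predict_proba(rejected)
print("mean posterior of rejects per component:", scores.mean(axis=0))
```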

모형의 복잡성, 구조 및 목적함수가 모형 검정에 미치는 영향 (Effects of Model Complexity, Structure and Objective Function on Calibration Process)

  • Choi, Kyung Sook
    • 한국농공학회지 / Vol. 45, No. 4 / pp.89-97 / 2003
  • Using inference models developed for estimating the parameters necessary to implement the Runoff Block of the Stormwater Management Model (SWMM), a number of alternative inference scenarios were developed to assess the influence of inference model complexity and structure on the calibration of the catchment modelling system. These inference models ranged from the assumption of a spatially invariant value (catchment average) to spatially variable values, with each subcatchment having its own unique values. Furthermore, the influence of different measures of deviation between the recorded information and the simulation predictions was considered. The results of these investigations indicate that model performance is influenced more by model structure than by complexity, and that the control parameter values depend strongly on the selected objective function, which was the most influential factor for both the initial estimates and the final results.
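The point about the choice of objective function can be illustrated with a tiny calibration sketch. The "model" below is a made-up one-parameter runoff response, not SWMM, and the two deviation measures simply show how different objectives can prefer different parameter values for the same observed hydrograph.

```python
# Sketch: how the objective function shapes calibration (toy model, not SWMM).
import numpy as np

def toy_runoff(k, t):
    """Hypothetical one-parameter runoff hydrograph."""
    return 10.0 * (t / k) * np.exp(1.0 - t / k)

t = np.linspace(0.1, 10.0, 200)
observed = toy_runoff(2.0, t) + np.random.default_rng(2).normal(0, 0.3, t.size)

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

def peak_error(sim, obs):
    return abs(sim.max() - obs.max())

candidates = np.linspace(1.0, 4.0, 61)
best_rmse = min(candidates, key=lambda k: rmse(toy_runoff(k, t), observed))
best_peak = min(candidates, key=lambda k: peak_error(toy_runoff(k, t), observed))
print("best k by RMSE:", best_rmse, "| best k by peak error:", best_peak)
```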

퍼지 추론 방법을 이용한 퍼지 동정과 유전자 알고리즘에 의한 이의 최적화 (Fuzzy Identification by means of Fuzzy Inference Method and its Optimization by GA)

  • 박병준;박춘성;안태천;오성권
    • 대한전기학회:학술대회논문집 / 대한전기학회 1998년도 하계학술대회 논문집 B / pp.563-565 / 1998
  • In this paper, we propose an optimization method for fuzzy models of complex and nonlinear systems. In fuzzy modeling, premise identification is very important for describing the characteristics of a given unknown system. The proposed fuzzy model carries out system structure and parameter identification using the fuzzy inference method and genetic algorithms. The inference methods for the fuzzy model presented in this paper include simplified inference and linear inference. Time series data from a gas furnace and a sewage treatment process are used to evaluate the performance of the proposed model. Also, a performance index with a weighting value is proposed to achieve a balance between the performance on the training and testing data.
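A minimal sketch of the GA-style premise identification loop described in this abstract is given below. It is not the authors' implementation: the objective (fitting membership-function centers of a two-rule model to toy data) is invented, and a plain mutation-and-selection loop stands in for a full genetic algorithm.

```python
# Sketch: evolving membership-function centers for a two-rule fuzzy model (toy example).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 200)
y = np.where(X < 0.5, 1.0, 3.0) + rng.normal(0, 0.1, X.size)  # toy target

def predict(centers, x, sigma=0.2):
    """Simplified fuzzy inference with fixed constant consequences."""
    w = np.exp(-((x[:, None] - centers) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    consequents = np.array([1.0, 3.0])
    return w @ consequents

def fitness(centers):
    return -np.mean((predict(centers, X) - y) ** 2)  # higher is better

# Very small (mu + lambda)-style evolutionary loop over the two premise centers.
population = rng.uniform(0, 1, size=(20, 2))
for _ in range(50):
    children = population + rng.normal(0, 0.05, population.shape)   # mutation
    pool = np.vstack([population, children])
    pool = pool[np.argsort([-fitness(c) for c in pool])]            # keep the fittest
    population = pool[:20]

print("best premise centers:", np.sort(population[0]))
```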


Textual Inversion을 활용한 Adversarial Prompt 생성 기반 Text-to-Image 모델에 대한 멤버십 추론 공격 (Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion)

  • 오윤주;박소희;최대선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.1111-1123 / 2023
  • As generative models have advanced, research on threats against them has also been actively pursued. This paper introduces a new method for membership inference attacks against text-to-image models. Existing membership inference attacks against text-to-image models infer membership by generating a single image from the caption of the query image. In contrast, this paper proposes a membership inference attack that uses an embedding personalized to the query image via Textual Inversion and effectively generates multiple images through an adversarial prompt generation method. In addition, we conduct the first membership inference attack against the Stable Diffusion model, one of the most prominent text-to-image models, and achieve an accuracy of up to 1.00.
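The attack's decision rule, as described above, reduces to a short skeleton: generate several images from a prompt personalized to the query image, measure their similarity to that query, and threshold the aggregate score. Everything in the sketch below (the stub generator, the embedding function, the prompt token, and the threshold) is a placeholder; in practice the generator would be a text-to-image pipeline such as Stable Diffusion conditioned on a Textual Inversion embedding, and the embedder would be a real image encoder.

```python
# Skeleton of the membership decision rule (all components below are stand-ins).
import numpy as np

rng = np.random.default_rng(4)

def embed(image):
    """Stand-in for an image feature extractor (e.g., an image encoder); returns a unit vector."""
    return image / np.linalg.norm(image)

def generate_images(personalized_prompt, n=8):
    """Stand-in for the text-to-image model called with an adversarially optimized prompt."""
    return [rng.normal(size=512) for _ in range(n)]

def membership_score(query_image, personalized_prompt, n=8):
    q = embed(query_image)
    sims = [float(embed(img) @ q) for img in generate_images(personalized_prompt, n)]
    return float(np.mean(sims))     # average cosine similarity over several generations

query = rng.normal(size=512)
score = membership_score(query, "<query-token> photo")   # hypothetical inverted token
print("member" if score > 0.5 else "non-member", f"(score={score:.3f})")
```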

퍼지추론을 이용한 어류 활동상태 기반의 지능형 자동급이 모델 (Fish Activity State based an Intelligent Automatic Fish Feeding Model Using Fuzzy Inference)

  • 최한석;최정현;김영주;신영학
    • 한국콘텐츠학회논문지 / Vol. 20, No. 10 / pp.167-176 / 2020
  • Automated fish feeding devices currently used in Korea supply a fixed amount of feed to the tank at scheduled times. This reduces the labor cost of fish farm management, which is expensive due to an aging workforce, but it is very difficult to intelligently adjust the amount of costly feed, which is a decisive factor in aquaculture productivity. In this paper, we propose the FIIFF inference model (Fuzzy Inference based Intelligent Fish Feeding Model), a fuzzy-inference-based intelligent automatic feeding model that resolves these problems of existing automatic feeders and maximizes the efficiency of feed supply while maintaining an appropriate growth rate of the farmed fish. Because the proposed FIIFF model calculates the feed amount from the current growing-environment conditions and the real-time activity state of the fish, the appropriateness of the feed amount is very high. In the feed-amount experiments, the proposed FIIFF inference model reduced feed by 14.8% compared with the amount actually supplied at a fish farm over eight months.
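A rough sketch of how such a fuzzy feeding rule base might look is shown below. The input variables (water temperature and an activity level), the membership functions, and the rule weights are invented for illustration and are not the FIIFF model's actual parameters.

```python
# Sketch: tiny fuzzy inference for a feed-amount decision (illustrative parameters only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def feed_amount(temp_c, activity):
    """activity in [0, 1]; returns feed as a fraction of the nominal ration."""
    temp_ok  = tri(temp_c, 15, 22, 29)        # fish feed well near an optimal temperature
    act_low  = tri(activity, -0.1, 0.0, 0.5)
    act_high = tri(activity, 0.5, 1.0, 1.1)

    # Rule 1: good temperature AND high activity -> full ration.
    # Rule 2: low activity -> reduced ration (30% of nominal).
    w1, w2 = min(temp_ok, act_high), act_low
    if w1 + w2 == 0.0:
        return 0.5                             # fallback ration when no rule fires
    return (w1 * 1.0 + w2 * 0.3) / (w1 + w2)   # weighted-average defuzzification

print(feed_amount(22.0, 0.9))  # near-optimal conditions -> close to full ration
print(feed_amount(22.0, 0.1))  # sluggish fish -> reduced ration
```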

Performance analysis of local exit for distributed deep neural networks over cloud and edge computing

  • Lee, Changsik;Hong, Seungwoo;Hong, Sungback;Kim, Taeyeon
    • ETRI Journal / Vol. 42, No. 5 / pp.658-668 / 2020
  • In edge computing, most procedures, including data collection, data processing, and service provision, are handled at edge nodes rather than in the central cloud. This decreases the processing burden on the central cloud, enabling fast responses to end-device service requests while also reducing bandwidth consumption. However, edge nodes have restricted computing, storage, and energy resources for supporting computation-intensive tasks such as deep neural network (DNN) inference. In this study, we analyze the effect of models with single and multiple local exits on DNN inference in an edge-computing environment. Our test results show that a single-exit model performs better than a multi-exit model at all exit points with respect to the number of locally exited samples, inference accuracy, and inference latency. These results signify that higher accuracy can be achieved with less computation when a single-exit model is adopted. In an edge computing infrastructure, it is therefore more efficient to adopt a DNN model with only one or a few exit points to provide a fast and reliable inference service.
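To make the single-local-exit idea concrete, here is a small sketch (not the ETRI implementation; the layer sizes and the confidence threshold are arbitrary). The edge node runs the backbone up to one exit branch and forwards the sample to the "cloud" layers only when the local exit's softmax confidence falls below a threshold.

```python
# Sketch: a DNN with one local (edge) exit and a cloud-side final classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleExitNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.edge_backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.local_exit    = nn.Linear(hidden, n_classes)                  # runs on the edge node
        self.cloud_layers  = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                           nn.Linear(hidden, n_classes))   # runs in the cloud

    def forward(self, x, threshold=0.8):
        h = self.edge_backbone(x)
        edge_logits = self.local_exit(h)
        confidence = F.softmax(edge_logits, dim=-1).max(dim=-1).values
        if confidence.item() >= threshold:       # confident enough: answer locally
            return edge_logits, "edge"
        return self.cloud_layers(h), "cloud"     # otherwise offload the remaining layers

net = SingleExitNet()
sample = torch.randn(1, 32)
logits, where = net(sample)
print("answered at:", where)
```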