• Title/Summary/Keyword: Inference system

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming increasingly important. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. For multidimensional data, correlations between variables must be considered; existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. Time series data is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural network techniques are now being actively applied. Statistical methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. Regression-based anomaly detection learns a regression model based on parametric statistics and detects abnormality by comparing predicted and actual values; its performance drops when the model is not robust or the data contains noise or outliers, and it is restricted to training data free of noise and outliers. An autoencoder built with artificial neural networks is trained to reconstruct its input as closely as possible. It has many advantages over probability-based and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probabilistic or linearity assumptions, and it can be trained in an unsupervised manner without labeled data. However, it is limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the dimensionality. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we apply a Multimodal Autoencoder (MAE) to address the limitations in identifying local outliers in multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and images; the different modalities share the autoencoder's bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the data dimensionality. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was examined for the proposed and comparison models. Reconstruction performance differed by variable; reconstruction worked well for the Memory, Disk, and Network modalities in all three autoencoder models because their loss values were small. The Process modality showed no significant difference across the three models, while the CPU modality showed excellent performance with CMAE. ROC curves were prepared to evaluate anomaly detection performance for the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the resulting dimensional increase can slow down inference. The proposed model is therefore easy to apply in practice in terms of inference speed and model management.
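A minimal sketch of a conditional multimodal autoencoder in the spirit of the CMAE described above (Python/PyTorch). The modal split, layer sizes, and sinusoidal time encoding are illustrative assumptions, not the authors' exact architecture; anomalies would be flagged by thresholding the reconstruction error learned from normal data.

```python
# Sketch: conditional multimodal autoencoder with time used as the condition.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim, latent_dim=8):
        super().__init__()
        # One encoder per modality (e.g., CPU, Memory, Disk, Network, Process).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d + cond_dim, 16), nn.ReLU()) for d in modal_dims]
        )
        # Shared bottleneck forces the modalities to learn cross-correlations.
        self.bottleneck = nn.Sequential(
            nn.Linear(16 * len(modal_dims), latent_dim), nn.ReLU()
        )
        # One decoder per modality, again conditioned on time.
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim + cond_dim, 16), nn.ReLU(), nn.Linear(16, d))
             for d in modal_dims]
        )

    def forward(self, modal_inputs, cond):
        # cond: time-of-day encoding, e.g., [sin(2*pi*t/24), cos(2*pi*t/24)].
        hs = [enc(torch.cat([x, cond], dim=1)) for enc, x in zip(self.encoders, modal_inputs)]
        z = self.bottleneck(torch.cat(hs, dim=1))
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# Example: five modalities whose widths sum to the 41 monitored variables
# (the split itself is an assumption), conditioned on a 2-D time encoding.
model = CMAE(modal_dims=[8, 8, 8, 8, 9], cond_dim=2)
```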

Auxiliary Reinforcement Method for the Safety of Tunnelling Face (터널 막장안정성에 따른 보강공법 적용)

  • Kim, Chang-Yong;Park, Chi-Hyun;Bae, Gyu-Jin;Hong, Sung-Wan;Oh, Myung-Ryul
    • Journal of Korean Tunnelling and Underground Space Association / v.2 no.2 / pp.11-21 / 2000
  • Tunnelling has increased greatly because less land is available, as population growth in metropolitan areas has accelerated faster than the development of the cities. In tunnelling, measures often have to be taken without confirmation for abnormalities such as divergent movement of the surrounding rock mass, growing cracks in the shotcrete, and yielding of rockbolts. In such cases, the judgment of experienced engineers in selecting countermeasures is important and has allowed these situations to be overcome at many construction sites. However, the decreasing number of such experienced engineers makes it necessary to develop a new system that assists in selecting countermeasures for abnormalities without direct experience of similar tunnelling sites. In this study, after many tunnel reinforcement methods were surveyed and their detailed applications studied, an expert system was developed to predict tunnel safety and choose a proper tunnel reinforcement method using fuzzy quantification theory and fuzzy inference rules based on a tunnel information database. The expert system has two main parts, a pre-module and a post-module. The pre-module decides the tunnel information input items based on tunnel face mapping information that can easily be obtained on site. Then, using fuzzy quantification theory II, fuzzy membership functions are composed and the tunnel safety level is inferred through these functions. The predicted reinforcement levels were very similar to the measured ones; in-situ data were obtained at three tunnel sites, including a subway tunnel under the Han River. This system will be very helpful in making the most of in-situ data and in suggesting the proper applicability of tunnel reinforcement methods, developing more reasonable tunnel support methods where guidance would otherwise depend on a few experienced experts.
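A minimal sketch of fuzzy-membership-based safety inference of the kind the abstract describes (Python). The input items, triangular membership functions, and weights are illustrative assumptions and stand in for the paper's calibrated fuzzy quantification theory II model.

```python
# Sketch: infer a fuzzy tunnel-safety score from simple face-mapping inputs.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tunnel_safety_level(rqd, joint_spacing_m, water_inflow_lpm):
    """Return a fuzzy safety score in [0, 1]; higher means a safer face."""
    # Degree to which each observation looks "favourable".
    good_rock = triangular(rqd, 25, 75, 100)             # rock quality designation (%)
    wide_joints = triangular(joint_spacing_m, 0.05, 0.6, 1.0)
    dry_face = triangular(water_inflow_lpm, -50, 0, 50)   # peaks at zero inflow
    # Weighted aggregation stands in for the fuzzy quantification step.
    weights = (0.5, 0.3, 0.2)
    return weights[0] * good_rock + weights[1] * wide_joints + weights[2] * dry_face

print(tunnel_safety_level(rqd=60, joint_spacing_m=0.3, water_inflow_lpm=10))
```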

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.129-143 / 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their business, and they also collect, store, and manage at their own discretion many web documents published on the web. Along with this increase in the documents that companies must manage, the need for a solution that locates documents more accurately among a huge number of information sources has grown, and the search engine solution market is expanding accordingly. The most important function a search engine provides is locating accurate documents within huge information sources. The major metric for evaluating search accuracy is relevance, which consists of two measures, precision and recall. Precision is a measure of exactness, i.e., what percentage of the retrieved documents are actually relevant, whereas recall is a measure of completeness, i.e., what percentage of the relevant documents are retrieved. The two measures are weighted differently according to the applied domain: for exhaustive searches such as patent documents and research papers it is better to increase recall, whereas when the amount of information is small it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method quickly locates all matching web documents even when many search words are entered, but it has a fundamental limitation: it does not consider the user's search intention, so it retrieves irrelevant results along with relevant ones, and additional time and effort is needed to pick out the relevant results from everything the engine returns. That is, keyword search can increase recall, but it is difficult to locate the web documents a user actually wants because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functions of existing search engine solutions, enabling a search engine to provide optimal results by inferring the user's search intention. To that end, we build an ontology containing the concepts and relationships of a specific domain. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, searching for documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets users interact with the system effectively. In future research, we will validate the performance of our prototype system by comparing it with other search engine solutions and will extend the applied domain to other information search areas such as portals.
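A minimal sketch of ontology-based query expansion as described above (Python). The tiny in-memory synonym "ontology" and the patent-domain terms are illustrative assumptions, not the paper's actual ontology or its integration with a search engine.

```python
# Sketch: expand user keywords with ontology synonyms, then keyword-match documents.
SYNONYMS = {
    "cellphone": {"mobile phone", "handset", "smartphone"},
    "battery": {"secondary cell", "accumulator"},
}

def expand_query(keywords):
    """Expand user keywords with ontology synonyms to better reflect search intent."""
    expanded = set()
    for kw in keywords:
        expanded.add(kw)
        expanded.update(SYNONYMS.get(kw, set()))
    return expanded

def search(documents, keywords):
    """Return documents matching any expanded keyword (stand-in for the
    keyword-matching core of an existing search engine)."""
    terms = expand_query(keywords)
    return [doc for doc in documents if any(t in doc.lower() for t in terms)]

docs = ["Smartphone battery charging circuit", "Handset antenna design", "Tire tread pattern"]
print(search(docs, ["cellphone"]))  # matches the first two documents
```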

Inferential Structure and Reality Problem in Diagnosis of Oriental Medicine (한의 진단의 추론형식과 실재성)

  • Park, Geong-Mo;Choi, Seong-Hoon;Ahn, Gyu-Seok
    • Journal of The Association for Neo Medicine / v.2 no.1 / pp.55-84 / 1997
  • The inferential structure and the problem of reality are serious issues for O.M. (oriental medicine). This study analyzes the issue through a philosophical and historical comparison of W.M.M. (Western modern medicine) and O.M. First, I presuppose some basic ideas. The first is the division between 'the philosophy of medicine' and 'the medicine itself'. The second is 'visibility', which discriminates between 'the abstract concept' and 'the concrete object' in diagnostic terminology. The third is the separation of disease as entity from disease as phenomenon. Finally, there is the distinction between the cause of disease and the nature of disease. Through these basic concepts, this study analyzes O.M.'s diagnostic methodology, 'pattern identification of the S.A.S. (signs and symptoms)'. The results are as follows: 1. O.M. views disease as a phenomenon, so the S.A.S., which is visible, is the disease itself; through the analysis and inference of the S.A.S., 證 (zheng), the essence, is derived. 2. 證 (zheng) can be considered 'the abstract concept' reflecting the essence of a disease. 3. 證 (zheng) is arrived at not through causal reasoning but rather by analogical reasoning. 4. 證 (zheng) is 'the non-random correlative combination of S.A.S.', a pattern; these patterns secure the abstract deduction in reality. That is, causality, positivism, the view of disease as an entity, and anatomical knowledge are traits peculiar to W.M.M.; they cannot be applied universally to every medical system, nor do they indicate the superiority or inferiority of any medical system. 5. 證 (zheng) summarizes the patient's condition simultaneously with the S.A.S. However, 證 (zheng) does not necessarily indicate knowledge of the actual internal organs. That is, early in O.M.'s history, diagnostic terminologies including 證 (zheng) were analogical reflections of a naive knowledge of internal organs and external environmental factors; later, this naive knowledge changed into a new nature, an abstract concept. The confusion of the concept of disease, the indiscriminate acceptance of Western anatomical knowledge, and O.M.'s theoretical evolution are among the challenges facing modern O.M. To find solutions, this study looks at the sequence of the birth of W.M.M. and then compares its system with the O.M. system. It is recommended that O.M. diagnostics pay close attention to the ambiguity of its diagnostic methodology in order to develop further. At present, the concepts and systems peculiar to O.M. cannot be explained in common language, but O.M. practitioners cannot persist in this manner any longer. Along with the internal development of O.M., an adjustment of O.M.'s diagnostic terminology needs to be adopted.

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although the valuation of specific companies or projects has been carried out since the early 2000s, mainly in developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have been adopted only gradually. There are several online systems that qualitatively evaluate the grade of a technology or the rating of a patent, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and has recently been spreading. In this study, we introduce the types of methodologies and valuation models, the reference information supporting them, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. We examine how these models and the related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability of similar companies or the industry sector, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data-driven sources, or if the estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present a highly accurate value range in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and primary variables presented in this paper, the STAR-Value system is intended to be utilized more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-selection-based market share prediction module, and so on. In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can be used to validate and apply in practice the theoretical foundations of the technology valuation field, and it is expected to be utilized in various areas of technology commercialization.
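A minimal sketch of the two income-approach formulas named above, the discounted cash flow (DCF) method and the relief-from-royalty method (Python). The cash flows, royalty rate, tax rate, and discount rate are illustrative assumptions, not values produced by the STAR-Value system.

```python
# Sketch: present value via DCF and via relief-from-royalty.
def dcf_value(free_cash_flows, wacc):
    """Present value of projected cash flows discounted at the WACC."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(free_cash_flows, start=1))

def relief_from_royalty(sales_forecast, royalty_rate, tax_rate, wacc):
    """Present value of the after-tax royalties the owner is 'relieved' from paying."""
    royalties = [s * royalty_rate * (1 - tax_rate) for s in sales_forecast]
    return dcf_value(royalties, wacc)

sales = [100.0, 120.0, 140.0]  # projected sales over the technology's remaining life
print(round(dcf_value([10, 12, 14], wacc=0.12), 2))
print(round(relief_from_royalty(sales, royalty_rate=0.03, tax_rate=0.22, wacc=0.12), 2))
```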

Toxicity and Endocrine Disrupting Effect of Parabens (파라벤류의 독성과 내분비계장애 효과)

  • Ahn, Hae-Sun;Nah, Won-Heum;Lee, Jae-Eun;Oh, Yeong-Seok;Gye, Myung-Chan
    • Korean Journal of Environmental Biology / v.27 no.4 / pp.323-333 / 2009
  • Parabens are alkyl esters of p-hydroxybenzoic acid that are widely used as preservatives in foods, cosmetics, and pharmaceutical products. Absorbed parabens are rapidly metabolized and excreted, and in practice the human body is exposed to a complex mixture of parabens. Safety assessments at various toxicological end points have revealed that parabens have little acute, subacute, or chronic toxicity. Some reports have argued that, because parabens have estrogenic activity, they are associated with the incidence of breast cancer through dermal absorption from cosmetics. It has been inferred that the antiandrogenic activity of parabens may cause lesions of the male reproductive system, but there are also contrary findings. At the cellular level, parabens may inhibit the mitochondrial function of sperm and androgen production in the testis, although here too contrary results exist. Parabens appear to have little or no toxicity in embryonic development. Parabens can cause hemolysis, changes in mitochondrial membrane permeability, and apoptosis, suggesting cellular toxicity. Parabens have evoked endocrine disruption in several fish species and have toxic effects on small invertebrates and microbes; therefore, parabens should be considered potentially toxic chemicals in the freshwater environment. In conclusion, although parabens may be considered chemicals of low toxicity, more definite data are required concerning their endocrine-disrupting effects on the human body and aquatic animals according to the route and duration of exposure as well as residual paraben concentrations.

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.10 / pp.1195-1206 / 2015
  • For the management of knowledge systems, systems that automatically infer and manage scalable knowledge are required. Most of these systems use ontologies in order to exchange knowledge between machines and infer new knowledge, so approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach for rule-based reasoning over scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce but in distributed memory on a cluster. To perform efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting scalable ontology triples into small sets according to each reasoning rule and loading these triple sets into distributed memory. Second, a rule execution order and iteration conditions based on the dependencies and correlations among the SHIF rules are defined. Finally, we explain the operations adapted to execute the rules, which are based on the reasoning algorithms. To evaluate the suggested methods, we perform an experiment against WebPIE, a representative cluster-based ontology reasoner, using the LUBM set, which is standard data used to evaluate ontology inference and search speed. The proposed approach improves throughput by 28,400% (157k/sec) over WebPIE (553/sec) on LUBM.
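A minimal sketch of one rule-application step on Spark in the spirit of the approach above (Python/PySpark): triples are kept in distributed memory and joined to derive new triples. The toy triple set and the single subClassOf-transitivity rule are illustrative assumptions; the full reasoner iterates over the SHIF rule set in a defined order.

```python
# Sketch: derive new subClassOf triples with an in-memory RDD self-join.
from pyspark import SparkContext

sc = SparkContext(appName="shif-rule-sketch")

# (subject, predicate, object) triples already loaded into cluster memory.
triples = sc.parallelize([
    ("Student", "subClassOf", "Person"),
    ("GradStudent", "subClassOf", "Student"),
    ("Person", "subClassOf", "Agent"),
]).cache()

# Rule: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
sub = triples.filter(lambda t: t[1] == "subClassOf")
by_object = sub.map(lambda t: (t[2], t[0]))    # key by the superclass B
by_subject = sub.map(lambda t: (t[0], t[2]))   # key by the subclass B
derived = by_object.join(by_subject).map(lambda kv: (kv[1][0], "subClassOf", kv[1][1]))

# Keep only triples not already known; the full reasoner repeats this per rule
# until no new triples appear.
new_triples = derived.subtract(triples).distinct()
print(new_triples.collect())
```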

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.3 / pp.307-319 / 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform MapReduce-based reasoning for scalable ontologies. In this paper, however, we propose a novel approach for scalable Web Ontology Language (OWL) Horst Lite ontology reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data is inferred. Therefore, when scalable ontology reasoning is performed on hard disks, the ontology reasoner suffers from performance limitations. To overcome this drawback, we propose an approach that loads the ontologies into distributed cluster memories using Spark, a memory-based distributed computing framework, which then executes the ontology reasoning. To implement an appropriate OWL Horst Lite ontology reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and subsequently handles the data in the distributed memories. We used the Lehigh University Benchmark (LUBM), which is used to evaluate ontology inference and search speed, for the experimental evaluation, applying the proposed methods to LUBM8000 (1.1 billion triples, 155 gigabytes). Compared with WebPIE, a representative MapReduce-based scalable ontology reasoner, the proposed approach showed a throughput improvement of 320% (62k/s) over WebPIE (19k/s).
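A minimal sketch of the in-memory fixpoint loop that this style of reasoning relies on (Python): rules are applied repeatedly until no new triples are inferred. The `apply_rules` function, here limited to subClassOf transitivity, is an illustrative assumption rather than the paper's OWL Horst Lite rule set.

```python
# Sketch: iterate rule application in memory until the triple set stops growing.
def closure(triples, apply_rules):
    """Compute the fixpoint of `apply_rules` starting from the given triples."""
    known = set(triples)
    while True:
        new = apply_rules(known) - known
        if not new:          # fixpoint reached: no more inferred data
            return known
        known |= new         # keep everything in memory for the next round

def apply_rules(triples):
    """Example rule set: subClassOf transitivity only (illustrative)."""
    sup = {}
    for s, p, o in triples:
        if p == "subClassOf":
            sup.setdefault(s, set()).add(o)
    return {(a, "subClassOf", c) for a, bs in sup.items() for b in bs for c in sup.get(b, set())}

print(sorted(closure(
    [("A", "subClassOf", "B"), ("B", "subClassOf", "C"), ("C", "subClassOf", "D")],
    apply_rules)))
```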

Automatic TV Program Recommendation using LDA based Latent Topic Inference (LDA 기반 은닉 토픽 추론을 이용한 TV 프로그램 자동 추천)

  • Kim, Eun-Hui;Pyo, Shin-Jee;Kim, Mun-Churl
    • Journal of Broadcast Engineering / v.17 no.2 / pp.270-283 / 2012
  • With the advent of multi-channel TV, IPTV, and smart TV services, excessive amounts of TV program content have become available to users, which makes it very difficult for TV viewers to find and consume their preferred programs. Automatic TV program recommendation is therefore an important issue for future intelligent TV services, allowing improved access to preferred TV content. In this paper, we present a recommendation model based on statistical machine learning with a collaborative filtering concept that takes into account both public and personal preferences for TV programs. Users' preferences for TV programs are modeled as latent topic variables using LDA (Latent Dirichlet Allocation), which has recently been applied in various application domains. To apply LDA appropriately to TV recommendation, the topics TV viewers are interested in are regarded as latent topics in LDA, and an asymmetric Dirichlet distribution is applied, which can reveal the diversity of viewers' interests in topics, based on analysis of real TV usage history data. The experimental results show that, for the TV usage history data of similar-taste user groups, the proposed LDA-based TV recommendation method yields an average precision of 66.5% for the top 5 ranked TV programs in weekly recommendation and an average precision of 77.9% for the top 5 ranked TV programs in bimonthly recommendation.
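A minimal sketch of fitting an LDA topic model with an asymmetric Dirichlet prior, as the abstract describes for modeling viewers' latent interests (Python/gensim). The toy viewing histories, where programs play the role of words, are illustrative assumptions.

```python
# Sketch: LDA with an asymmetric prior over viewers' latent topics.
from gensim import corpora, models

# Each viewer's watch history, with TV programs playing the role of words.
histories = [
    ["news9", "documentary_nature", "news9", "current_affairs"],
    ["drama_a", "drama_b", "music_show", "drama_a"],
    ["kids_cartoon", "kids_quiz", "drama_a"],
]

dictionary = corpora.Dictionary(histories)
corpus = [dictionary.doc2bow(h) for h in histories]

# alpha='asymmetric' gives each latent topic its own prior weight, reflecting that
# a viewer's interests are not spread evenly across topics.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      alpha="asymmetric", passes=10, random_state=0)

# Infer a viewer's topic mixture; programs would then be ranked by how well they
# match the dominant topics (the collaborative-filtering-style recommendation step).
print(lda.get_document_topics(dictionary.doc2bow(["news9", "current_affairs"])))
```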

Semantic Inference System Using Backward Chaining (후방향 추론기법을 이용한 시멘틱 추론 시스템)

  • 함영경;박영택
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.97-99 / 2003
  • Most web documents are expressed in HTML or XML, which represent information based on a syntactic structure, so software is limited in how it can process that information: HTML is a tag-based representation designed for document display, and XML is a representation proposed so that document structure is easy for humans to understand. Web agents that provide services over information expressed in HTML and XML therefore had to perform a great deal of offline manual work to offer users meaningful services. To overcome this problem, research on the Semantic Web has been actively pursued in the United States and Europe. Unlike the existing web, the Semantic Web expresses information in a machine-processable form that software can understand, so agents can now understand and handle many tasks that used to be done offline. However, the 3I problems of information (Incorrect, Incomplete, Inconsistent) inevitably appear while building an ontology, and the quality of the resulting service also depends on that ontology. The inference engine using backward chaining proposed in this paper therefore offers the following: first, an automation system for software agents that exploits the Semantic Web; second, a semantic inference engine that uses rule-based backward chaining to overcome the limitations of ontology information. The proposed semantic inference system takes a user query as input and performs backward-chaining inference over the ontology and the information in Semantic Web documents, with the aim of mitigating the incompleteness of web information, reducing dependence on the ontology, and thereby improving the quality of web services.
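A minimal sketch of rule-based backward chaining over simple triples, in the spirit of the system above (Python). The facts, rule format, and query are illustrative assumptions, not the paper's ontology or rule syntax.

```python
# Sketch: backward chaining works goal-first, proving each rule premise recursively.
FACTS = {("socrates", "is_a", "human")}
# Each rule: if all body triples hold, the head triple holds.
RULES = [
    {"head": ("?x", "is_a", "mortal"), "body": [("?x", "is_a", "human")]},
]

def unify(pattern, goal, env):
    """Bind ?variables in `pattern` against `goal`; return env or None on mismatch."""
    for p, g in zip(pattern, goal):
        p = env.get(p, p)
        if p.startswith("?"):
            env[p] = g
        elif p != g:
            return None
    return env

def prove(goal, facts, rules, bindings=None):
    """Return a binding dict if `goal` can be proven, else None."""
    bindings = dict(bindings or {})
    goal = tuple(bindings.get(t, t) for t in goal)
    if goal in facts:                      # goal is a known fact
        return bindings
    for rule in rules:
        env = unify(rule["head"], goal, dict(bindings))
        if env is None:
            continue
        for sub in rule["body"]:           # prove every premise recursively
            env = prove(sub, facts, rules, env)
            if env is None:
                break
        if env is not None:
            return env
    return None

print(prove(("socrates", "is_a", "mortal"), FACTS, RULES))  # {'?x': 'socrates'}
```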
