• Title/Summary/Keyword: relevance network

Search Result 151, Processing Time 0.026 seconds

Analysis of Input Factors of DNN Forecasting Model Using Layer-wise Relevance Propagation of Neural Network (신경망의 계층 연관성 전파를 이용한 DNN 예보모델의 입력인자 분석)

  • Yu, SukHyun
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1122-1137
    • /
    • 2021
  • PM2.5 concentration in Seoul can be predicted by a deep neural network (DNN) model. In this paper, the contribution of the input factors to the model's prediction results is analyzed using the LRP (Layer-wise Relevance Propagation) technique. The LRP analysis is performed by dividing the input data by time and by PM concentration. In the analysis by time, the measurement factors contribute most to the forecast for the current day, while the forecast factors contribute most to the forecasts for tomorrow and the day after tomorrow. In the analysis by PM concentration, the weather factors contribute most in the low-concentration patterns, and the air quality factors contribute most in the high-concentration patterns. In addition, the date and temperature factors contribute significantly regardless of time and concentration.
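
A minimal sketch of the LRP redistribution idea the abstract refers to (our illustration, not the authors' code): the epsilon rule for one dense layer sends the relevance assigned to each output back to the inputs in proportion to each input's contribution z_ij = a_i * w_ij, so the total relevance is (approximately) conserved.

```python
def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """LRP epsilon rule for a single dense layer.
    activations: input activations a_i.
    weights: weights[i][j] connects input i to output j.
    relevance_out: relevance R_j already assigned to each output."""
    n_in, n_out = len(activations), len(relevance_out)
    # Pre-activation sums z_j = sum_i a_i * w_ij
    z = [sum(activations[i] * weights[i][j] for i in range(n_in))
         for j in range(n_out)]
    relevance_in = [0.0] * n_in
    for i in range(n_in):
        for j in range(n_out):
            zij = activations[i] * weights[i][j]
            relevance_in[i] += zij / (z[j] + eps) * relevance_out[j]
    return relevance_in

# Toy example: two inputs, one output neuron with relevance 1.0.
R = lrp_epsilon([1.0, 2.0], [[0.5], [0.25]], [1.0])
# Relevance is conserved: sum(R) ≈ 1.0
```

Applied layer by layer from the output back to the input, this yields the per-input-factor importances analyzed in the paper.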

Pullout capacity of small ground anchors: a relevance vector machine approach

  • Samui, Pijush;Sitharam, T.G.
    • Geomechanics and Engineering
    • /
    • v.1 no.3
    • /
    • pp.259-262
    • /
    • 2009
  • This paper examines the potential of the relevance vector machine (RVM) for predicting the pullout capacity of small ground anchors. RVM is based on a Bayesian formulation of a linear model with an appropriate prior that results in a sparse representation. The results are compared with a widely used artificial neural network (ANN) model. Overall, the RVM shows good performance and proves better than the ANN model; it also estimates the prediction variance. Its superior performance in forecasting the pullout capacity of small ground anchors, together with the exogenous knowledge it provides, demonstrates the plausibility of the RVM technique.
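
The sparse Bayesian linear model at the core of the RVM can be sketched as follows (a simplified illustration under our own toy setup, not the paper's implementation; the noise precision `beta` is held fixed here for brevity): each weight gets its own prior precision alpha_i, and re-estimating the alphas drives irrelevant weights toward zero.

```python
import numpy as np

def rvm_fit(Phi, t, n_iter=50, beta=100.0):
    """Sparse Bayesian linear regression (the core of the RVM).
    Phi: (N, M) design matrix; t: (N,) targets; beta: fixed noise precision.
    Returns the posterior mean weights and the per-weight precisions alpha."""
    N, M = Phi.shape
    alpha = np.ones(M)  # one automatic-relevance precision per basis function
    for _ in range(n_iter):
        # Posterior: Sigma = (A + beta Phi^T Phi)^-1,  mu = beta Sigma Phi^T t
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)  # effective degrees of freedom
        alpha = gamma / (mu ** 2 + 1e-12)     # irrelevant weights -> huge alpha
    return mu, alpha

# Toy problem: the target depends only on the first basis function.
Phi = np.array([[0., 1.], [1., -1.], [2., 1.], [3., -1.], [4., 1.]])
t = 2.0 * Phi[:, 0]
mu, alpha = rvm_fit(Phi, t)
# mu[0] ≈ 2, while alpha[1] grows large and prunes the irrelevant weight
```

The pruning of weights via large alphas is what gives the RVM its sparse representation, and the posterior covariance `Sigma` is what provides the prediction variance mentioned in the abstract.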

Relevance Epistasis Network of Gastritis for Intra-chromosomes in the Korea Associated Resource (KARE) Cohort Study

  • Jeong, Hyun-hwan;Sohn, Kyung-Ah
    • Genomics & Informatics
    • /
    • v.12 no.4
    • /
    • pp.216-224
    • /
    • 2014
  • Gastritis is a common but serious disease with a potential risk of developing into carcinoma. Helicobacter pylori infection is reported as the most common cause of gastritis, but other genetic and genomic factors exist, especially single-nucleotide polymorphisms (SNPs). Association studies between SNPs and gastritis are important, but results on epistatic interactions among multiple SNPs are rarely found in previous genome-wide association (GWA) studies. In this study, we performed computational GWA case-control studies for gastritis on Korea Associated Resource (KARE) data. By transforming the resulting SNP epistasis network into a gene-gene epistasis network, we also identified potential gene-gene interaction factors that affect the susceptibility to gastritis.
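
The abstract does not spell out its relevance measure, but an information-gain style epistasis score is one common way to quantify such pairwise SNP interactions; the following is our own hedged illustration (not the KARE pipeline): score the pair by how much more the two SNPs jointly tell us about the case/control label than they do individually.

```python
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """Mutual information I(X; Y) in bits from two paired sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def epistasis_score(snp1, snp2, label):
    """I(S1,S2; Y) - I(S1; Y) - I(S2; Y): information carried only jointly."""
    joint = list(zip(snp1, snp2))
    return (mutual_info(joint, label)
            - mutual_info(snp1, label) - mutual_info(snp2, label))

# XOR-style interaction: neither SNP alone predicts the label, but the pair does.
score = epistasis_score([0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0])
# score ≈ 1.0 bit of purely epistatic information
```

Computing such a score for every SNP pair and keeping the high-scoring pairs as edges yields an epistasis network of the kind the study transforms into a gene-gene network.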

User-oriented Paper Search System by Relative Network (상대네트워크 구축에 의한 맞춤형 논문검색 시스템 모델링)

  • Cho Young-Im;Kang Sang-Gil
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.3
    • /
    • pp.285-290
    • /
    • 2006
  • In this paper we propose a novel personalized paper search system that uses the relevance among a user's queried keywords and the user's behaviors on the searched paper list. The proposed system builds a user's individual relevance network by analyzing the appearance frequencies of keywords in the searched papers. The relevance network is personalized by weighting the appearance frequencies of keywords according to the user's behaviors on the search list, such as 'downloading,' 'opening,' and 'no-action.' In the experimental section, we demonstrate our method using the search information of 100 users at the University of Suwon.
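
The personalization step described above can be sketched as follows (a minimal illustration; the action weights and function names are our assumptions, not the paper's): keyword co-occurrences in searched papers form edges, and each paper's contribution is scaled by the user's action on it.

```python
# Assumed behavior weights; the paper names the actions but not the values.
ACTION_WEIGHT = {"downloading": 1.0, "opening": 0.5, "no-action": 0.1}

def build_relevance_network(search_results):
    """search_results: list of (keywords, action) pairs, one per paper.
    Returns behavior-weighted edge weights between co-occurring keywords."""
    edges = {}
    for keywords, action in search_results:
        w = ACTION_WEIGHT[action]
        kws = sorted(set(keywords))
        for i in range(len(kws)):
            for j in range(i + 1, len(kws)):
                pair = (kws[i], kws[j])
                edges[pair] = edges.get(pair, 0.0) + w
    return edges

net = build_relevance_network([
    (["neural network", "forecasting"], "downloading"),
    (["neural network", "forecasting"], "no-action"),
    (["neural network", "svm"], "opening"),
])
# ("forecasting", "neural network") accumulates 1.0 + 0.1 = 1.1;
# ("neural network", "svm") gets 0.5
```

Ranking future search results by the edge weights connected to the queried keywords then yields the personalized ordering.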

Software Effort Estimation in Rapidly Changing Computing Environment

  • Eung S. Jun;Lee, Jae K.
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.133-141
    • /
    • 2001
  • Since the computing environment changes very rapidly, estimating software effort is difficult because it is not easy to collect a sufficient number of relevant cases from historical data. If we pinpoint the cases, the number of cases becomes too small; however, if we adopt too many cases, the relevance declines. In this paper we therefore attempt to balance the number of cases and their relevance. Since much research on software effort estimation has shown that neural network models perform at least as well as other approaches, we selected the neural network model as the basic estimator. We propose a search method that finds the right level of relevant cases for the neural network model. For the selected case set, eliminating the qualitative input factors with the same values can reduce the scale of the neural network model. Since there exists a multitude of combinations of case sets, we need to search for the optimal reduced neural network model and the corresponding case set. To find the quasi-optimal model from the hierarchy of reduced neural network models, we adopted the beam search technique and devised the Case-Set Selection Algorithm. This algorithm can be adopted in case-adaptive software effort estimation systems.
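
The beam search over case sets can be sketched generically as follows (our own simplified illustration, not the paper's Case-Set Selection Algorithm; the `score` callback stands in for whatever evaluation the estimator provides, e.g. cross-validated error of the neural network trained on the set):

```python
def case_set_beam_search(cases, score, beam_width=2, min_size=2):
    """Beam search over case subsets: repeatedly drop one case, keep the
    `beam_width` best-scoring candidate sets (lower score is better), and
    return the best set seen overall."""
    best_set, best_score = tuple(cases), score(cases)
    beam = [best_set]
    while beam and len(beam[0]) > min_size:
        candidates = set()
        for cs in beam:
            for i in range(len(cs)):
                candidates.add(cs[:i] + cs[i + 1:])  # drop one case
        ranked = sorted(candidates, key=score)
        beam = ranked[:beam_width]
        if score(beam[0]) < best_score:
            best_set, best_score = beam[0], score(beam[0])
    return best_set

# Toy score: prefer case sets whose values are close to their mean,
# standing in for "high mutual relevance" of the cases.
def spread(cs):
    m = sum(cs) / len(cs)
    return max(abs(c - m) for c in cs)

best = case_set_beam_search((1, 2, 3, 100), spread, min_size=3)
# The outlier case 100 is dropped: best == (1, 2, 3)
```

The beam keeps the search tractable over the otherwise exponential hierarchy of reduced case sets while still finding a quasi-optimal one.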


Leak Detection in a Water Pipe Network Using the Principal Component Analysis (주성분 분석을 이용한 상수도 관망의 누수감지)

  • Park, Suwan;Ha, Jaehong;Kim, Kimin
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2018.05a
    • /
    • pp.276-276
    • /
    • 2018
  • In this paper the potential of the Principal Component Analysis (PCA) technique for detecting leaks in water pipe network blocks is evaluated. For this purpose, a PCA was conducted to evaluate the relevance between the calculated outliers of a PCA model utilizing the recorded pipe flows and the recorded pipe leak incidents of a case-study water distribution system. The PCA technique was enhanced by applying the computational algorithms developed in this study. The algorithms were designed to extract a partial set of flow data from the original 24-hour flow data so that the variability of the flows in the determined partial data set is minimal. The relevance between the calculated outliers of the PCA model and the recorded pipe leak incidents was analyzed. The results showed that the effectiveness of detecting leaks may improve by applying the developed algorithm. However, the analysis suggested that further development of the algorithm is needed to enhance the applicability of PCA for detecting leaks in real-world water pipe networks.
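
A hedged sketch of PCA-based outlier scoring for flow records (our generic illustration; it omits the paper's partial-data-extraction algorithm): fit principal components on the day-by-hour flow matrix, then score each day by its reconstruction error, i.e. its distance from the principal subspace, since a leak shifts the flow pattern away from the dominant normal variation.

```python
import numpy as np

def pca_outlier_scores(X, n_components=2):
    """X: (days, measurements-per-day) matrix of flows.
    Returns one residual score per day; larger means more anomalous."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal axes from the SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components]                        # (k, measurements)
    recon = Xc @ P.T @ P                         # projection onto the subspace
    return np.linalg.norm(Xc - recon, axis=1)    # per-day residual

# Toy data: nine normal days varying along one direction, plus a final day
# with an extra flow component that normal variation cannot explain.
days = np.array([[float(i), 0.0, 0.0, 0.0] for i in range(9)]
                + [[4.0, 0.0, 0.0, 5.0]])
scores = pca_outlier_scores(days, n_components=1)
# the leak-like day (index 9) gets the largest residual
```

Days whose residual exceeds a chosen threshold would then be compared against the recorded leak incidents, as in the study.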


Software Effort Estimation Using Artificial Intelligence Approaches (인공지능 접근방법에 의한 S/W 공수예측)

  • Jun, Eung-Sup
    • Proceedings of the Korea Society of IT Services Conference
    • /
    • 2003.11a
    • /
    • pp.616-623
    • /
    • 2003
  • Since the computing environment changes very rapidly, estimating software effort is difficult because it is not easy to collect a sufficient number of relevant cases from historical data. If we pinpoint the cases, the number of cases becomes too small; however, if we adopt too many cases, the relevance declines. In this paper we therefore attempt to balance the number of cases and their relevance. Since much research on software effort estimation has shown that neural network models perform at least as well as other approaches, we selected the neural network model as the basic estimator. We propose a search method that finds the right level of relevant cases for the neural network model. For the selected case set, eliminating the qualitative input factors with the same values can reduce the scale of the neural network model. Since there exists a multitude of combinations of case sets, we need to search for the optimal reduced neural network model and the corresponding case set. To find the quasi-optimal model from the hierarchy of reduced neural network models, we adopted the beam search technique and devised the Case-Set Selection Algorithm. This algorithm can be adopted in case-adaptive software effort estimation systems.


Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data would therefore help users take in many events at a glance. In addition, an event network built on the relevance between events can greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing with NPMI and Word2Vec, kept only meaningful words and integrated synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, and events were detected by finding peaks in the distributions. A total of 32 topics were extracted by the topic modeling, and the occurrence time of each event was deduced from the point at which its topic distribution surged. As a result, a total of 85 events were detected, of which the final 16 events were retained after filtering with a Gaussian smoothing technique. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them accordingly. Finally, we set up the event network by taking each event as a vertex and the relevance score between two events as the weight of the edge connecting them. 
The event network constructed with our method allowed us to sort the major political and social events in Korea over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relations between events that were difficult to detect with existing methods. We also applied various text mining techniques and Word2Vec in the preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which have been difficult to handle in analyzing Korean texts. The proposed event detection and network construction techniques have the following advantages in practical applications. First, LDA topic modeling, being unsupervised learning, can easily extract topics, topic words, and their distributions from huge amounts of data; by using the date information of the collected news articles, the distribution per topic can also be expressed as a time series. Second, by calculating relevance scores and constructing an event network from topic co-occurrences that are difficult to grasp with existing event detection, we can present the connections between events in a summarized form. This is supported by the fact that the inter-event relevance-based network proposed in this study was indeed constructed in order of occurrence time; the network also makes it possible to identify the event that served as the starting point of a series of events. A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. 
Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study or between events that belong to the same topic.
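
The final network-construction step the abstract describes can be sketched as follows (our simplification, not the authors' code): given each detected event's per-date intensity, connect two events with a weighted edge when the cosine coefficient of their time series exceeds a threshold.

```python
import math

def cosine(u, v):
    """Cosine coefficient between two intensity time series."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def event_network(events, threshold=0.5):
    """events: {name: [intensity per date]} -> weighted edges
    (vertex, vertex, relevance score) for sufficiently related events."""
    names = sorted(events)
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            w = cosine(events[names[i]], events[names[j]])
            if w >= threshold:
                edges.append((names[i], names[j], w))
    return edges

# Toy intensities over five dates (event names are our own examples).
net = event_network({
    "protest":  [0, 3, 5, 1, 0],
    "hearing":  [0, 2, 4, 2, 0],   # peaks together with "protest"
    "election": [4, 0, 0, 0, 3],
})
# Only the co-occurring pair ("hearing", "protest") is connected
```

Because the edges come from temporally co-occurring peaks, traversing the resulting network naturally follows the chronological chains of related events described above.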

Analysis of Input Factors and Performance Improvement of DNN PM2.5 Forecasting Model Using Layer-wise Relevance Propagation (계층 연관성 전파를 이용한 DNN PM2.5 예보모델의 입력인자 분석 및 성능개선)

  • Yu, SukHyun
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.10
    • /
    • pp.1414-1424
    • /
    • 2021
  • In this paper, the importance of the input factors of a DNN (Deep Neural Network) PM2.5 forecasting model is analyzed using LRP (Layer-wise Relevance Propagation), and the forecasting performance is improved. The input-factor importance analysis is performed by dividing the training data by time and by PM2.5 concentration. As a result, in the low-concentration patterns, weather factors such as temperature, atmospheric pressure, and solar radiation are highly important, and in the high-concentration patterns, air quality factors such as PM2.5, CO, and NO2 are highly important. The analysis by time shows that the measurement factors are most important for the forecast for the current day, while the importance of the forecast factors increases for the forecasts for tomorrow and the day after tomorrow. In addition, date, temperature, humidity, and atmospheric pressure all show high importance regardless of time and concentration. Based on these factor importances, the LRP_DNN prediction model is developed. As a result, the ACC (accuracy) and POD (probability of detection) improve by up to 5%, and the FAR (false alarm rate) improves by up to 9%, compared to the previous DNN model.

Forecasting Open Government Data Demand Using Keyword Network Analysis (키워드 네트워크 분석을 이용한 공공데이터 수요 예측)

  • Lee, Jae-won
    • Informatization Policy
    • /
    • v.27 no.4
    • /
    • pp.24-46
    • /
    • 2020
  • This study proposes a way to timely forecast open government data (OGD) demand (i.e., OGD requests, search queries, etc.) by using keyword network analysis. According to the analysis results, most of the OGD belonging to the high-demand topics are provided by the domestic OGD portal (data.go.kr), while the OGD related to users' actual needs, predicted through topic association analysis, are rarely provided. This is because, when providing (or selecting) OGD, relevance to OGD topics takes precedence over relevance to users' OGD requests. The proposed keyword network analysis framework can quickly and easily forecast users' demand based on actual OGD requests and is thus expected to contribute to the establishment of OGD policies in public institutions.
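
A hedged sketch of the kind of keyword network analysis described above (our own illustration; the request data and keyword lists are invented): count keyword co-occurrences across OGD requests, then rank keywords by weighted degree in the resulting network as a simple demand signal.

```python
from collections import Counter
from itertools import combinations

def keyword_demand(requests):
    """requests: list of keyword lists, one per OGD request.
    Returns keywords ranked by weighted degree in the co-occurrence network."""
    cooc = Counter()
    for kws in requests:
        for a, b in combinations(sorted(set(kws)), 2):
            cooc[(a, b)] += 1          # one co-occurrence edge per request
    degree = Counter()
    for (a, b), w in cooc.items():
        degree[a] += w
        degree[b] += w
    return degree.most_common()

ranking = keyword_demand([
    ["bus", "route", "realtime"],
    ["bus", "route"],
    ["air quality", "pm2.5"],
])
# "bus" and "route" top the ranking, since they co-occur most often
```

High-degree keywords (and their strongest co-occurrence edges) indicate the topics users actually request, which can then be compared against what the portal currently provides.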