• Title/Summary/Keyword: Bayesian information (베이지안 정보)

Exploring Feature Selection Methods for Effective Emotion Mining (효과적 이모션마이닝을 위한 속성선택 방법에 관한 연구)

  • Eo, Kyun Sun;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.3 / pp.107-117 / 2019
  • In the era of SNS, many people rely on social media to express their emotions about various kinds of products and services. Therefore, for companies eager to investigate how their products and services are perceived in the market, emotion mining tasks using datasets from SNSs have become more important than ever. Emotion mining is a branch of sentiment analysis based on BOW (bag-of-words) and TF-IDF representations. However, few studies on emotion mining adopt feature selection (FS) methods to identify an optimal set of features that ensures better results. This study therefore proposes FS methods to conduct emotion mining tasks more effectively. We used Twitter and SemEval2007 datasets for the emotion mining experiments and applied three FS methods: CFS (Correlation-based FS), IG (Information Gain), and ReliefF. Emotion mining results were obtained by applying the selected features to nine classifiers. When applying DT (decision tree) to the Twitter dataset, accuracy increased with the CFS, IG, and ReliefF methods; when applying LR (logistic regression) to the SemEval2007 dataset, accuracy increased with the ReliefF method. A minimal sketch of this FS-plus-classifier pipeline follows below.
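
As a rough illustration of the pipeline described above, the sketch below runs TF-IDF features through an information-gain-style selector and two of the classifiers named in the abstract (DT and LR). It uses scikit-learn's mutual information scorer as a stand-in for IG, and the tiny `texts`/`labels` corpus is a hypothetical placeholder rather than the paper's Twitter or SemEval2007 data.

```python
# Minimal sketch (not the authors' code): BOW/TF-IDF features, an
# information-gain-style feature selector, and two classifiers from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical emotion-labeled toy corpus (placeholder data).
texts = ["i am so happy and excited today",
         "this delay makes me really angry",
         "such a sad and gloomy story",
         "feeling great about the results",
         "the rude reply made me furious",
         "i miss my old friends so much"]
labels = ["joy", "anger", "sadness", "joy", "anger", "sadness"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0)

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(
        TfidfVectorizer(),                       # BOW/TF-IDF representation
        SelectKBest(mutual_info_classif, k=8),   # IG-style feature selection
        clf,
    )
    pipe.fit(X_train, y_train)
    print(name, "accuracy:", pipe.score(X_test, y_test))
```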

A comparison of synthetic data approaches using utility and disclosure risk measures (유용성과 노출 위험성 지표를 이용한 재현자료 기법 비교 연구)

  • Seongbin An;Trang Doan;Juhee Lee;Jiwoo Kim;Yong Jae Kim;Yunji Kim;Changwon Yoon;Sungkyu Jung;Dongha Kim;Sunghoon Kwon;Hang J Kim;Jeongyoun Ahn;Cheolwoo Park
    • The Korean Journal of Applied Statistics / v.36 no.2 / pp.141-166 / 2023
  • This paper investigates synthetic data generation methods and their evaluation measures. Demands for releasing various types of data to the public for different purposes have been increasing, while there are unavoidable concerns about leaking critical or sensitive information. Many synthetic data generation methods have been proposed over the years to address these concerns and have been implemented in several countries, including Korea. The current study introduces and compares three representative synthetic data generation approaches: sequential regression, nonparametric Bayesian multiple imputation, and deep generative models. Several evaluation metrics that measure the utility and disclosure risk of synthetic data are also reviewed, and we provide empirical comparisons of the three approaches with respect to these measures. The findings will help practitioners better understand the advantages and disadvantages of these synthetic data methods. A minimal sketch of one common utility measure follows below.
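
The abstract does not list the specific measures, so the sketch below shows one widely used utility measure as an assumed example: the propensity-score mean-squared error (pMSE), which quantifies how distinguishable synthetic rows are from original rows (lower is better). The `original` and `synthetic` arrays are simulated placeholders, not data from the paper.

```python
# Minimal sketch (an assumed example, not the paper's code): the pMSE utility
# measure. Stack original and synthetic rows, fit a propensity model to tell
# them apart, and score how far predicted propensities stray from the
# expected share of synthetic rows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
original = rng.normal(loc=0.0, scale=1.0, size=(500, 3))    # stand-in "real" data
synthetic = rng.normal(loc=0.1, scale=1.1, size=(500, 3))   # stand-in synthetic data

def pmse(original, synthetic):
    """Propensity-score MSE between original and synthetic data."""
    X = np.vstack([original, synthetic])
    y = np.concatenate([np.zeros(len(original)), np.ones(len(synthetic))])
    c = len(synthetic) / len(X)                              # expected propensity
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return np.mean((p - c) ** 2)

print("pMSE:", pmse(original, synthetic))
```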

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, which has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. Many techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks, but model performance and accuracy remain a challenge with huge amounts of text data. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research has largely reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on changing how the data are used. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and such noisy data can affect the decisions made by classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may have noise characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms struggle to recognize different data representations at once and combine them into a single generalization. To utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning; a generic sketch of such a setup appears after this entry. However, unlabeled data may degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
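
The sketch below is a generic self-training baseline, not the proposed RSESLA algorithm: it only illustrates the semi-supervised setup in which labeled documents are combined with unlabeled documents from a different (heterogeneous) source. The document lists and labels are hypothetical placeholders.

```python
# Minimal sketch (a generic self-training baseline, not RSESLA): a
# semi-supervised document classifier that mixes labeled documents with
# unlabeled documents from a different source.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

labeled_docs = ["markets rallied after the earnings report",
                "the team won the championship game",
                "central bank raises interest rates",
                "star striker scores twice in the final"]
labels = ["finance", "sports", "finance", "sports"]

# Unlabeled documents drawn from a heterogeneous source, e.g. blogs.
unlabeled_docs = ["my thoughts on today's stock market swings",
                  "recap of last night's basketball match"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(labeled_docs + unlabeled_docs)
# scikit-learn marks unlabeled samples with -1 (object dtype for string labels).
y = np.array(labels + [-1] * len(unlabeled_docs), dtype=object)

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.6)
model.fit(X, y)

print(model.predict(vectorizer.transform(["quarterly profits beat expectations"])))
```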

Software Reliability Growth Modeling in the Testing Phase with an Outlier Stage (하나의 이상구간을 가지는 테스팅 단계에서의 소프트웨어 신뢰도 성장 모형화)

  • Park, Man-Gon;Jung, Eun-Yi
    • The Transactions of the Korea Information Processing Society / v.5 no.10 / pp.2575-2583 / 1998
  • The production of highly reliable software systems and the evaluation of their performance have become important concerns in the software industry. Software evaluation has mainly been carried out in terms of both the reliability and the performance of the software system. Software reliability is the probability that no software error occurs over a fixed time interval during the software testing phase. Theoretical software reliability models are sometimes unsuitable for the practical testing phase, in which a software error at a certain testing stage may arise from imperfect debugging, abnormal software correction, and so on. Such a software testing stage needs to be treated as an outlying stage, and we can assume that software reliability does not improve because of a nuisance factor in this outlying testing stage. In this paper, we discuss Bayesian software reliability growth modeling and an estimation procedure in the presence of an unidentified outlying software testing stage, obtained by modifying the Jelinski-Moranda model. We also derive the Bayes estimators of the software reliability parameters under the squared error loss function, assuming prior information. In addition, we evaluate the proposed software reliability growth model with an unidentified outlying stage in an exchangeable model according to the values of the nuisance parameter, using accuracy, bias, trend, and noise metrics as quantitative evaluation criteria through computer simulation. A minimal sketch of the underlying Jelinski-Moranda likelihood appears below.
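
As a simplified illustration of the modeling ingredients named above, the sketch below computes Bayes estimates (posterior means, i.e., the Bayes estimators under squared error loss) for the classic Jelinski-Moranda model on a coarse parameter grid with a flat prior. The outlying-stage modification and the paper's choice of prior are omitted, and the inter-failure times are hypothetical.

```python
# Minimal sketch (a simplification, not the paper's model): Bayes estimation
# for the Jelinski-Moranda model, where the i-th inter-failure time is
# Exponential with rate phi * (N - i + 1). Posterior means on a coarse grid
# give the Bayes estimators under squared error loss.
import numpy as np

t = np.array([7.0, 11.0, 8.0, 10.0, 15.0, 22.0, 20.0, 25.0, 28.0, 35.0])  # hypothetical inter-failure times
n = len(t)

N_grid = np.arange(n, 61)                  # candidate total fault counts (N >= observed failures)
phi_grid = np.linspace(1e-4, 0.05, 200)    # candidate per-fault failure rates

def log_likelihood(N, phi):
    rates = phi * (N - np.arange(1, n + 1) + 1)   # JM failure rates for stages 1..n
    return np.sum(np.log(rates) - rates * t)

logL = np.array([[log_likelihood(N, phi) for phi in phi_grid] for N in N_grid])
post = np.exp(logL - logL.max())           # flat prior on the grid; unnormalized posterior
post /= post.sum()

N_bayes = np.sum(post.sum(axis=1) * N_grid)        # posterior mean of N
phi_bayes = np.sum(post.sum(axis=0) * phi_grid)    # posterior mean of phi
print(f"Bayes estimates: N = {N_bayes:.1f}, phi = {phi_bayes:.4f}")
```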

A Study of the Seocheon Fireball Explosion on September 23, 2020 (2020년 9월 23일 서천 화구 폭발 관측 연구)

  • Che, Il-Young;Kim, Inho
    • Journal of the Korean earth science society / v.42 no.6 / pp.688-699 / 2021
  • On September 23, 2020, at 1:39 a.m., a bright fireball above Seocheon was observed across the country. Two fireball explosions were identified in the images of the All-Sky Camera (ASC), and the shock waves were recorded at seismic and infrasound stations in the southwestern Korean Peninsula. The location of the explosion was estimated by a Bayesian location method using the arrival times of the fireball-associated seismic and infrasound signals at 17 stations. Realistic azimuth- and range-dependent propagation speeds of sound waves were incorporated into the location method to increase the reliability of the results. The sound source was located at 36.050°N, 126.855°E at an altitude of 35 km, close to the location of the second fireball explosion. The two explosions were identified as sequential infrasound arrivals at local infrasound stations, and waveform simulations for long ranges explain the detections at distant infrasound stations up to ~266 km from the sound source. The dominant period of the signals recorded at five infrasound stations is about 0.4 s, and a period-energy relation suggests the explosion energy was equivalent to ~0.3 ton of TNT. A simplified grid-search sketch of an arrival-time location estimate follows below.
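
As a simplified illustration of arrival-time-based Bayesian location, the sketch below grid-searches a 2-D source position under a constant propagation speed and Gaussian pick errors. The study itself used azimuth- and range-dependent speeds and located the source in 3-D; the station coordinates and arrival times here are hypothetical.

```python
# Minimal sketch (simplified assumptions, not the study's method): grid-search
# source location from arrival times, with a constant sound speed, flat 2-D
# geometry, Gaussian pick errors, and a flat prior over the grid.
import numpy as np

c = 0.34                                    # assumed propagation speed, km/s
stations = np.array([[0.0, 0.0], [60.0, 10.0], [30.0, 80.0], [-40.0, 50.0]])  # km (hypothetical)
true_src = np.array([20.0, 30.0])
t0_true = 5.0                               # origin time, s
arrivals = t0_true + np.linalg.norm(stations - true_src, axis=1) / c
arrivals += np.random.default_rng(1).normal(0.0, 0.5, size=len(stations))     # pick-time noise

xs = np.linspace(-50, 100, 151)
ys = np.linspace(-50, 100, 151)
sigma = 0.5                                 # assumed arrival-time uncertainty, s

log_post = np.full((len(xs), len(ys)), -np.inf)
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        travel = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
        t0 = np.mean(arrivals - travel)     # best-fitting origin time for this cell
        resid = arrivals - (t0 + travel)
        log_post[i, j] = -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian log-likelihood

best = np.unravel_index(np.argmax(log_post), log_post.shape)
print(f"MAP location: x = {xs[best[0]]:.1f} km, y = {ys[best[1]]:.1f} km")
```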