• Title/Summary/Keyword: Rule-Based Learning

An Analysis of the Scientific Problem Solving Strategies according to Knowledge Levels of the Gifted Students (영재학생들의 지식수준에 따른 과학적 문제해결 전략 분석)

  • Kim, Chunwoong;Chung, Jungin
    • Journal of Korean Elementary Science Education
    • /
    • v.38 no.1
    • /
    • pp.73-86
    • /
    • 2019
  • The purpose of this study is to investigate the characteristics of the problem-solving strategies that gifted students use on science inquiry problems. The subjects of the study are the notes and presentation materials of 15 teams of elementary and junior high school students who solved the problem: 27 gifted elementary school students and 29 gifted middle school students who voluntarily selected a topic related to dimples from among the various inquiry themes. The analysis data are the observations of the subjects' inquiry process, the notes recorded during the inquiry process, and the results of the presentations. In this process, the knowledge related to dimples is classified into a declarative knowledge level and a procedural knowledge level, and the strategies used by the gifted students are divided into general strategies and supplementary strategies. The results of this study are as follows. First, categorizing the gifted students by knowledge level yielded six of the nine possible types: AA, AB, BA, BB, BC, and CB. Thus, no gifted students combined a high declarative knowledge level with a very low procedural knowledge level (AC type), or the reverse (CA type). Second, the general strategies that gifted students used to solve the dimple problem were deductive reasoning, inductive reasoning, finding rules, solving the problem in reverse, building similar problems, and guessing and reviewing. The supplementary strategies were finding clues, recording important information, using tables and graphs, making tools, using pictures, and thought experiments. Third, the higher the gifted students' knowledge level, the more general strategies they used in common; the supplementary strategies showed no relationship to knowledge-level type. Knowledge-based learning related to problem situations can help students understand, interpret, and represent problems, and in a new problem situation a larger repertoire of problem-solving strategies allows problems to be solved in more varied ways.

Cryptocurrency Recommendation Model using the Similarity and Association Rule Mining (유사도와 연관규칙분석을 이용한 암호화폐 추천모형)

  • Kim, Yechan;Kim, Jinyoung;Kim, Chaerin;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.287-308
    • /
    • 2022
  • The explosive growth of cryptocurrency, led by Bitcoin, has recently emerged as a major issue in the financial market. As a result, interest in cryptocurrency investment is increasing, but the market operates 24 hours a day, 365 days a year, prices are highly volatile, and the number of cryptocurrencies is growing exponentially, all of which expose cryptocurrency investors to risk. For these reasons, there is a growing need for research that reduces investors' risk by screening out cryptocurrencies unsuitable for recommendation. Unlike previous studies that maximize returns by simply predicting future cryptocurrency prices or that construct cryptocurrency portfolios focused solely on returns, this paper reflects investors' tendencies and presents an interpretable recommendation method that can reduce investors' risk by selecting suitable altcoins, recommended with the Apriori algorithm, a machine learning technique, on the basis of similarity to and association rules with Bitcoin.
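
The association-rule step this abstract describes can be illustrated with a short, hedged sketch. Below, the Apriori implementation from the mlxtend library mines rules whose antecedent is Bitcoin from toy "baskets" of coins that co-moved in the same period; the basket construction and coin tickers are illustrative assumptions, not the paper's actual data pipeline or similarity filter.

```python
# Sketch: mining "if Bitcoin rises, which altcoins co-rise" rules with Apriori.
# Baskets and coin tickers are invented; only the rule-mining step is shown.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each "transaction" is the set of coins that rose in the same trading period
# (a stand-in for whatever co-movement definition the paper actually uses).
baskets = [
    ["BTC", "ETH", "ADA"],
    ["BTC", "ETH"],
    ["BTC", "SOL", "ADA"],
    ["ETH", "SOL"],
    ["BTC", "ADA"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(baskets).transform(baskets), columns=te.columns_)

frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)

# Keep only rules conditioned on Bitcoin, mirroring the paper's idea of
# recommending altcoins given Bitcoin.
btc_rules = rules[rules["antecedents"].apply(lambda a: a == {"BTC"})]
print(btc_rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```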

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates concise sentences preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. Furthermore, since sequence-to-sequence models perform well on various natural language processing tasks, such as machine translation, there have been studies that utilize them for sentence compression. However, in the linguistic-rule-based studies all rules have to be defined by humans, and the sequence-to-sequence-based studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not compress sentences by reflecting the linguistic information of the words in them. Furthermore, since the corpora used to pre-train BERT are far removed from compressed sentences, this can lead to incorrect compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in perplexity-based sentence scoring. Furthermore, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the sentence compression performance of sentence-scoring-based models can be improved by the proposed method.
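
The deletion-by-perplexity idea attributed to Deleter can be sketched as follows: score each candidate compression with a BERT pseudo-perplexity (mask each token in turn and sum its log-probability) and greedily delete the word whose removal keeps the sentence most fluent. This is a simplified illustration, not the paper's model; in particular, the linguistic-importance weighting the paper proposes is omitted here.

```python
# Sketch: deletion-based compression scored by BERT pseudo-log-likelihood.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def pseudo_log_likelihood(words):
    """Mask each token in turn and average its log-probability under BERT."""
    ids = tok(" ".join(words), return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / max(len(ids) - 2, 1)         # length-normalized

def compress(words, target_len):
    """Greedily delete the word whose removal keeps the sentence most fluent."""
    while len(words) > target_len:
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        words = max(candidates, key=pseudo_log_likelihood)
    return words

print(" ".join(compress("the quick brown fox easily jumps over the lazy dog".split(), 6)))
```

Note that this exhaustive rescoring costs on the order of n² forward passes per deletion; it is meant only to show the idea, not to be efficient.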

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with the rise of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied in various directions with regard to accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available and easy to collect, and they affect business: in marketing, real-world information from customers is gathered on websites rather than through surveys, and whether a website's posts are positive or negative is reflected in sales, so companies try to identify this information. However, the many reviews on a website are not always well written and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies have used data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy nevertheless remains limited, because sentiment calculations change with the subject, the paragraph, the polarity of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review dataset. First, as comparative models for text classification, we adopt popular machine learning algorithms: NB (Naive Bayes), SVM (Support Vector Machines), XGBoost, RF (Random Forests), and Gradient Boosting. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (Long Short-Term Memory). CNN can process a sentence in vector format much as a bag-of-words model does, but it does not consider the sequential attributes of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve that problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand how and why these models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically through its convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing.
Like faucets, an LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The LSTM's memory block may not store all the data, but it can compensate for CNN's inability to model long-term dependencies. Furthermore, when an LSTM is placed after CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can offset the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
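
A minimal sketch of the integrated architecture the abstract describes, in Keras: a convolution layer extracts local n-gram features, pooling shortens the sequence, and an LSTM then models the temporal structure of the pooled features end to end. All layer sizes and hyperparameters below are placeholders, not the paper's tuned settings.

```python
# Sketch: integrated CNN-LSTM for binary sentiment classification (Keras).
# Layer sizes are illustrative placeholders, not the paper's tuned values.
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMB_DIM = 20000, 200, 128

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),
    layers.Conv1D(64, 5, activation="relu"),  # local n-gram feature extraction
    layers.MaxPooling1D(pool_size=4),         # shorten sequence before the LSTM
    layers.LSTM(64),                          # temporal modeling of pooled features
    layers.Dense(1, activation="sigmoid"),    # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=3, batch_size=64)
```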

Improving the Classification of Population and Housing Census with AI: An Industry and Job Code Study

  • Byung-Il Yun;Dahye Kim;Young-Jin Kim;Medard Edmund Mswahili;Young-Seob Jeong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.4
    • /
    • pp.21-29
    • /
    • 2023
  • In this paper, we propose an AI-based system for automatically classifying industry and occupation codes in the population census. Accurate classification of industry and occupation codes is crucial for informing policy decisions, allocating resources, and conducting research. However, this task has traditionally been performed by human coders, which is time-consuming, resource-intensive, and prone to errors. Our system represents a significant improvement over the statistics agency's existing rule-based system, which relies on user-entered data for code classification. We trained and evaluated several models and developed an ensemble model that achieved 86.76% match accuracy for industry codes and 81.84% for occupation codes, outperforming the best individual model. Additionally, we propose process improvements based on the model's classification probabilities. Our method uses an ensemble that combines transfer learning techniques with pre-trained models. We demonstrate the potential for AI-based systems to improve the accuracy and efficiency of population census data classification: by automating the process with AI, we can achieve more accurate and consistent results while reducing the workload on agency staff.
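
One plausible reading of the probability-based process improvement is a confidence-routing step on top of a soft-voting ensemble: average the member models' class probabilities, auto-assign codes only when confidence is high, and send the rest to human coders. The sketch below illustrates that shape with invented numbers and an assumed 0.8 threshold; neither is from the paper.

```python
# Sketch: probability-averaging ensemble plus confidence-based routing to
# human coders. Member outputs and the 0.8 threshold are illustrative.
import numpy as np

def ensemble_with_review(prob_matrices, threshold=0.8):
    """prob_matrices: list of (n_samples, n_codes) softmax outputs,
    one per fine-tuned model in the ensemble."""
    avg = np.mean(prob_matrices, axis=0)       # soft voting
    codes = avg.argmax(axis=1)                 # predicted industry/occupation code
    confidence = avg.max(axis=1)
    needs_human = confidence < threshold       # route uncertain cases to coders
    return codes, confidence, needs_human

# Toy example: two models, three samples, four candidate codes.
m1 = np.array([[0.70, 0.10, 0.10, 0.10], [0.30, 0.30, 0.20, 0.20], [0.05, 0.90, 0.03, 0.02]])
m2 = np.array([[0.80, 0.10, 0.05, 0.05], [0.25, 0.35, 0.20, 0.20], [0.10, 0.85, 0.03, 0.02]])
codes, conf, review = ensemble_with_review([m1, m2])
print(codes, conf.round(2), review)            # only confident cases are auto-coded
```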

Development of Intelligent Internet Shopping Mall Supporting Tool Based on Software Agents and Knowledge Discovery Technology (소프트웨어 에이전트 및 지식탐사기술 기반 지능형 인터넷 쇼핑몰 지원도구의 개발)

  • 김재경;김우주;조윤호;김제란
    • Journal of Intelligence and Information Systems
    • /
    • v.7 no.2
    • /
    • pp.153-177
    • /
    • 2001
  • Nowadays, product recommendation is an important issue for both CRM and Internet shopping malls. Generally, a recommendation system tracks the past actions of a group of users to make recommendations to individual members of the group. Computer-mediated marketing and commerce have grown rapidly, and automatic recommendation methodologies have accordingly attracted great attention. But the research and commercial tools for product recommendation so far still have many aspects that merit further consideration. To supplement those aspects, we devise a recommendation methodology that achieves greater recommendation effectiveness when applied to an Internet shopping mall. The suggested methodology is based on web log information, product taxonomy, association rule mining, and decision tree learning. To implement it, we also design an intelligent Internet shopping mall support system based on agent technology and develop it as a prototype. We applied the methodology and the prototype system to a leading Korean Internet shopping mall and provide experimental results. Through the experiment, we found that the suggested methodology can perform recommendation tasks both effectively and efficiently on real-world problems. Its systematic validity is also discussed.
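
Of the techniques the abstract names, the decision-tree-learning component is easy to sketch. The snippet below trains a small tree over invented web-log features to predict purchase interest, purely as an illustration of the pipeline shape, not the paper's actual log schema or taxonomy handling.

```python
# Sketch: decision-tree learning over web-log features to predict purchase
# interest in a product category. Feature names and data are invented.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

logs = pd.DataFrame({
    "category_views": [12, 1, 7, 0, 9, 2],    # visits to the target category
    "session_minutes": [30, 5, 22, 3, 18, 8],
    "searched_category": [1, 0, 1, 0, 1, 0],  # used search within the category
    "bought": [1, 0, 1, 0, 1, 0],             # label from purchase logs
})

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(logs.drop(columns="bought"), logs["bought"])
print(export_text(tree, feature_names=list(logs.columns[:-1])))
```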

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.49-61
    • /
    • 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking while processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning, because learning requires action; training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) the essence of the DSA map and (2) a statement of the warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers. Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for using this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments: "Simplicity is a virtue, rather than a curse." Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor." Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics by actors. The DSA mapping approach here concerns the match between strategy and an environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives.
The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable. An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."
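
The adjustment-step mechanics described above can be stated as a toy sketch: an aspiration level moves to a neighboring grid point by changing exactly one goal variable per step, upward or downward. Everything below, including the goals and the feasibility test, is an invented illustration of that definition, not a model taken from the article.

```python
# Toy sketch of aspiration adaptation: each step changes exactly one goal
# variable up or down on a grid. Goals and the feasibility test are invented.
def adapt(aspiration, feasible, steps):
    """aspiration: dict goal -> level; feasible(a) says whether any
    alternative still satisfies aspiration level a."""
    for goal, direction in steps:                 # one goal variable per step
        candidate = dict(aspiration)
        candidate[goal] += direction              # +1 upward, -1 downward step
        if direction < 0 or feasible(candidate):  # retreat freely, advance only if feasible
            aspiration = candidate
    return aspiration

# Example: a buyer adjusting quality and discount aspirations.
start = {"quality": 3, "discount": 1}
feasible = lambda a: a["quality"] + a["discount"] <= 5
print(adapt(start, feasible, [("quality", +1), ("discount", +1), ("quality", -1)]))
```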

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. Against this background, this paper aims to develop a new business model that supports ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which is based on multiple discriminant analysis and is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy, 71.1%, and the logit model the lowest, 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. We therefore also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy, 100%, for predicted default probabilities of 0~10%, but a relatively low accuracy of 61.5% for 90~100%. On the other hand, Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they are more accurate for both the 0~10% and 90~100% intervals, though less accurate around a predicted probability of 50%. As for the distribution of samples across intervals, both the LightGBM and XGBoost models place relatively many samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an accuracy advantage on a small number of cases, LightGBM or XGBoost may be the more desirable models, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, Random Forest and LightGBM show good results, and logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
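
The ten-interval comparison described above reduces to binning predicted default probabilities into deciles and computing per-bin accuracy and sample counts. The sketch below shows that computation with a placeholder model and synthetic data; KSURE's internal data is obviously not reproduced here.

```python
# Sketch: per-decile accuracy of predicted default probabilities, mirroring
# the paper's ten-interval comparison. Data and model are placeholders.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]              # predicted P(default)

df = pd.DataFrame({
    "bin": pd.cut(prob, bins=np.linspace(0, 1, 11), include_lowest=True),
    "correct": (prob >= 0.5).astype(int) == y_te,   # classification hit or miss
})
print(df.groupby("bin", observed=True)["correct"].agg(["mean", "size"]))
```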

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.6
    • /
    • pp.455-463
    • /
    • 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries. As sentence boundary candidates, the proposed method therefore considers every ending eomi (Korean final ending) as well as punctuation marks. We optimize engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of articles and blog documents, and Set2, consisting of web community documents, and we use F-measure to compare results. Detecting only periods as sentence boundaries, our baseline engine scored 96.5% on Set1 and 56.7% on Set2. We improved the baseline engine by adapting the features and the boundary search algorithm. For the final evaluation, we compared the adapted engine with the baseline engine on Set2; the adapted engine improved on the baseline by 39.6%, demonstrating the effectiveness of the proposed method for sentence boundary detection.
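
A minimal sketch of the statistical-plus-rule pipeline the abstract outlines: classify each candidate boundary (a punctuation mark or, in Korean, an ending eomi) from its surrounding characters, then apply a rule-based correction pass. The features, the tiny training set, and the decimal-number rule below are invented stand-ins for the paper's engineered versions.

```python
# Sketch: candidate-based sentence boundary detection with a rule-based
# correction pass. Training data, features, and the rule are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def context(text, i, w=6):
    """Character window around a candidate boundary position i."""
    return text[max(0, i - w): i + w]

# Tiny labeled set: (context window, is-boundary). A real system would label
# candidates (periods, ending eomis, ...) in human-annotated web documents.
train = [("요 감사합니", 1), ("다 그리고 ", 1), ("3.5 버전으", 0), ("니다 다음에", 1), ("약 1.2배 ", 0)]
clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit([t for t, _ in train], [label for _, label in train])

def is_boundary(text, i):
    # Rule-based correction: never split inside a decimal number like "3.5".
    if 0 < i < len(text) - 1 and text[i - 1].isdigit() and text[i + 1].isdigit():
        return 0
    return clf.predict([context(text, i)])[0]   # statistical decision otherwise
```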

PDA Personalized Agent System (PDA용 개인화 에이전트 시스템)

  • 표석진;박영택
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.345-352
    • /
    • 2002
  • Because the volume of information drives up time and communication costs, mobile Internet users want a personalized agent that provides services according to their interests, delivers customized information, and predicts relevant information in a knowledge-based way. This paper builds a PDA personalization agent system for such mobile Internet users. The system is built on a profile-based agent engine and a knowledge-based approach using user profiles. It monitors the actions a user performs on web pages to identify the documents the user is interested in, analyzes the documents obtained through information retrieval, and serves them grouped by each user's interests. To analyze the monitored documents effectively, the system uses Cobweb, an unsupervised clustering machine learning method, which clusters the retrieved information by the user's areas of interest using a conceptual approach. The clustering results are then fed back into machine learning to generate a profile of the user's documents of interest, and the profile is turned into rules on the basis of which the user is served. Because these rules are derived from monitoring results, they are updated periodically. The proposed system targets news documents produced by Internet newspapers and webzines for delivering news to users; it searches using the user's information, classifies the retrieved results, and provides them to the user via PDA or mobile phone.
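
Cobweb has no standard scikit-learn implementation, so the sketch below substitutes KMeans over TF-IDF vectors to show the shape of the pipeline: cluster monitored documents by interest, then turn each cluster's salient terms into a crude profile rule. It is an illustration of the described flow, not the paper's agent engine.

```python
# Sketch of the profile pipeline: cluster monitored documents by interest and
# turn each cluster's salient terms into a crude profile rule. The paper uses
# Cobweb (conceptual clustering); KMeans here is only a stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock market rally and interest rates",
    "central bank raises interest rates again",
    "league final ends with dramatic goal",
    "star striker transfers before the final",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

terms = np.array(vec.get_feature_names_out())
for c in range(2):
    centroid = X[labels == c].mean(axis=0).A1      # mean tf-idf per term
    rule_terms = terms[centroid.argsort()[-3:]]    # top terms as the "rule"
    print(f"IF doc mentions {list(rule_terms)} THEN serve to interest cluster {c}")
```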
