• Title/Summary/Keyword: tree based learning


A Study on the Prediction of Uniaxial Compressive Strength Classification Using Slurry TBM Data and Random Forest (이수식 TBM 데이터와 랜덤포레스트를 이용한 일축압축강도 분류 예측에 관한 연구)

  • Tae-Ho Kang;Soon-Wook Choi;Chulho Lee;Soo-Ho Chang
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.547-560
    • /
    • 2023
  • Recently, research on predicting ground classification using machine learning techniques, TBM excavation data, and ground data has been increasing. In this study, multi-class prediction of uniaxial compressive strength (UCS) was performed by applying a random forest model, a decision-tree-based method widely used in many fields, to machine data and ground data acquired at three slurry shield TBM sites. The data were divided into training and test sets at a 7:3 ratio, and a grid search with 5-fold cross-validation was used to select the optimal hyperparameters. The resulting multi-class model achieved high accuracy on both the training and test sets (0.983 and 0.982, respectively). However, because of the imbalanced class distribution, recall for class 4 was low. Additional measured UCS data acquired at a wider range of sites are needed to address this.
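The workflow described above (a 7:3 train/test split, grid search with 5-fold cross-validation, and random forest multi-class classification) maps naturally onto scikit-learn. The sketch below is a minimal illustration on synthetic stand-in data, not the authors' code or data.

```python
# Minimal sketch of the UCS multi-class workflow described above, on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, recall_score

# Stand-in for merged TBM machine data and ground data with 4 imbalanced UCS classes
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=4, weights=[0.4, 0.3, 0.2, 0.1], random_state=42)

# 7:3 split, stratified to preserve the (imbalanced) class distribution
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

# Grid search with 5-fold cross-validation over typical random forest hyperparameters
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5, scoring="accuracy", n_jobs=-1,
)
grid.fit(X_tr, y_tr)

pred = grid.predict(X_te)
print("test accuracy:", accuracy_score(y_te, pred))
print("per-class recall:", recall_score(y_te, pred, average=None))  # exposes weak minority classes
```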

A Context Recognition System for Various Food Intake using Mobile and Wearable Sensor Data (모바일 및 웨어러블 센서 데이터를 이용한 다양한 식사상황 인식 시스템)

  • Kim, Kee-Hoon;Cho, Sung-Bae
    • Journal of KIISE
    • /
    • v.43 no.5
    • /
    • pp.531-540
    • /
    • 2016
  • The proliferation of sensors in mobile and wearable devices has enabled services that recognize and respond to the user's current context. In this study, we propose a probabilistic model for recognizing a user's food intake, an activity that can occur in a great variety of situations. The model uses low-level sensor data from mobile and wrist-worn devices that are widely available in daily life. To cope with the inherent complexity and fuzziness of high-level activities such as eating, a context model represents the relevant contexts systematically based on the 4 components of activity theory and the 5 W's, and a tree-structured Bayesian network infers the probabilistic state. To verify the proposed method, we collected 383 minutes of data from 4 people over a week and found that the proposed method outperforms conventional machine learning methods in accuracy (93.21%). We also conducted a scenario-based test and investigated the contribution of individual components to recognition.
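A tree-structured Bayesian network of the kind mentioned above can be expressed compactly with pgmpy. The sketch below is a toy illustration, not the paper's model: the variables, states, and probabilities are invented, and the structure is simply a root activity node with conditionally independent sensor-derived children (each node has a single parent, hence a tree).

```python
# Toy tree-structured Bayesian network for food-intake recognition (invented CPDs).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Root: Eating (0 = no, 1 = yes); children: coarse sensor-derived contexts
model = BayesianNetwork([("Eating", "Location"), ("Eating", "WristMotion"), ("Eating", "TimeOfDay")])

cpds = [
    TabularCPD("Eating", 2, [[0.9], [0.1]]),                # prior on eating
    TabularCPD("Location", 2, [[0.7, 0.2], [0.3, 0.8]],     # P(at dining place | Eating)
               evidence=["Eating"], evidence_card=[2]),
    TabularCPD("WristMotion", 2, [[0.8, 0.3], [0.2, 0.7]],  # P(eating-like motion | Eating)
               evidence=["Eating"], evidence_card=[2]),
    TabularCPD("TimeOfDay", 2, [[0.6, 0.3], [0.4, 0.7]],    # P(meal time | Eating)
               evidence=["Eating"], evidence_card=[2]),
]
model.add_cpds(*cpds)
assert model.check_model()

# Infer the probability of eating given observed low-level evidence
posterior = VariableElimination(model).query(
    variables=["Eating"], evidence={"Location": 1, "WristMotion": 1})
print(posterior)
```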

Development and application of a floor failure depth prediction system based on the WEKA platform

  • Lu, Yao;Bai, Liyang;Chen, Juntao;Tong, Weixin;Jiang, Zhe
    • Geomechanics and Engineering
    • /
    • v.23 no.1
    • /
    • pp.51-59
    • /
    • 2020
  • In this paper, the WEKA platform was used to mine and analyze measured data on floor failure depth, and a prediction system for floor failure depth was developed in Java. Based on the standardization and discretization of 35 sets of measured floor failure depth data from China, a grey correlation analysis of five factors affecting floor failure depth was carried out. In descending order of correlation, these factors are: mining depth, working face length, floor failure resistance, mining thickness, and dip angle of the coal seam. A naive Bayes model, a neural network model, and a decision tree model were trained, and the confusion matrix, detailed accuracy, and node error rate were analyzed; the artificial neural network proved to be the optimal model. A prediction system for floor failure depth was then developed in Java. Using the system, predictions from measured data and error analyses were performed for nine data sets. The results show that the WEKA prediction formula has the smallest relative error and the best prediction performance. Its applicability was also analyzed: the WEKA prediction performs well for coal seam mining depths of 110 m~550 m, coal seam dip angles of 0°~15°, and working face lengths of 30 m~135 m.
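The model comparison described above was run in WEKA, but the same experiment (naive Bayes vs. neural network vs. decision tree, evaluated with a confusion matrix on a small data set) can be sketched in Python with scikit-learn. The feature names and labels below are synthetic stand-ins, not the paper's measured cases.

```python
# Rough scikit-learn analogue of the WEKA model comparison described above (synthetic stand-in data).
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

# Stand-in for the 35 measured cases: five influencing factors and a discretized failure-depth class
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(35, 5)),
                 columns=["mining_depth", "face_length", "failure_resistance",
                          "mining_thickness", "dip_angle"])
y = np.repeat([0, 1, 2], [12, 12, 11])              # three toy depth classes

models = {
    "naive_bayes": GaussianNB(),
    "neural_net": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)     # small data set, so evaluate by cross-validation
    print(name, accuracy_score(y, pred))
    print(confusion_matrix(y, pred))
```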

A Comparative Study on Game-Score Prediction Models Using Computational Thinking Education Game Data (컴퓨팅 사고 교육 게임 데이터를 사용한 게임 점수 예측 모델 성능 비교 연구)

  • Yang, Yeongwook
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.529-534
    • /
    • 2021
  • Computational thinking is regarded as one of the essential skills of the 21st century, and many countries have introduced and implemented computational thinking education courses. Among these, educational game-based approaches increase student participation and motivation and make computational thinking more accessible. Autothinking is an educational game developed to provide computational thinking education to learners: an adaptive system that dynamically gives learners feedback and automatically adjusts the difficulty according to the learner's computational thinking ability. However, because the game was designed around fixed rules, it cannot intelligently account for a learner's computational thinking or tailor its feedback. In this study, game data collected through Autothinking is introduced and used to predict game scores that reflect computational thinking, in order to increase the game's adaptivity. A comparative study was conducted on linear regression, decision tree, random forest, and support vector machine algorithms, which are the most commonly used for regression problems. Linear regression showed the best performance in predicting game scores.
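The four regressors compared above are all available in scikit-learn, so the experiment reduces to a simple loop; the data below are a synthetic stand-in for the Autothinking game logs, not the study's data set.

```python
# Minimal sketch of the regression comparison described above, on synthetic stand-in data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Stand-in for interaction-log features and the game-score target
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

regressors = {
    "linear_regression": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "svm": SVR(kernel="rbf", C=10.0),
}
for name, reg in regressors.items():
    # Negative MSE is scikit-learn's scoring convention; flip the sign for readability
    mse = -cross_val_score(reg, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"{name}: CV MSE = {mse:.1f}")
```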

A Study on a Real-Time Aerial Image-Based UAV-USV Cooperative Guidance and Control Algorithm (실시간 항공영상 기반 UAV-USV 간 협응 유도·제어 알고리즘 개발)

  • Do-Kyun Kim;Jeong-Hyeon Kim;Hui-Hun Son;Si-Woong Choi;Dong-Han Kim;Chan Young Yeo;Jong-Yong Park
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.61 no.5
    • /
    • pp.324-333
    • /
    • 2024
  • This paper focuses on cooperation between an Unmanned Aerial Vehicle (UAV) and an Unmanned Surface Vessel (USV). It aims to develop efficient guidance and control algorithms for the USV based on obstacle identification and path planning from aerial images captured by the UAV. Various obstacle scenarios were implemented using the Robot Operating System (ROS) and the Gazebo simulation environment. The aerial images transmitted in real time from the UAV to the USV are processed with the computer-vision deep learning model You Only Look Once (YOLO) to classify and recognize elements such as the water surface, obstacles, and ships. The recognized data are used to build a two-dimensional grid map, on which algorithms such as A* and Rapidly-exploring Random Tree star (RRT*) are used for path planning. This process enhances the guidance and control strategies of the UAV-USV collaborative system and, in particular, improves the navigational capability of the USV in complex and dynamic environments. This research offers significant insights into obstacle avoidance and path planning in maritime environments and proposes new directions for the integrated operation of UAVs and USVs.
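Planning on the two-dimensional grid map mentioned above is typically done with A*; the sketch below is a generic 4-connected A* over a boolean occupancy grid (the grid, start, and goal are made up), not the authors' implementation and without the RRT* variant.

```python
# Generic A* path planning on a 4-connected occupancy grid (0 = free, 1 = obstacle).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    heur = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])      # Manhattan distance
    open_set = [(heur(start, goal), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:                                          # reconstruct the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (new_g + heur(nxt, goal), new_g, nxt))
    return None                                                  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```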

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become an important task owing to the explosion of data on the Web. In the big data age, there are many information sources to consult when making decisions. For example, when considering a trip to a city, a person may search for reviews through a search engine such as Google or on social networking services (SNSs) such as blogs, Twitter, and Facebook; the polarity of positive and negative reviews helps the user decide whether to make the trip. Sentiment analysis of customer reviews has therefore become an important research topic as data mining technology is widely applied to text mining of the Web. Sentiment analysis classifies documents through machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and it is used to determine the attitude, position, and sensibility of people who write about various topics published on the Web. Regardless of polarity, emotional reviews are valuable material for analyzing customers' opinions, and sentiment analysis helps reveal what customers really want through automated text mining: subjective information is extracted from text on the Web and used to determine the attitude or position of the author toward a particular topic. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model that predicts hot topics using machine learning techniques such as logit, decision trees, and SVM. We employed sentiment analysis for both the selection and the detection of hot topics: a sentiment value is calculated for each document by contrasting and classifying its terms against a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive source because of its information on stock investment; users post numerous texts analyzing the market in response to government policy announcements, market reports, research-institute reports on the economy, and even rumors. We divided the forum's topics into 21 categories, and 144 topics were initially selected among them. Posts were crawled to build a positive and negative text database, and after preprocessing the text from March 2013 to February 2015 we obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms (CHAID, CART, and C4.5); the results of CHAID were subpar compared with the others. We also employed SVM to detect hot topics from the negative data, training the SVM models with the radial basis function (RBF) kernel and selecting its parameters by grid search.
Detecting hot topics via sentiment analysis gives investors the latest trends and hot topics in the stock forum without searching the vast amount of information on the Web. The proposed model also helps to rapidly gauge customers' signals or attitudes toward government policy and firms' products and services.
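The detection stage described above (an SVM with an RBF kernel tuned by grid search over text features) can be sketched with scikit-learn; the TF-IDF features, toy posts, labels, and parameter grid below are illustrative assumptions, not the paper's setup.

```python
# Illustrative hot-topic detection: TF-IDF features + RBF-kernel SVM tuned by grid search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

posts = ["policy announcement lifts banking shares",       # toy forum posts
         "rumor of rate hike hits the market",
         "research institute report boosts confidence",
         "quiet session with little trading interest"]
labels = [1, 0, 1, 0]                                       # 1 = hot topic, 0 = not (toy labels)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", SVC(kernel="rbf")),
])
grid = GridSearchCV(
    pipeline,
    param_grid={"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01, 0.1]},
    cv=2,                                                   # tiny toy corpus; use larger cv on real data
)
grid.fit(posts, labels)
print(grid.best_params_)
```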

Classification of Urban Green Space Using Airborne LiDAR and RGB Ortho Imagery Based on Deep Learning (항공 LiDAR 및 RGB 정사 영상을 이용한 딥러닝 기반의 도시녹지 분류)

  • SON, Bokyung;LEE, Yeonsu;IM, Jungho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.3
    • /
    • pp.83-98
    • /
    • 2021
  • Urban green space is an important component for enhancing urban ecosystem health, so identifying its spatial structure is required to manage a healthy urban ecosystem. Since 2010, the Ministry of Environment has provided the level 3 land cover map (the highest-resolution (1 m) map, with a total of 41 classes). However, detailed urban green information such as street trees is identified merely as grassland in the map or not classified as vegetated area at all. Therefore, this study classified detailed urban green information (tree, shrub, and grass), which is not included in the existing level 3 land cover map, using two types of high-resolution (<1 m) remote sensing data (airborne LiDAR and RGB ortho imagery) over Suwon, South Korea. U-Net, an image segmentation deep learning approach, was adopted to classify the detailed urban green space. A total of three classification models (LRGB10, LRGB5, and RGB5) were proposed, depending on the number of target classes and the types of input data. The average overall accuracies for the test sites were 83.40% (LRGB10), 89.44% (LRGB5), and 74.76% (RGB5). Among the three models, LRGB5, which uses both airborne LiDAR and RGB ortho imagery with five target classes (tree, shrub, grass, building, and others), performed best. The area ratio of total urban green space (trees, shrubs, and grass) for the whole of Suwon was 45.61% (LRGB10), 43.47% (LRGB5), and 44.22% (RGB5). On average, all models provided an additional 13.40% of urban tree information compared with the existing level 3 land cover map. These urban green classification results are expected to be used in various urban green studies and decision-making processes, as they provide detailed information on urban green space.
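A compact PyTorch sketch of a U-Net-style encoder-decoder, reduced to a single down/up level for brevity, gives a sense of the segmentation setup described above. The channel counts (3 RGB bands plus 1 LiDAR-derived band in, 5 classes out) echo the LRGB5 description, but the architecture details are simplified assumptions, not the authors' network.

```python
# Simplified U-Net-style segmentation network (one encoder/decoder level) in PyTorch.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=5):   # 3 RGB bands + 1 LiDAR band -> 5 classes
        super().__init__()
        self.enc = conv_block(in_channels, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)                    # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)                              # per-pixel class logits

model = MiniUNet()
logits = model(torch.randn(1, 4, 128, 128))              # (batch, classes, H, W)
print(logits.shape)                                      # torch.Size([1, 5, 128, 128])
```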

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, there is a clear need to study the integration of document summarization technology in the domestic news data environment. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large performance difference; for DT (Decision Tree), the full-text-based model performed somewhat better; and for LR (Logistic Regression), the summary-based model performed best, although the difference was not statistically significant. Thus, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model shows the potential for performance improvement. This study is an experimental application of extractive summarization to fake news detection using various machine learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison between different summarization technologies; an in-depth analysis applying various analytical techniques to a larger volume of data would be helpful in future work.
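A simple way to reproduce the pipeline described above — extractive summarization followed by text classification — is to score sentences by TF-IDF weight, keep the top few as the summary, and train a classifier on the summaries. The sketch below is one plausible realization with scikit-learn on toy data, not the summarization method or corpus used in the paper.

```python
# Extractive summarization (top TF-IDF sentences) feeding a logistic-regression detector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def extractive_summary(article, n_sentences=3):
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return ". ".join(sentences)
    tfidf = TfidfVectorizer().fit_transform(sentences)          # sentence-level TF-IDF
    scores = np.asarray(tfidf.sum(axis=1)).ravel()              # score = total term weight
    top = sorted(np.argsort(scores)[-n_sentences:])             # keep original sentence order
    return ". ".join(sentences[i] for i in top)

# Toy articles with fake/real labels (placeholders for the real news corpus)
articles = ["Breaking news. Officials deny the report. Experts disagree. More at eleven.",
            "The agency confirmed the figures. The report matches public records. No anomalies found."]
labels = [1, 0]   # 1 = fake, 0 = real

summaries = [extractive_summary(a) for a in articles]
detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
detector.fit(summaries, labels)
print(detector.predict([extractive_summary("Officials deny everything. Experts disagree strongly.")]))
```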

Development of disaster severity classification model using machine learning technique (머신러닝 기법을 이용한 재해강도 분류모형 개발)

  • Lee, Seungmin;Baek, Seonuk;Lee, Junhak;Kim, Kyungtak;Kim, Soojun;Kim, Hung Soo
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.4
    • /
    • pp.261-272
    • /
    • 2023
  • In recent years, natural disasters such as heavy rainfall and typhoons have occurred more frequently, and their severity has increased due to climate change. To reduce damage, the Korea Meteorological Administration (KMA) currently applies the same watch and warning criteria to all regions in Korea, based on maximum cumulative rainfall over 3-hour and 12-hour durations. However, KMA's criteria do not consider the regional characteristics of damage caused by heavy rainfall and typhoon events. It is therefore necessary to develop new criteria that account for regional damage characteristics and cumulative rainfall by duration, using four stages: blue, yellow, orange, and red. In this study, a classification model for the four-stage disaster severity, DSCM (Disaster Severity Classification Model), was developed using four machine learning models (Decision Tree, Support Vector Machine, Random Forest, and XGBoost) and applied to the local governments of Seoul, Incheon, and Gyeonggi Province. To develop DSCM, we used rainfall, cumulative rainfall, maximum rainfall over 3-hour and 12-hour durations, and antecedent rainfall as independent variables, and a four-class damage scale for heavy rain and typhoon damage in each local government as the dependent variable. The Decision Tree model achieved the highest accuracy, with an F1-score of 0.56. The developed DSCM can help identify disaster risk at each stage and contribute to reducing damage through efficient disaster management by local governments for specific events.
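The four-model comparison above (Decision Tree, SVM, Random Forest, and XGBoost, evaluated by F1-score on a four-class damage label) can be sketched as follows; the data are a synthetic stand-in for the rainfall/damage records, and the xgboost package is an assumed dependency.

```python
# Sketch of the four-class disaster severity comparison described above, on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Stand-in for rainfall-derived predictors and the 4-class damage label (blue/yellow/orange/red)
X, y = make_classification(n_samples=800, n_features=5, n_informative=4, n_redundant=0,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "xgboost": XGBClassifier(eval_metric="mlogloss", random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "macro F1 =", round(f1_score(y_te, model.predict(X_te), average="macro"), 3))
```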

Relation Extraction based on Extended Composite Kernel using Flat Lexical Features (평면적 어휘 자질들을 활용한 확장 혼합 커널 기반 관계 추출)

  • Chai, Sung-Pil;Jeong, Chang-Hoo;Chai, Yun-Soo;Myaeng, Sung-Hyon
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.8
    • /
    • pp.642-652
    • /
    • 2009
  • In order to improve the performance of existing relation extraction approaches, we propose a method for combining two pivotal kinds of information that play an important role in classifying semantic relationships between entities in text. Having built a composite kernel-based relation extraction system that incorporates both entity features and syntactic structural information from relation instances, we define nine classes of lexical features and apply them to the system in combination. Evaluation on the ACE RDC corpus shows that our approach boosts the effectiveness of existing composite kernels for relation extraction. It also confirms that by integrating the three important sources of information (entity features, syntactic structures, and contextual lexical features), we can improve the performance of relation extraction.
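The core idea of a composite kernel — combining a kernel over one view of the data (e.g., flat lexical/entity features) with a kernel over another (e.g., structural features) — can be illustrated with scikit-learn's precomputed-kernel SVM. The feature matrices and the simple weighted-sum combination below are illustrative stand-ins; the paper's actual composite kernel operates on parse structures and is more involved.

```python
# Illustration of a composite kernel: weighted sum of two Gram matrices fed to an SVM.
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_lexical = rng.normal(size=(40, 20))        # stand-in for flat lexical/entity features
X_structural = rng.normal(size=(40, 50))     # stand-in for syntactic-structure features
y = rng.integers(0, 2, size=40)              # toy relation labels

alpha = 0.5                                  # mixing weight between the two kernels
K = alpha * linear_kernel(X_lexical) + (1 - alpha) * rbf_kernel(X_structural)

clf = SVC(kernel="precomputed").fit(K, y)    # train on the combined Gram matrix

# Predicting for new instances requires the cross-kernel against the training set
X_lex_new, X_str_new = rng.normal(size=(3, 20)), rng.normal(size=(3, 50))
K_new = alpha * linear_kernel(X_lex_new, X_lexical) + (1 - alpha) * rbf_kernel(X_str_new, X_structural)
print(clf.predict(K_new))
```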