• Title/Summary/Keyword: MachineLearning

Hybrid Approach to Sentiment Analysis based on Syntactic Analysis and Machine Learning (구문분석과 기계학습 기반 하이브리드 텍스트 논조 자동분석)

  • Hong, Mun-Pyo;Shin, Mi-Young;Park, Shin-Hye;Lee, Hyung-Min
    • Language and Information / v.14 no.2 / pp.159-181 / 2010
  • This paper presents a hybrid approach to the sentiment analysis of online texts. The sentiment of a text refers to the feelings its author has towards a certain topic. Many existing approaches employ either a pattern-based method or a machine learning based method. The former shows relatively high precision in classifying sentiments but suffers from data sparseness, i.e. a lack of patterns; the latter shows relatively lower precision but 100% recall. The approach presented in the current work adopts the merits of both: it combines the pattern-based method with the machine learning based method so that both high precision and high recall can be maintained. Our experiment shows that the hybrid approach improves the F-measure score by more than 50% compared with the pattern-based approach and by around 1% compared with the machine learning based approach. The numerical improvement over the machine learning based approach may not seem encouraging, but the fact that the current approach classifies not only the sentiment or polarity of sentences but also additional information, such as the target of the sentiment, makes it promising.
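
A minimal sketch of one common way such a pattern/ML hybrid can be wired together, not the authors' actual system: a high-precision pattern matcher answers whenever it fires, and an ML classifier (here a hypothetical scikit-learn pipeline on toy data) recovers recall on the remaining sentences.

```python
# Illustrative sketch only: combining a pattern-based classifier with an
# ML fallback (not the paper's implementation).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written high-precision patterns.
POSITIVE_PATTERNS = [re.compile(r"\b(excellent|love|recommend)\b", re.I)]
NEGATIVE_PATTERNS = [re.compile(r"\b(terrible|hate|refund)\b", re.I)]

def pattern_sentiment(text):
    """Return 'pos'/'neg' if a pattern fires, else None (no coverage)."""
    if any(p.search(text) for p in POSITIVE_PATTERNS):
        return "pos"
    if any(p.search(text) for p in NEGATIVE_PATTERNS):
        return "neg"
    return None

# Toy training data for the ML fallback.
train_texts = ["great product", "awful service", "works fine", "broke quickly"]
train_labels = ["pos", "neg", "pos", "neg"]
ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
ml_model.fit(train_texts, train_labels)

def hybrid_sentiment(text):
    # High-precision patterns first; the ML model handles uncovered inputs.
    return pattern_sentiment(text) or ml_model.predict([text])[0]

print(hybrid_sentiment("I love this phone"))    # pattern hit
print(hybrid_sentiment("battery drains fast"))  # falls through to the ML model
```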


Evaluation of geological conditions and clogging of tunneling using machine learning

  • Bai, Xue-Dong;Cheng, Wen-Chieh;Ong, Dominic E.L.;Li, Ge
    • Geomechanics and Engineering / v.25 no.1 / pp.59-73 / 2021
  • The number of boreholes installed along a tunnel alignment is frequently inadequate. While geophysical imaging techniques are available for pre-tunnelling geological characterization, they aim to detect specific objects (e.g., water bodies and karst caves). There remains great motivation for the industry to develop a real-time identification technology relating complex geological conditions to the existing tunnelling parameters. This study explores the potential of machine learning-based, data-driven approaches to identify changes in geology during tunnel excavation. Further, the feasibility of machine learning-based anomaly detection approaches for detecting the development of clayey clogging is also assessed. The results of applying the machine learning-based approaches to Xi'an Metro line 4 are presented, where two tunnels buried in water-rich sandy soils at depths of 12-14 m were excavated using a 6.288 m diameter EPB shield machine. A reasonable agreement with the measurements verifies their applicability, widening the application horizon of machine learning-based approaches.
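
A hedged sketch of the kind of anomaly detection the abstract alludes to: an unsupervised model is fit on shield operating parameters logged during normal advance, and unusually scored rings are flagged as possible clogging. The parameter names, data, and model choice below are illustrative assumptions, not the paper's setup.

```python
# Conceptual sketch: unsupervised anomaly detection on EPB shield
# operating parameters (feature names and data are hypothetical).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: thrust force, cutterhead torque, advance rate, screw conveyor torque.
normal_rings = rng.normal(loc=[12000, 3000, 40, 150],
                          scale=[800, 200, 4, 10], size=(500, 4))
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_rings)

# A new ring with elevated torque and a reduced advance rate, as might be
# expected when clayey clogging develops.
new_ring = np.array([[12500, 4200, 25, 210]])
print(detector.predict(new_ring))            # -1 flags an anomaly
print(detector.decision_function(new_ring))  # more negative = more anomalous
```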

Effectiveness of Normalization Pre-Processing of Big Data to the Machine Learning Performance (빅데이터의 정규화 전처리과정이 기계학습의 성능에 미치는 영향)

  • Jo, Jun-Mo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.3 / pp.547-552 / 2019
  • Recently, the massive growth in the scale of data has become a major issue in Big Data. Since Big Data is also the input of machine learning, it should be preprocessed by normalization to obtain high machine learning performance. The performance varies with many factors, such as the scope of the columns in the Big Data or the normalization preprocessing method. In this paper, various normalization preprocessing methods and column scopes are applied to a support vector machine (SVM) as the machine learning method, in order to find an efficient environment for normalization preprocessing. The machine learning experiments were programmed in Python using the Jupyter Notebook.
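
The comparison the abstract describes can be reproduced in outline with scikit-learn; the dataset and the particular scalers below are placeholders, not the paper's experimental setup.

```python
# Sketch: effect of different normalization pre-processing steps on SVM accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

for name, scaler in [("no scaling", None),
                     ("min-max", MinMaxScaler()),
                     ("standardization", StandardScaler())]:
    steps = [scaler, SVC()] if scaler is not None else [SVC()]
    model = make_pipeline(*steps)
    model.fit(X_tr, y_tr)
    print(f"{name:16s} accuracy: {model.score(X_te, y_te):.3f}")
```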

Machine learning-based prediction of wind forces on CAARC standard tall buildings

  • Yi Li;Jie-Ting Yin;Fu-Bin Chen;Qiu-Sheng Li
    • Wind and Structures / v.36 no.6 / pp.355-366 / 2023
  • Although machine learning (ML) techniques have been widely used in various fields of engineering practice, their applications in the field of wind engineering are still at an initial stage. In order to evaluate the feasibility of machine learning algorithms for the prediction of wind loads on high-rise buildings, this study took the exposure category, wind direction and the height of the local wind force as the input features and adopted four different machine learning algorithms, including k-nearest neighbor (KNN), support vector machine (SVM), gradient boosting regression tree (GBRT) and extreme gradient (XG) boosting, to predict the wind force coefficients of the CAARC standard tall building model. All the hyper-parameters of the four ML algorithms were optimized by the tree-structured Parzen estimator (TPE). The results show that the mean drag force coefficients and RMS lift force coefficients are well predicted by the GBRT model, while the RMS drag force coefficients are best forecasted by the XG boosting model. The proposed machine learning based algorithms for wind load prediction can be an alternative to traditional wind tunnel tests and computational fluid dynamics simulations.
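
A hedged sketch of TPE-based hyper-parameter tuning for one of the four algorithms (GBRT) using the hyperopt library; the search space, synthetic data, and objective are illustrative assumptions rather than the study's configuration.

```python
# Sketch: tuning a gradient boosting regressor with the tree-structured
# Parzen estimator (TPE) from hyperopt. Data and search space are illustrative.
import numpy as np
from hyperopt import Trials, fmin, hp, tpe
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=3, noise=0.1, random_state=0)

space = {
    "n_estimators": hp.quniform("n_estimators", 50, 400, 50),
    "max_depth": hp.quniform("max_depth", 2, 6, 1),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
}

def objective(params):
    model = GradientBoostingRegressor(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        random_state=0,
    )
    # Minimize the negative cross-validated R^2.
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=30, trials=Trials())
print("best hyper-parameters:", best)
```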

Unsupervised Machine Learning based on Neighborhood Interaction Function for BCI(Brain-Computer Interface) (BCI(Brain-Computer Interface)에 적용 가능한 상호작용함수 기반 자율적 기계학습)

  • Kim, Gui-Jung;Han, Jung-Soo
    • Journal of Digital Convergence / v.13 no.8 / pp.289-294 / 2015
  • This paper proposes an autonomous machine learning method applicable to BCI (Brain-Computer Interface) systems, based on the Kohonen self-organizing method, one of the representative methods of unsupervised learning. In addition, we propose a method for controlling the learning region and a self-learning rule based on a neighborhood interaction function. The learning region control and learning rule are used to manage the side effects caused by the interaction function in the Kohonen self-organizing method. After the winner neuron is determined, the connection weights are adjusted according to the learning rule, and the learning region gradually shrinks as the number of learning iterations increases. In this way, the proposed autonomous machine learning reaches a network equilibrium state by drawing the output-layer weights toward the input.
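
A compact numpy sketch of the generic Kohonen update the abstract refers to: the winner neuron is found, a Gaussian neighborhood (interaction) function weights the update, and both the learning rate and the neighborhood radius decay over iterations. Map size, data, and decay schedules are illustrative assumptions, not the paper's specific rule.

```python
# Sketch: Kohonen self-organizing map with a Gaussian neighborhood
# (interaction) function and a shrinking learning region.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs, n_iters = 10, 3, 2000
weights = rng.random((n_neurons, n_inputs))   # output-layer weights
positions = np.arange(n_neurons)              # 1-D map topology
data = rng.random((500, n_inputs))            # unlabeled inputs

for t in range(n_iters):
    x = data[rng.integers(len(data))]
    # Winner neuron: closest weight vector to the input.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Learning rate and neighborhood radius both decay with iteration count,
    # so the learning region gradually shrinks toward the winner.
    lr = 0.5 * np.exp(-t / n_iters)
    radius = max(n_neurons / 2 * np.exp(-t / n_iters), 0.5)
    neighborhood = np.exp(-((positions - winner) ** 2) / (2 * radius ** 2))
    weights += lr * neighborhood[:, None] * (x - weights)

print("trained weight vectors:\n", weights.round(2))
```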

Stress Level Based Emotion Classification Using Hybrid Deep Learning Algorithm

  • Sivasankaran Pichandi;Gomathy Balasubramanian;Venkatesh Chakrapani
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.3099-3120 / 2023
  • The present fast-moving era brings serious stress issues that affect both elders and youngsters; everyone has experienced stress factors at least once in their lifetime. Stress is more common among youngsters, as they are new to the working environment, whereas stress factors for elders affect both the individual and overall performance in an organization. Electroencephalogram (EEG) based stress level classification is one of the widely used methodologies for stress detection. However, the signal processing methods developed so far have limitations, as most stress classification models compute the stress level in a predefined environment to detect individual stress factors. Specifically, machine learning based stress classification models require an additional algorithm for feature extraction, which increases the computation cost. Also, due to the limited feature learning characteristics of machine learning algorithms, classification performance is reduced and sometimes inaccurate. It is evident from numerous research works that deep learning models outperform machine learning techniques. Thus, to classify emotions based on stress level, a hybrid deep learning algorithm is presented in this research work. Compared to conventional deep learning models, hybrid models perform better in feature handling: deep learning models provide better feature extraction and selection, and adding machine learning classifiers to a deep learning architecture enhances classification performance. Accordingly, a hybrid convolutional neural network model is presented that extracts features with a CNN and classifies them with a support vector machine. Simulation analysis on benchmark datasets demonstrates the performance of the proposed model. Finally, existing methods are comparatively analyzed to demonstrate the better performance of the proposed model resulting from the proposed hybrid combination.
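
A minimal sketch of a CNN + SVM hybrid in the spirit of the abstract: a small CNN is trained with a temporary softmax head, its penultimate layer is reused as a feature extractor, and an SVM performs the final classification. Input shapes, layer sizes, and the random stand-in for EEG data are assumptions, not the paper's architecture.

```python
# Sketch: CNN feature extraction followed by SVM classification on
# EEG-like windows (all shapes and data are placeholders).
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

n_samples, n_timesteps, n_channels, n_classes = 200, 128, 8, 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, n_timesteps, n_channels)).astype("float32")
y = rng.integers(0, n_classes, size=n_samples)

# CNN trained end-to-end with a temporary softmax head.
inputs = tf.keras.Input(shape=(n_timesteps, n_channels))
h = tf.keras.layers.Conv1D(16, 5, activation="relu")(inputs)
h = tf.keras.layers.MaxPooling1D(2)(h)
h = tf.keras.layers.Conv1D(32, 3, activation="relu")(h)
h = tf.keras.layers.GlobalAveragePooling1D()(h)
features = tf.keras.layers.Dense(32, activation="relu", name="features")(h)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(features)
cnn = tf.keras.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Reuse the penultimate layer as a feature extractor and hand the
# features to an SVM, as in a CNN + SVM hybrid.
extractor = tf.keras.Model(inputs, cnn.get_layer("features").output)
svm = SVC(kernel="rbf")
svm.fit(extractor.predict(X, verbose=0), y)
print("training accuracy:", svm.score(extractor.predict(X, verbose=0), y))
```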

Study on Machine Learning Techniques for Malware Classification and Detection

  • Moon, Jaewoong;Kim, Subin;Song, Jaeseung;Kim, Kyungshin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4308-4325 / 2021
  • The importance and necessity of artificial intelligence, particularly machine learning, has recently been emphasized. Artificial intelligence, in the form of intelligent surveillance cameras and other security systems, is used to solve various problems or provide convenience, offering solutions to problems that humans traditionally had to deal with manually, one at a time. Among these domains, information security is one where the use of artificial intelligence is especially needed, because the frequency and volume of malicious code exceed human processing capabilities. Therefore, this study examines the definition of artificial intelligence and machine learning, their execution methods, processes, learning algorithms, and cases of utilization in various domains, particularly the cases and contents of artificial intelligence technology used in the field of information security. Based on this, this study proposes a method for applying machine learning technology to the classification and detection of malware, which has increased rapidly in recent years. The proposed methodology converts software programs containing malicious code into images and creates training data suitable for machine learning by preparing the data and augmenting the dataset. A model trained on the images created in this manner is expected to be effective in classifying and detecting malware.
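
A hedged sketch of the "program bytes to image" step the abstract describes: the raw bytes of a binary are reshaped into a 2-D grayscale image that can then be augmented and fed to an image classifier. The fixed image width, the helper name, and the sample path are assumptions for illustration only.

```python
# Sketch of converting a software binary into a grayscale image for
# ML-based malware classification (width and paths are hypothetical).
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Read a file's raw bytes and reshape them into a 2-D grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(data)] = data
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Example usage (hypothetical sample path); the resulting images can then be
# augmented (crops, flips, noise) to build a training set for a classifier.
# img = binary_to_grayscale("samples/suspicious.exe")
# img.save("samples/suspicious.png")
```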

Comparing automated and non-automated machine learning for autism spectrum disorders classification using facial images

  • Elshoky, Basma Ramdan Gamal;Younis, Eman M.G.;Ali, Abdelmgeid Amin;Ibrahim, Osman Ali Sadek
    • ETRI Journal / v.44 no.4 / pp.613-623 / 2022
  • Autism spectrum disorder (ASD) is a developmental disorder associated with cognitive and neurobehavioral disorders. It affects a person's behavior and performance, as well as verbal and non-verbal communication in social interactions. Early screening and diagnosis of ASD are essential and helpful for early educational planning and treatment, the provision of family support, and providing appropriate medical support for the child on time. Thus, developing automated methods for diagnosing ASD is becoming an essential need. Herein, we investigate the use of various machine learning methods to build predictive models for diagnosing ASD in children using facial images. To achieve this, we used an autistic children dataset containing 2936 facial images of children with autism and typically developing children. We applied classical machine learning methods, such as support vector machine and random forest, in addition to deep-learning methods and a state-of-the-art approach, automated machine learning (AutoML). Comparing the results obtained from these techniques, we found that AutoML achieved the highest performance, approximately 96% accuracy, via Hyperopt and the tree-based pipeline optimization tool (TPOT). Furthermore, the AutoML methods enabled us to easily find the best parameter settings without any human effort for feature engineering.
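
An outline, under stated assumptions, of how the AutoML side of such a comparison can be run with TPOT (the tree-based pipeline optimization tool): synthetic features stand in for facial-image descriptors, and the generation/population settings are placeholders rather than the study's configuration.

```python
# Sketch: AutoML pipeline search with TPOT on stand-in feature data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

automl = TPOTClassifier(generations=5, population_size=20,
                        random_state=0, verbosity=0)
automl.fit(X_tr, y_tr)                     # searches pipelines automatically
print("held-out accuracy:", automl.score(X_te, y_te))
automl.export("best_pipeline.py")          # exports the winning pipeline
```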

Machine Learning Language Model Implementation Using Literary Texts (문학 텍스트를 활용한 머신러닝 언어모델 구현)

  • Jeon, Hyeongu;Jung, Kichul;Kwon, Kyoungah;Lee, Insung
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.427-436 / 2021
  • The purpose of this study is to implement a machine learning language model that learns from literary texts. Literary texts have an important characteristic: question-and-answer pairs are often not clearly distinguished. Literary texts also contain pronouns, figurative expressions, soliloquies, and so on, which make them difficult for learning algorithms to process and thus hinder their use in machine learning. Yet algorithms that learn from literary texts can exhibit more human-friendly interactions than algorithms that learn from ordinary sentences. To this end, this paper proposes three text correction tasks that must precede research using literary texts for a machine learning language model: pronoun processing, dialogue pair expansion, and data amplification. Training data for artificial intelligence should have clear meanings to facilitate machine learning and ensure high effectiveness. The introduction of special genres of text such as literature into natural language processing research is expected not only to expand the learning domain of machine learning, but also to demonstrate a new method of language learning.
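
A loose, hypothetical illustration of the "dialogue pair expansion" idea: quoted utterances are extracted from a literary passage, consecutive utterances are paired into prompt-response training examples, and simple variation stands in for data amplification. The heuristics here are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical illustration of dialogue pair expansion for a literary text.
import re

passage = [
    '"Where have you been?" she asked.',
    '"Out by the river," he said.',
    '"You should have told me."',
    '"I know. I am sorry."',
]

def extract_utterance(line):
    """Keep only the quoted speech, dropping the narration around it."""
    match = re.search(r'"([^"]+)"', line)
    return match.group(1) if match else line

utterances = [extract_utterance(line) for line in passage]

# Pair each utterance with the next one as (prompt, response) training data.
pairs = list(zip(utterances[:-1], utterances[1:]))

# Naive amplification: also add lower-cased variants of each pair.
amplified = pairs + [(p.lower(), r.lower()) for p, r in pairs]

for prompt, response in amplified:
    print(f"Q: {prompt}  ->  A: {response}")
```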

Goal-oriented Movement Reality-based Skeleton Animation Using Machine Learning

  • Yu-Won JEONG
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.267-277 / 2024
  • This paper explores the use of machine learning in game production to create goal-oriented, realistic animations for skeleton monsters. The purpose of this research is to enhance realism by implementing intelligent movements in monsters within game development. To achieve this, we designed and implemented a learning model for skeleton monsters using reinforcement learning algorithms. During the machine learning process, various reward conditions were established, including the monster's speed, direction, leg movements, and goal contact, and configurable joints introduced physical constraints. The experimental method validated performance through seven statistical graphs generated during machine learning. The results demonstrate that the developed model allows skeleton monsters to move to their target points efficiently and with natural animation. This paper has implemented a method for creating game monster animations using machine learning, which can be applied in various gaming environments in the future. The year 2024 is expected to bring expanded innovation in the gaming industry. Advancements in technologies such as virtual reality, AI, and cloud computing are redefining the sector, providing new experiences and various opportunities. Innovative content optimized for this period is needed to offer new gaming experiences, and a high level of interaction and realism, along with the immersion and enjoyment they induce, must form the foundation of the environments in which these are implemented. Recent advancements in AI technology are significantly impacting the gaming industry: by supplying many elements necessary for game development, AI can efficiently optimize the game production environment. Through this research, we demonstrate that applying machine learning within Unity and other game engines can contribute to creating more dynamic and realistic game environments. To ensure that VR gaming does not end as a mere craze, we propose new methods in this study to enhance realism and immersion, thereby increasing enjoyment for continuous user engagement.
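
A conceptual Python sketch of the kind of composite reward the abstract lists (speed, direction, leg movements, goal contact); in practice such rewards are computed inside the agent script in the game engine (e.g., Unity ML-Agents), and all terms and weights below are illustrative assumptions rather than the paper's reward design.

```python
# Conceptual sketch of a composite reward for a goal-seeking skeleton agent.
# Weights and terms are illustrative only.
import numpy as np

def step_reward(velocity, to_goal_dir, leg_phase_error, touched_goal):
    speed_term = 0.1 * np.linalg.norm(velocity)                  # reward moving at all
    heading = velocity / (np.linalg.norm(velocity) + 1e-8)
    direction_term = 0.5 * float(np.dot(heading, to_goal_dir))   # move toward the goal
    gait_penalty = -0.2 * leg_phase_error                        # discourage unnatural legs
    goal_bonus = 5.0 if touched_goal else 0.0                    # sparse terminal bonus
    return speed_term + direction_term + gait_penalty + goal_bonus

# Example: moving roughly toward the goal with a small gait error.
print(step_reward(velocity=np.array([1.0, 0.0, 0.2]),
                  to_goal_dir=np.array([0.9, 0.0, 0.1]),
                  leg_phase_error=0.3,
                  touched_goal=False))
```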