• Title/Summary/Keyword: machine learning applications

An Improved Co-training Method without Feature Split (속성분할이 없는 향상된 협력학습 방법)

  • 이창환;이소민
    • Journal of KIISE: Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1259-1265
    • /
    • 2004
  • In many applications, producing labeled data is costly and time consuming, while an enormous amount of unlabeled data is available at little cost. It is therefore natural to ask whether we can take advantage of this unlabeled data in classification learning. In the machine learning literature, the co-training method has been widely used for this purpose. However, the current co-training method requires the entire feature set to be split into two independent sets. In this paper, we therefore improve the current co-training method in a number of ways and propose a new co-training method that does not need the feature split. Experimental results show that our proposed method can significantly improve the performance of the current co-training algorithm.
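
The loop below is a minimal co-training sketch, not the paper's algorithm: two different base learners (Gaussian Naive Bayes and a decision tree, chosen arbitrarily here) share the full feature set instead of two disjoint feature views, and each adds its most confident pseudo-labels to the shared training pool. The data at the end is a synthetic placeholder.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def co_train(X_l, y_l, X_u, rounds=10, per_round=5):
    """Two learners on the full feature set pseudo-label confident points."""
    clf_a, clf_b = GaussianNB(), DecisionTreeClassifier(max_depth=5)
    for _ in range(rounds):
        clf_a.fit(X_l, y_l)
        clf_b.fit(X_l, y_l)
        for clf in (clf_a, clf_b):
            if len(X_u) == 0:
                break
            proba = clf.predict_proba(X_u)
            top = np.argsort(proba.max(axis=1))[-per_round:]  # most confident points
            pseudo = clf.classes_[proba[top].argmax(axis=1)]  # their pseudo-labels
            X_l = np.vstack([X_l, X_u[top]])
            y_l = np.concatenate([y_l, pseudo])
            X_u = np.delete(X_u, top, axis=0)
    return clf_a, clf_b

# Toy usage with synthetic placeholder data: 20 labeled, 180 unlabeled samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] > 0).astype(int)
clf_a, clf_b = co_train(X[:20], y[:20], X[20:])
```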

A Machine Learning-Driven Approach for Wildfire Detection Using Hybrid-Sentinel Data: A Case Study of the 2022 Uljin Wildfire, South Korea

  • Linh Nguyen Van;Min Ho Yeon;Jin Hyeong Lee;Gi Ha Lee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.175-175
    • /
    • 2023
  • Detection and monitoring of wildfires are essential for limiting their harmful effects on ecosystems, human lives, and property. In this research, we propose a novel method, running on the Google Earth Engine platform, for identifying and characterizing burnt regions using a hybrid of Sentinel-1 (C-band synthetic aperture radar) and Sentinel-2 (multispectral imagery) data. The 2022 Uljin wildfire, the most severe event in South Korean history, is the primary focus of our investigation. Given its documented success in remote sensing and land cover classification applications, we select the Random Forest (RF) method as our primary classifier. We then evaluate the performance of our model using multiple accuracy measures, including overall accuracy (OA), the Kappa coefficient, and the area under the curve (AUC). The proposed method demonstrates accurate and resilient wildfire identification compared to traditional methods that depend on survey data. These results have significant implications for the development of efficient and dependable wildfire monitoring systems and add to our knowledge of how machine learning and remote sensing-based approaches may be combined to improve environmental monitoring and management applications.
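
As a rough illustration of the classification step, the sketch below trains a scikit-learn Random Forest on a placeholder per-pixel feature matrix and reports OA, Kappa, and AUC; the actual study runs in Google Earth Engine on exported Sentinel-1/Sentinel-2 features, which are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder for exported per-pixel features, e.g. [VV, VH, B4, B8, B12, NBR].
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 5] + 0.3 * rng.normal(size=2000) > 0).astype(int)  # placeholder: 1 = burnt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("OA   :", accuracy_score(y_te, pred))
print("Kappa:", cohen_kappa_score(y_te, pred))
print("AUC  :", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```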

Improving Parsing Efficiency Using Chunking in Chinese-Korean Machine Translation (중한번역에서 구 묶음을 이용한 파싱 효율 개선)

  • 양재형;심광섭
    • Journal of KIISE: Software and Applications
    • /
    • v.31 no.8
    • /
    • pp.1083-1091
    • /
    • 2004
  • This paper presents a chunking system employed as a preprocessing module for the parser in a Chinese-to-Korean machine translation system. The parser benefits from the dependency information provided by the chunking module. The chunking system was implemented using the transformation-based learning technique, and an effective interface that conveys the dependency information to the parser was also devised. The module was integrated into the machine translation system, and experiments were performed with corpora collected from Chinese websites. The experimental results show that the introduction of the chunking module provides noticeable improvements in the parser's performance.
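
For illustration only, the snippet below uses NLTK's rule-based RegexpParser (not the paper's transformation-based chunker for Chinese) to show how phrase chunks can be computed before full parsing and handed to the parser as pre-grouped units, shrinking its search space.

```python
import nltk

# Simple cascaded chunk grammar over POS tags (illustrative rules only).
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}   # noun phrase chunk
  PP: {<IN><NP>}            # prepositional phrase chunk
"""
chunker = nltk.RegexpParser(grammar)

tagged = [("the", "DT"), ("translation", "NN"), ("system", "NN"),
          ("parses", "VBZ"), ("long", "JJ"), ("sentences", "NNS")]
tree = chunker.parse(tagged)

# Chunk spans that would be handed to the parser as pre-grouped units.
chunks = [subtree.leaves() for subtree in tree.subtrees()
          if subtree.label() in ("NP", "PP")]
print(chunks)
```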

Neural networks optimization for multi-dimensional digital signal processing in IoT devices (IoT 디바이스에서 다차원 디지털 신호 처리를 위한 신경망 최적화)

  • Choi, KwonTaeg
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1165-1173
    • /
    • 2017
  • Deep learning, one of the best-known machine learning approaches, has proven its applicability in various applications and is widely used in digital signal processing. However, it is difficult to apply deep learning technology to IoT devices with limited CPU performance and memory capacity, because a large number of training samples requires substantial memory and computation time. In particular, if an Arduino with a very small memory capacity of 2 KB to 8 KB is used, there are many limitations in implementing such algorithms. In this paper, we propose a method to optimize the ELM algorithm, which has proven accurate and efficient in various fields, for the Arduino board. Experiments show that multi-class learning is possible for data of up to 15 dimensions on an Arduino UNO with 2 KB of memory, and up to 42 dimensions on an Arduino MEGA with 8 KB of memory. To evaluate the approach, we demonstrated the effectiveness of the proposed algorithm using data sets generated with Gaussian mixture modeling as well as public UCI data sets.
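
A reference NumPy sketch of the ELM (Extreme Learning Machine) algorithm being ported: hidden weights are random and fixed, and the output weights come from a single closed-form least-squares solve. The paper's memory-optimized Arduino C implementation is not reproduced; the toy data below is a placeholder.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, n_classes=3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)

# Toy usage with placeholder data: three Gaussian clusters in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
W, b, beta = elm_train(X, y, n_hidden=20, n_classes=3)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```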

Research Trends on Deep Learning for Anomaly Detection of Aviation Safety (딥러닝 기반 항공안전 이상치 탐지 기술 동향)

  • Park, N.S.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.5
    • /
    • pp.82-91
    • /
    • 2021
  • This study reviews the application of data-driven anomaly detection techniques in the aviation domain. Recent advances in deep learning have inspired significant anomaly detection research, and numerous methods have been proposed. However, some of these advances have not yet been explored in aviation systems. After briefly introducing aviation safety issues, data-driven anomaly detection models are introduced. Along with traditional statistical and well-established machine learning models, state-of-the-art deep learning models for anomaly detection are reviewed. In particular, the pros and cons of hybrid techniques that combine an existing model with a deep model are reviewed. The characteristics and applications of deep learning models are described, and the possibility of applying deep learning methods in the aviation field is discussed.
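
As a generic example of one deep model family covered by such surveys, the sketch below flags anomalies by autoencoder reconstruction error; the network size, threshold, and placeholder data are illustrative assumptions, not taken from the article.

```python
import numpy as np
import tensorflow as tf

n_features = 32                      # assumed flight-parameter dimensionality
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, n_features)).astype("float32")  # placeholder "normal" flights
X_test = rng.normal(size=(200, n_features)).astype("float32")    # placeholder test data

ae = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(8, activation="relu"),    # bottleneck
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(n_features),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_train, X_train, epochs=20, batch_size=64, verbose=0)

# Flag samples whose reconstruction error exceeds a chosen percentile.
errors = np.mean((ae.predict(X_test, verbose=0) - X_test) ** 2, axis=1)
anomalies = errors > np.percentile(errors, 99)
print("flagged anomalies:", int(anomalies.sum()))
```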

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.47-49
    • /
    • 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area has been the application of Deep Learning (DL) in computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning. We look into the benefits of switching from traditional ROS-based navigation algorithms to machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and DRL before focusing on visual navigation based on DRL. The case study is a prelude to further real-life deployment, in which a mobile navigation agent learns to navigate unfamiliar areas.
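
To make the underlying RL update concrete, the toy sketch below runs tabular Q-learning on a 4x4 grid; the study itself concerns deep RL with visual input inside ROS, which this placeholder does not attempt to reproduce.

```python
import numpy as np

n_states, n_actions, goal = 16, 4, 15        # 4x4 grid; actions: up/down/left/right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    r, c = divmod(state, 4)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
    r, c = min(max(r + dr, 0), 3), min(max(c + dc, 0), 3)
    nxt = r * 4 + c
    return nxt, (1.0 if nxt == goal else -0.01), nxt == goal

for _ in range(2000):                         # training episodes
    s = 0
    for _ in range(100):                      # step cap per episode
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, reward, done = step(s, a)
        # Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

print("greedy action per state:", Q.argmax(axis=1))
```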

Optimal Machine Learning Model for Detecting Normal and Malicious Android Apps (안드로이드 정상 및 악성 앱 판별을 위한 최적합 머신러닝 기법)

  • Lee, Hyung-Woo;Lee, HanSeong
    • Journal of Internet of Things and Convergence
    • /
    • v.6 no.2
    • /
    • pp.1-10
    • /
    • 2020
  • Mobile applications based on the Android platform are simple to decompile, making it possible to create malicious applications that resemble normal ones and to distribute them easily through third-party Android app stores. Such malicious applications on a smartphone cause several problems, including leakage of personal information stored on the device, transmission of premium SMS, and leakage of location information and call records. It is therefore necessary to select an optimal model that provides the best performance among recently published machine learning techniques and to provide a technique that automatically identifies malicious Android apps. In this paper, after applying feature engineering to Android apps in an official test set, a total of four performance evaluation experiments were conducted to select the machine learning model that provides the optimal performance for Android malicious app detection.
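
A hedged sketch of the model-selection step: several candidate scikit-learn classifiers are compared by cross-validation on a placeholder binary feature matrix standing in for the paper's engineered app features (permissions, API calls, and so on).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 100))        # placeholder binary app features
y = rng.integers(0, 2, size=500)               # placeholder labels: 1 = malicious

candidates = {
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```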

Security tendency analysis techniques through machine learning algorithms applications in big data environments (빅데이터 환경에서 기계학습 알고리즘 응용을 통한 보안 성향 분석 기법)

  • Choi, Do-Hyeon;Park, Jung-Oh
    • Journal of Digital Convergence
    • /
    • v.13 no.9
    • /
    • pp.269-276
    • /
    • 2015
  • Recently, with the growth of the industry related to big data, global security companies have expanded their scope from structured to unstructured data for intelligent security threat monitoring and prevention, and they show a trend toward using user tendency analysis for security prevention. This is because the information that can be deduced from analyzing only the existing structured (readily quantifiable) data is limited. This study performs security tendency analysis (classifying items by purpose, judging positive or negative orientation, and analyzing the relevance of key keywords) by applying machine learning algorithms (Naïve Bayes, Decision Tree, K-nearest neighbor, Apriori) in a big data environment. The capability analysis confirmed that the security items and specific indexes needed to determine security tendency could be extracted from both structured and unstructured data.
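
As a small illustration of one of the named algorithms, the sketch below trains a Naive Bayes classifier to label short security-related records as positive or negative tendency; the documents and labels are invented placeholders, not the paper's corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder records of user security behaviour with tendency labels.
docs = [
    "changed password and enabled two factor authentication",
    "clicked unknown link in phishing mail",
    "installed security patch after notification",
    "shared account credentials with colleague",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["ignored update and reused old password"]))
```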

Failure estimation of the composite laminates using machine learning techniques

  • Serban, Alexandru
    • Steel and Composite Structures
    • /
    • v.25 no.6
    • /
    • pp.663-670
    • /
    • 2017
  • The problem of layup optimization of composite laminates involves a very complex multidimensional solution space, which is usually explored non-exhaustively using heuristic computational methods such as genetic algorithms (GA). To ensure convergence to the global optimum of the applied heuristic during the optimization process, it is necessary to evaluate a large number of layup configurations. As a consequence, the analysis of an individual layup configuration should be fast enough to keep the convergence time within an acceptable range. On the other hand, the mechanical behavior analysis of composite laminates for arbitrary geometry and boundary conditions is very involved and is performed by computationally expensive numerical tools such as finite element analysis (FEA). In this respect, some studies propose very fast FEA models for use in layup optimization. However, the lower bound on the execution time of FEA models is determined by the global linear system solve, which in some complex applications can be unacceptable. Moreover, in some situations it may be highly preferable to decrease the optimization time at the cost of a small reduction in analysis accuracy. In this paper we explore machine learning techniques for estimating the failure of a layup configuration. The estimated response can be qualitative (the configuration fails or not) or quantitative (the value of the failure factor). The procedure consists of generating a population of random observations (configurations) spread across the solution space and evaluating them using an FEA model. The machine learning method is then trained on this population, and the trained model is used to estimate failure during the optimization process. The results obtained are very promising, as illustrated by an example in which the misclassification rate of the qualitative response is smaller than 2%.
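
The sketch below illustrates the surrogate idea under stated assumptions: random layup configurations are labeled by a placeholder function standing in for the FEA failure check, a classifier is trained on them, and the optimizer can then query the fast surrogate instead of FEA for most candidates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
angles = np.array([0, 45, -45, 90])            # allowed ply angles (assumed)

def fea_fails(layup):
    # Placeholder for the real FEA failure check (e.g. Tsai-Wu, max stress).
    return np.abs(layup).mean() > 40

population = rng.choice(angles, size=(1000, 8))          # 8-ply random layups
labels = np.array([fea_fails(l) for l in population])

surrogate = RandomForestClassifier(n_estimators=300).fit(population, labels)

candidate = rng.choice(angles, size=(1, 8))              # e.g. a GA offspring
if surrogate.predict(candidate)[0]:
    print("predicted to fail -> skip the expensive FEA evaluation")
```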

Non-linearity Mitigation Method of Particulate Matter using Machine Learning Clustering Algorithms (기계학습 군집 알고리즘을 이용한 미세먼지 비선형성 완화방안)

  • Lee, Sang-gwon;Cho, Kyoung-woo;Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2019.05a
    • /
    • pp.341-343
    • /
    • 2019
  • As the occurrence of high-concentration particulate matter increases, much attention is focused on particulate matter prediction. Particulate matter refers to particles in the atmosphere less than 10 μm in diameter and is affected by weather conditions such as temperature, relative humidity, and wind speed. Therefore, various studies have analyzed the correlation with weather information for particulate matter prediction. However, the nonlinear time-series distribution of particulate matter increases the complexity of the prediction model and can lead to inaccurate predictions. In this paper, we attempt to mitigate the nonlinear characteristics of particulate matter by using clustering and classification algorithms from machine learning. The machine learning algorithms used are agglomerative clustering and density-based spatial clustering of applications with noise (DBSCAN).
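
A minimal sketch of the two named clustering algorithms partitioning weather and particulate-matter samples, assuming a synthetic placeholder feature matrix; in the paper, such clusters would precede a separate prediction step fitted per cluster.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder columns: temperature, relative humidity, wind speed, PM10 concentration.
X = StandardScaler().fit_transform(rng.normal(size=(500, 4)))

agglo_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)

print("agglomerative cluster sizes:", np.bincount(agglo_labels))
print("DBSCAN clusters (label -1 = noise):", set(dbscan_labels))
```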
