• Title/Summary/Keyword: Conventional machine learning

Machine Learning Based Neighbor Path Selection Model in a Communication Network

  • Lee, Yong-Jin
    • International journal of advanced smart convergence / v.10 no.1 / pp.56-61 / 2021
  • Neighbor path selection pre-selects alternate routes for use when geographically correlated failures occur simultaneously in a communication network. Conventional heuristic-based algorithms no longer improve the solutions because they cannot sufficiently exploit historical failure information. We present a novel solution model for neighbor path selection that uses a machine learning technique. The proposed machine learning neighbor path selection (ML-NPS) model is composed of five modules: random graph generation, data set creation, machine learning modeling, neighbor path prediction, and path information acquisition. It is implemented in Python with Keras on TensorFlow and executed on a Raspberry Pi 4B single-board computer. Performance evaluations via numerical simulation show that the neighbor-path communication success probability of our model is 26% higher, on average, than that of the conventional heuristic.
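
The five-module pipeline above maps readily onto a small supervised workflow. Below is a minimal sketch, not the authors' implementation: it assumes a hypothetical per-path feature vector, synthetic data in place of the random-graph and data-set-creation modules, and an illustrative Keras network of the kind the abstract describes running on TensorFlow.

```python
# Minimal sketch (not the authors' code): a Keras classifier that maps
# failure-history features of candidate paths to a "good neighbor path" label.
# Feature layout and layer sizes are illustrative assumptions.
import numpy as np
from tensorflow import keras

n_features = 8          # e.g., per-path failure counts, hop count, distance (assumed)
n_samples = 1000

# Synthetic stand-in for the "random graph generation" and "data set creation" modules
X = np.random.rand(n_samples, n_features).astype("float32")
y = (X[:, 0] + X[:, 1] < 1.0).astype("float32")   # dummy label: path survives correlated failure

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(n_features,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# "Neighbor path prediction" module: score candidate paths and pick the best one
candidates = np.random.rand(5, n_features).astype("float32")
best = int(np.argmax(model.predict(candidates, verbose=0)))
print("selected neighbor path index:", best)
```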

An Approach to Applying Multiple Linear Regression Models by Interlacing Data in Classifying Similar Software

  • Lim, Hyun-il
    • Journal of Information Processing Systems / v.18 no.2 / pp.268-281 / 2022
  • The development of information technology is bringing many changes to everyday life, and machine learning can be used as a technique to solve a wide range of real-world problems. Analyzing and utilizing data are essential steps in applying machine learning to such problems. As a method of processing data in machine learning, we propose an approach that applies multiple linear regression models trained on interlaced data to the task of classifying similar software. Linear regression is widely used in estimation problems to model the relationship between input and output data. In our approach, multiple linear regression models are generated by training on interlaced feature data, and a combination of these models is then used as the prediction model for classifying similar software. Experiments are performed to compare the proposed approach with conventional linear regression, and the results show that the proposed method classifies similar software more accurately than the conventional model. We anticipate that the proposed approach can be applied to various kinds of classification problems to improve the accuracy of conventional linear regression.
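
The abstract does not spell out the interlacing scheme, so the sketch below is one plausible reading: feature columns are split into interlaced (every k-th column) subsets, one LinearRegression model is trained per subset, and their averaged output is thresholded for classification. The data, the value of k, and the combination rule are illustrative assumptions.

```python
# Minimal sketch (assumed interpretation): train several linear regression models
# on interlaced feature subsets and combine their outputs for classification.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # software feature vectors (synthetic)
y = (X[:, :4].sum(axis=1) > 0).astype(float)    # 1 = "similar", 0 = "not similar" (dummy)

k = 3  # number of interlaced models (assumed)
models = []
for i in range(k):
    cols = np.arange(i, X.shape[1], k)          # interlace: every k-th feature column
    m = LinearRegression().fit(X[:, cols], y)
    models.append((cols, m))

def predict(x_new):
    # average the k regression outputs and threshold at 0.5
    scores = [m.predict(x_new[:, cols]) for cols, m in models]
    return (np.mean(scores, axis=0) >= 0.5).astype(int)

print(predict(X[:5]))
```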

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of devices that generate massive amounts of data in our daily lives. Machine learning has been used to recognize patterns in data across many areas, providing convenient tools to sectors including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location to perform the model training. This classical model has caused data owners to worry about the risks of transferring private information, because traditional machine learning requires their data to be pushed to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a newer approach known as federated learning, which trains artificial intelligence models over distributed clients and preserves the privacy of the data owners' information. This paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting the sensitive data, and compares the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries additional risks, such as privacy concerns, because the data are stored in a central data center. The MNIST dataset was used in this experiment.
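
A minimal Federated Averaging sketch on MNIST follows; it is illustrative only, not the paper's implementation. The client partitioning (five IID shards), the network architecture, and the number of rounds are assumptions; the core step is the element-wise averaging of client weights into the global model.

```python
# Minimal FedAvg sketch (illustrative, not the paper's implementation):
# each client trains locally for one epoch, then the server averages the weights.
import numpy as np
from tensorflow import keras

def make_model():
    m = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    m.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

(x, y), _ = keras.datasets.mnist.load_data()
x, y = x[:6000].astype("float32") / 255.0, y[:6000]    # small subset to keep the demo fast
clients = np.array_split(np.arange(len(x)), 5)          # 5 clients, IID split (assumed)

global_model = make_model()
for rnd in range(3):                                    # communication rounds
    client_weights = []
    for idx in clients:
        local = make_model()
        local.set_weights(global_model.get_weights())   # start from the current global weights
        local.fit(x[idx], y[idx], epochs=1, batch_size=64, verbose=0)
        client_weights.append(local.get_weights())
    # federated averaging: element-wise mean of each weight tensor across clients
    global_model.set_weights([np.mean(w, axis=0) for w in zip(*client_weights)])
```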

Improvement of Activity Recognition Based on Learning Model of AI and Wearable Motion Sensors (웨어러블 동작센서와 인공지능 학습모델 기반에서 행동인지의 개선)

  • Ahn, Junguk; Kang, Un Gu; Lee, Young Ho; Lee, Byung Mun
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.982-990 / 2018
  • In recent years, many wearable devices and mobile apps related to life care have been developed, and services that measure movement during walking and report the amount of exercise have been provided. However, they do not measure walking in detail, so there may be errors in the total calorie consumption. If the user's behavior is measured by a multi-axis sensor and a machine learning algorithm is trained to recognize the kind of behavior, the detailed phases of walking can be distinguished autonomously and the total calorie consumption can be calculated more accurately than with the conventional method. To verify this, we measured activities and created a model using a machine learning algorithm. The comparison experiment confirmed that the average recognition accuracy was at least 12.5% higher than that of the conventional method. In addition, when measuring the amount of exercise, the calorie-consumption accuracy was more than 49.53% higher than that of the conventional method. If activity recognition is performed using a wearable device and a machine learning algorithm, both the recognition accuracy and the accuracy of the energy-consumption calculation can be improved.
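
As a rough illustration of the idea, the sketch below classifies activity windows from a 3-axis motion sensor using simple statistical features and a generic classifier. The window size, feature set, activity labels, and the random-forest choice are all assumptions, not the paper's method.

```python
# Minimal sketch (assumptions: 3-axis accelerometer windows, simple statistical
# features, a generic classifier) of activity recognition for walking sub-types.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
windows = rng.normal(size=(300, 100, 3))        # 300 windows x 100 samples x 3 axes (synthetic)
labels = rng.integers(0, 3, size=300)           # e.g., 0=slow walk, 1=brisk walk, 2=stairs (assumed)

def features(w):
    # per-axis mean and standard deviation, plus a signal-magnitude-style feature
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [np.abs(w).sum() / len(w)]])

X = np.array([features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```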

Optimization of Fuzzy Learning Machine by Using Particle Swarm Optimization (PSO 알고리즘을 이용한 퍼지 Extreme Learning Machine 최적화)

  • Roh, Seok-Beom; Wang, Jihong; Kim, Yong-Soo; Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.87-92 / 2016
  • In this paper, particle swarm optimization is used to optimize the parameters of a fuzzy Extreme Learning Machine. While the learning speed of conventional neural networks is very slow, that of the Extreme Learning Machine is very fast. The fuzzy Extreme Learning Machine combines the Extreme Learning Machine, with its very fast learning speed, and fuzzy logic, which can represent the linguistic knowledge of field experts. The usual sigmoid function serves as the activation function of the Extreme Learning Machine; in the fuzzy Extreme Learning Machine, however, the activation function is the membership function defined during the fuzzy C-means clustering procedure. We optimize the parameters of these membership functions using particle swarm optimization. To validate the classification capability of the proposed classifier, we perform several experiments on various machine learning data sets.
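
One way to read the described model is an Extreme Learning Machine whose hidden activations are fuzzy membership degrees to a set of centers, with those centers tuned by particle swarm optimization. The sketch below follows that reading on synthetic data; the membership form, swarm settings, and data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a fuzzy ELM: hidden activations are FCM-style membership degrees
# to a set of centers, and a toy PSO loop tunes the centers.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(float).reshape(-1, 1)
n_hidden = 10

def memberships(X, centers, m=2.0):
    # fuzzy C-means style membership of each sample to each center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    u = 1.0 / (d ** (2 / (m - 1)))
    return u / u.sum(axis=1, keepdims=True)

def fitness(centers):
    H = memberships(X, centers)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # ELM output weights by least squares
    return np.mean((H @ beta - y) ** 2)               # training MSE as the PSO fitness

# toy PSO over the flattened center coordinates
swarm = rng.normal(size=(20, n_hidden * X.shape[1]))
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pbest_f = np.array([fitness(p.reshape(n_hidden, -1)) for p in swarm])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(30):
    r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm += vel
    f = np.array([fitness(p.reshape(n_hidden, -1)) for p in swarm])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = swarm[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best training MSE:", pbest_f.min())
```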

Performance Comparison of Machine Learning Algorithms for Received Signal Strength-Based Indoor LOS/NLOS Classification of LTE Signals

  • Lee, Halim; Seo, Jiwon
    • Journal of Positioning, Navigation, and Timing / v.11 no.4 / pp.361-368 / 2022
  • An indoor navigation system that utilizes long-term evolution (LTE) signals has the benefit of requiring no additional infrastructure installation and low base-station database management costs. Among LTE signal measurements, received signal strength (RSS) is particularly appealing because it can be easily obtained with mobile devices. Propagation channel models can be used to estimate the position of mobile devices from RSS. However, conventional channel models have a shortcoming in that they do not discriminate between line-of-sight (LOS) and non-line-of-sight (NLOS) conditions of the received signal. Accordingly, a previous study suggested separate LOS and NLOS channel models, but did not devise a method for determining LOS and NLOS conditions. In this study, a machine learning-based LOS/NLOS classification method using RSS measurements is developed. We suggest several machine-learning features and evaluate various machine-learning algorithms. In an indoor experiment, up to 87.5% classification accuracy was achieved with an ensemble algorithm. Furthermore, range estimation with an average error of 13.54 m was demonstrated, a 25.3% improvement over the conventional channel model.
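
A hedged sketch of RSS-based LOS/NLOS classification follows. The features (window mean, standard deviation, range) and the gradient-boosting ensemble are stand-ins chosen for illustration; the paper's actual features, data, and algorithms are not given in the abstract.

```python
# Minimal sketch (features and data are assumptions): classify LOS vs. NLOS
# from RSS measurements using simple sliding-window statistics and an ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# synthetic RSS windows in dBm: NLOS is assumed weaker and more variable (toy assumption)
los = rng.normal(-70, 2, size=(200, 20))
nlos = rng.normal(-85, 6, size=(200, 20))
windows = np.vstack([los, nlos])
labels = np.array([1] * 200 + [0] * 200)        # 1 = LOS, 0 = NLOS

X = np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                     windows.max(axis=1) - windows.min(axis=1)])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("LOS/NLOS test accuracy:", clf.score(X_te, y_te))
```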

Research Trends in Wi-Fi Performance Improvement in Coexistence Networks with Machine Learning (기계학습을 활용한 이종망에서의 Wi-Fi 성능 개선 연구 동향 분석)

  • Kang, Young-myoung
    • Journal of Platform Technology / v.10 no.3 / pp.51-59 / 2022
  • Machine learning, which has developed rapidly in recent years, has become an important technology that can solve various optimization problems. In this paper, we introduce the latest research papers that solve the problem of channel sharing in heterogeneous networks using machine learning, analyze the characteristics of the mainstream approaches, and present a guide to future research directions. Existing studies have generally adopted Q-learning, since it supports fast learning in both online and offline environments. However, these studies have either not considered diverse coexistence scenarios or have overlooked the placement of the machine-learning controllers, which can have a significant impact on network performance. One powerful way to overcome these shortcomings is to select a machine learning algorithm according to changes in the network environment, based on the logical network architecture for machine learning proposed by the ITU.
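
Since most of the surveyed works adopt Q-learning for channel sharing, a minimal tabular Q-learning sketch is shown below. The state space (channel-occupancy level), the actions (defer or transmit), and the reward shaping are toy assumptions, not any specific paper's design.

```python
# Minimal tabular Q-learning sketch for a channel-access decision.
# States, actions, and rewards are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 4, 2            # states: observed occupancy level; actions: 0=defer, 1=transmit
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount factor, exploration rate

state = 0
for step in range(5000):
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    busy = state >= 2                                  # toy rule: high occupancy means a busy channel
    reward = 1.0 if (action == 1 and not busy) else (-1.0 if action == 1 else 0.0)
    next_state = int(rng.integers(n_states))           # toy environment dynamics
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
print(Q)
```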

Machine Learning Model to Predict Osteoporotic Spine with Hounsfield Units on Lumbar Computed Tomography

  • Nam, Kyoung Hyup; Seo, Il; Kim, Dong Hwan; Lee, Jae Il; Choi, Byung Kwan; Han, In Ho
    • Journal of Korean Neurosurgical Society / v.62 no.4 / pp.442-449 / 2019
  • Objective : Bone mineral density (BMD) is an important consideration during fusion surgery. Although dual X-ray absorptiometry is considered the gold standard for assessing BMD, quantitative computed tomography (QCT) provides more accurate data for spine osteoporosis. However, QCT has the disadvantages of additional radiation exposure and cost. The present study aimed to demonstrate the utility of artificial intelligence and machine learning algorithms for assessing osteoporosis using Hounsfield units (HU) from preoperative lumbar CT coupled with QCT data. Methods : We reviewed 70 patients undergoing both QCT and conventional lumbar CT for spine surgery. The T-scores of 198 lumbar vertebrae were assessed with QCT, and the HU of the vertebral body at the same level were measured on conventional CT through the picture archiving and communication system (PACS). A multiple regression algorithm was applied to predict the T-score using three independent variables (age, sex, and HU of the vertebral body on conventional CT), with the QCT T-score as the target. Next, a logistic regression algorithm was applied to classify vertebrae as osteoporotic or non-osteoporotic. TensorFlow and Python were used as the machine learning tools, and a TensorFlow user interface developed at our institute was used for easy code generation. Results : The predictive model with the multiple regression algorithm estimated T-scores similar to the QCT data. The HU-based predictions matched QCT except for a single discordant vertebra, which was non-osteoporotic on QCT but indicated as osteoporotic. From the training set, the predictive model classified the lumbar vertebrae into two groups (osteoporotic vs. non-osteoporotic spine) with 88.0% accuracy. In a test set of 40 vertebrae, classification accuracy was 92.5% when the learning rate was 0.0001 (precision, 0.939; recall, 0.969; F1 score, 0.954; area under the curve, 0.900). Conclusion : This study presents a simple machine learning model applicable in the spine research field. The model can predict the T-score and identify osteoporotic vertebrae solely by measuring the HU on conventional CT, which would help spine surgeons avoid underestimating the osteoporotic spine preoperatively. If applied to a bigger data set, we believe the predictive accuracy of our model will further increase. We propose that machine learning is an important modality in the medical research field.
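
The two-stage idea described in the Methods can be sketched directly: regress the QCT T-score on age, sex, and HU, then classify osteoporotic status. The code below uses synthetic data and scikit-learn rather than the authors' TensorFlow interface; the coefficients, sex coding, and thresholds are illustrative assumptions, not clinical values.

```python
# Minimal sketch of the two-stage idea on synthetic data:
# (1) regress the QCT T-score on age, sex, and CT Hounsfield units;
# (2) classify osteoporotic vs. non-osteoporotic vertebrae from the same inputs.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
n = 198
age = rng.integers(40, 85, n)
sex = rng.integers(0, 2, n)                          # 0 = male, 1 = female (assumed coding)
hu = rng.normal(150, 50, n)                          # vertebral body HU on conventional CT
t_score = 0.02 * hu - 0.05 * age - 0.3 * sex - 1.0 + rng.normal(0, 0.3, n)   # synthetic relation
osteoporotic = (t_score <= -2.5).astype(int)         # standard T-score threshold

X = np.column_stack([age, sex, hu])
reg = LinearRegression().fit(X, t_score)                        # stage 1: predict the T-score
clf = LogisticRegression(max_iter=1000).fit(X, osteoporotic)    # stage 2: classify osteoporosis

print("predicted T-score:", reg.predict(X[:1]))
print("osteoporosis probability:", clf.predict_proba(X[:1])[0, 1])
```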

Study on Automatic Bug Triage using Deep Learning (딥 러닝을 이용한 버그 담당자 자동 배정 연구)

  • Lee, Sun-Ro; Kim, Hye-Min; Lee, Chan-Gun; Lee, Ki-Seong
    • Journal of KIISE / v.44 no.11 / pp.1156-1164 / 2017
  • Existing studies on automatic bug triage have mostly designed the prediction system around a machine learning algorithm, so applying a high-performance machine learning model is central to the performance of an automatic bug triage system. In related research, high-performance machine learning models such as SVM and Naïve Bayes have mainly been used. In this paper, we apply deep learning, which has recently shown good performance in the field of machine learning, to automatic bug triage and evaluate its performance. Experimental results show that the deep learning-based bug triage system achieves 48% accuracy in active developer experiments, an improvement of up to 69% over conventional machine learning techniques.
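
As a rough sketch of deep-learning bug triage, the code below maps bug-report text to a developer using TF-IDF features and a small Keras network. The toy reports, developer labels, and architecture are assumptions; the paper's actual network is not described in the abstract.

```python
# Minimal sketch of deep-learning bug triage: map bug report text to the
# developer most likely to fix it (data and architecture are assumptions).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow import keras

reports = ["crash when opening settings", "memory leak in renderer",
           "login button unresponsive", "renderer crashes on resize"]
developers = np.array([0, 1, 2, 1])                  # developer index per report (toy labels)

X = TfidfVectorizer().fit_transform(reports).toarray().astype("float32")
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
    keras.layers.Dense(3, activation="softmax"),     # 3 candidate developers (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, developers, epochs=20, verbose=0)
print("predicted developer:", int(model.predict(X[:1], verbose=0).argmax()))
```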

Comparative Application of Various Machine Learning Techniques for Lithology Predictions (다양한 기계학습 기법의 암상예측 적용성 비교 분석)

  • Jeong, Jina; Park, Eungyu
    • Journal of Soil and Groundwater Environment / v.21 no.3 / pp.21-34 / 2016
  • In the present study, we comparatively applied various machine learning techniques to the prediction of subsurface structures based on multiple kinds of secondary information (i.e., well-logging data). The machine learning techniques employed in this study are Naive Bayes classification (NB), artificial neural network (ANN), support vector machine (SVM), and logistic regression classification (LR). As alternative models, a conventional hidden Markov model (HMM) and a modified hidden Markov model (mHMM) are used, in which additional information on the transition probability between primary properties is incorporated into the predictions. For the comparisons, 16 boreholes composed of four different materials are synthesized, showing directional non-stationarity in the upward and downward directions. Furthermore, two types of secondary information that are statistically related to each material are generated. The comparative analysis across various case studies shows that the accuracies of the techniques degrade when additive errors are included and when the amount of training data is small. For the HMM predictions, the conventional HMM shows accuracies similar to the models that do not rely on transition probability. However, the mHMM consistently shows the highest prediction accuracy among the test cases, which can be attributed to the incorporation of geological characteristics in the training of the model.
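
A compact version of the comparative setup can be sketched with scikit-learn: synthetic two-feature "well-log" data for four lithology classes, evaluated with NB, ANN, SVM, and LR. The HMM variants are omitted for brevity, and all data and settings are illustrative assumptions rather than the paper's synthetic boreholes.

```python
# Minimal sketch of the comparative setup: four of the compared classifiers
# evaluated on synthetic well-log-style features for four lithology classes.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# four lithology classes, two secondary (well-logging) properties per sample
X = np.vstack([rng.normal(loc=[i, -i], scale=0.8, size=(100, 2)) for i in range(4)])
y = np.repeat(np.arange(4), 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "NB": GaussianNB(),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    print(name, round(m.fit(X_tr, y_tr).score(X_te, y_te), 3))
```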