• Title/Summary/Keyword: machine learning applications

Search Results: 538

Deep Learning Frameworks for Cervical Mobilization Based on Website Images

  • Choi, Wansuk; Heo, Seoyoon
    • Journal of International Academy of Physical Therapy Research / v.12 no.1 / pp.2261-2266 / 2021
  • Background: Deep learning research on web-based medical images has been actively conducted in health care; however, articles related to the musculoskeletal system remain scarce, and deep learning studies on classifying orthopedic manual therapy images are only just beginning. Objectives: To create a deep learning model that categorizes cervical mobilization images and to build a web application to examine its clinical utility. Design: Research and development. Methods: Three types of cervical mobilization images (central posteroanterior (CPA) mobilization, unilateral posteroanterior (UPA) mobilization, and anteroposterior (AP) mobilization) were collected using a 'Download All Images' function and a web crawler. Duplicate and unnecessary images were filtered with 'Auslogics Duplicate File Finder', leaving 144 final images (CPA=62, UPA=46, AP=36). A three-class model was trained in Teachable Machine. The trained model source was then uploaded to a cloud integrated development environment (https://ide.goorm.io/) and the web application frame was built. The trained model was tested in three environments: Teachable Machine File Upload (TMFU), Teachable Machine Webcam (TMW), and Web Service Webcam (WSW). Results: Across the three environments (TMFU, TMW, WSW), the accuracy for CPA mobilization images was 81-96%. The accuracy for UPA mobilization images was 43-94%, with a larger deviation than CPA. The accuracy for AP mobilization images was 65-75%, with a smaller deviation than the other groups. Across the three environments, the average accuracy for CPA was 92%, while UPA and AP were both around 70%. Conclusion: This study suggests that training images of orthopedic manual therapy with open machine learning software is feasible, and that web applications built from such a trained model can be used clinically.
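
As a rough illustration of the deployment step described in this abstract, the sketch below loads a Teachable Machine image model exported in Keras format and classifies a single image. The file names, input size, and normalization follow Teachable Machine's default export conventions and are assumptions, not details taken from the paper.

```python
# Minimal sketch: classifying a cervical mobilization image with a model
# exported from Teachable Machine (Keras .h5 export). "keras_model.h5" and
# "labels.txt" are the default export file names and are assumed here.
import numpy as np
from PIL import Image
from tensorflow import keras

model = keras.models.load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]  # e.g. CPA, UPA, AP

def classify(image_path: str):
    # Teachable Machine image models expect 224x224 RGB inputs scaled to [-1, 1].
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0
    probs = model.predict(x[np.newaxis, ...])[0]
    return class_names[int(np.argmax(probs))], float(np.max(probs))

label, confidence = classify("cpa_example.jpg")  # hypothetical test image
print(f"Predicted {label} with confidence {confidence:.2f}")
```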

From Machine Learning Algorithms to Superior Customer Experience: Business Implications of Machine Learning-Driven Data Analytics in the Hospitality Industry

  • Egor Cherenkov; Vlad Benga; Minwoo Lee; Neil Nandwani; Kenan Raguin; Marie Clementine Sueur; Guohao Sun
    • Journal of Smart Tourism / v.4 no.2 / pp.5-14 / 2024
  • This study explores the transformative potential of machine learning (ML) and ML-driven data analytics in the hospitality industry. It provides a comprehensive overview of this emerging approach, from the origins of ML to the evolution of ML-driven data analytics in hospitality. The study emphasizes the shift embodied in ML, away from explicit programming toward a self-learning, adaptive approach refined over time through big data. As an industry-specific example, social media analytics has progressed from simplistic metrics to deriving nuanced qualitative insights into consumer behavior. Additionally, the study explores innovative applications of these technologies in the hospitality sector, such as demand forecasting, personalized marketing, and predictive maintenance. It also examines the integration of ML and social media analytics, discussing implications such as enhanced customer personalization, real-time decision-making capabilities, optimized marketing campaigns, and improved fraud detection. In conclusion, ML-driven data analytics has become indispensable in the strategic and operational machinery of contemporary hospitality businesses, and these technologies are projected to remain significant in propelling data-centric advancements across the industry.

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han; Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.3B / pp.279-289 / 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research over the last decade. In this study, we present an application of a powerful state-space reconstruction methodology using Support Vector Machines (SVM) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few of the past observed data points (the support vectors, SV) from the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results for short-term prediction. Thus, the SVM method presented in this study offers a competitive methodology for forecasting hydrologic time series.
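
The following sketch illustrates SVM regression on a delay-embedded univariate series, in the spirit of the state-space reconstruction described above. The embedding dimension, lag, kernel settings, and the synthetic series are illustrative assumptions rather than the paper's calibrated GSL setup.

```python
# Minimal sketch: delay embedding + SVR one-step-ahead forecasting.
import numpy as np
from sklearn.svm import SVR

def delay_embed(series, dim=3, lag=1):
    """Build (X, y) pairs where X holds `dim` lagged values and y is the next value."""
    X, y = [], []
    for t in range(dim * lag, len(series)):
        X.append([series[t - k * lag] for k in range(1, dim + 1)])
        y.append(series[t])
    return np.array(X), np.array(y)

# Synthetic stand-in for the biweekly lake-volume series.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=400))

X, y = delay_embed(series, dim=3, lag=1)
split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print("test RMSE:", rmse)
```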

A Survey on Deep Reinforcement Learning Libraries (심층강화학습 라이브러리 기술동향)

  • Shin, S.J.; Cho, C.L.; Jeon, H.S.; Yoon, S.H.; Kim, T.Y.
    • Electronics and Telecommunications Trends / v.34 no.6 / pp.87-99 / 2019
  • Reinforcement learning is a machine learning paradigm in which agents repeat the observation-action-reward process to assess and predict the value of possible future action sequences, allowing them to incrementally reinforce the desired behavior for a given observation. Thanks to recent advances in deep learning, reinforcement learning has evolved into deep reinforcement learning, which has produced promising results in various control and optimization domains such as games, robotics, autonomous vehicles, computing, and industrial control. Alongside this trend, a number of programming libraries have been developed for bringing deep reinforcement learning into a variety of applications. In this article, we briefly review and summarize 10 representative deep reinforcement learning libraries and compare them from a development project perspective.
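
For readers unfamiliar with the observation-action-reward cycle mentioned above, the sketch below shows the bare agent-environment loop that deep reinforcement learning libraries wrap. Gymnasium's CartPole environment and the random policy are illustrative assumptions; the surveyed libraries substitute a trained neural-network policy for the random action choice.

```python
# Minimal sketch of the observation-action-reward loop underlying deep RL.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # a learned policy would act on `obs` here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward               # the reward signal is what drives learning
    done = terminated or truncated

print("episode return:", total_reward)
env.close()
```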

Evaluating Efficiency of Life Insurance Companies Utilizing DEA and Machine Learning (자료봉합분석과 기계학습을 이용한 생명보험회사의 효율성 평가)

  • Hong, Han-Kook; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.7 no.1 / pp.63-79 / 2001
  • Data Envelopment Analysis (DEA), a non-parametric productivity analysis tool, has become an accepted approach for assessing efficiency in a wide range of fields. Despite its extensive applications and merits, some features of DEA remain problematic. In particular, DEA offers no guideline on the direction in which relatively inefficient DMUs should improve, since the reference set of an inefficient DMU, consisting of several efficient DMUs, hardly provides a stepwise path for improving its efficiency. In this paper, we show that DEA can be used to evaluate the efficiency of life insurance companies while overcoming this limitation with the aid of machine learning methods.
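
As a rough sketch of the DEA side of this approach, the snippet below computes input-oriented CCR efficiency scores by solving one linear program per DMU with SciPy. The toy input/output matrix is an assumption; the paper's insurer data and its machine-learning extension are not reproduced.

```python
# Minimal sketch: input-oriented CCR DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (e.g. insurers); columns = inputs / outputs (toy numbers)
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 100.0], [20.0, 200.0]])  # inputs
Y = np.array([[1000.0], [1000.0], [1000.0], [800.0]])                        # outputs

def ccr_efficiency(o: int) -> float:
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # minimize theta; variables = [theta, lambdas]
    A_in = np.c_[-X[o], X.T]                 # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros(s), -Y.T]         # sum_j lam_j * y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]                          # efficiency score in (0, 1]

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```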


Construction of Korean Knowledge Base Based on Machine Learning from Wikipedia (위키백과로부터 기계학습 기반 한국어 지식베이스 구축)

  • Jeong, Seok-won; Choi, Maengsik; Kim, Harksoo
    • Journal of KIISE / v.42 no.8 / pp.1065-1070 / 2015
  • The performance of many natural language processing applications depends on the knowledge base as a major resource. WordNet, YAGO, Cyc, and BabelNet have been extensively used as knowledge bases in English. In this paper, we propose a method to automatically construct a YAGO-style knowledge base for Korean (hereafter, K-YAGO) from Wikipedia and YAGO. The proposed system constructs an initial K-YAGO simply by matching YAGO to info-boxes in Wikipedia. The initial K-YAGO is then expanded using a machine learning technique. Experiments with the initial K-YAGO show that the proposed system achieves a precision of 0.9642. In experiments with the expanded part of K-YAGO, an accuracy of 0.9468 was achieved, with an average macro F1-measure of 0.7596.
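
The infobox-matching step described above can be pictured as mapping infobox fields onto YAGO-style relations to produce triples. The field-to-relation mapping and the sample infobox below are hypothetical; the paper's actual mapping and its machine-learning expansion are not shown.

```python
# Minimal sketch: turning Wikipedia infobox fields into YAGO-style triples.
from typing import Dict, List, Tuple

# Hypothetical mapping from Korean infobox field names to YAGO-style relations.
FIELD_TO_RELATION: Dict[str, str] = {
    "출생일": "wasBornOnDate",
    "출생지": "wasBornIn",
    "직업": "hasOccupation",
}

def infobox_to_triples(entity: str, infobox: Dict[str, str]) -> List[Tuple[str, str, str]]:
    """Emit (subject, relation, object) triples for matched infobox fields."""
    triples = []
    for field, value in infobox.items():
        relation = FIELD_TO_RELATION.get(field)
        if relation is not None:
            triples.append((entity, relation, value))
    return triples

sample = {"출생일": "1973-09-01", "출생지": "서울", "직업": "소설가"}  # toy infobox
for triple in infobox_to_triples("홍길동", sample):
    print(triple)
```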

An Intelligent MAC Protocol Selection Method based on Machine Learning in Wireless Sensor Networks

  • Qiao, Mu; Zhao, Haitao; Huang, Shengchun; Zhou, Li; Wang, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5425-5448 / 2018
  • Wireless sensor networks have been widely used in Internet of Things (IoT) applications to support large and dense networks. Because sensor nodes are usually tiny and equipped with limited hardware resources, existing multiple-access methods, which involve high computational complexity to preserve protocol performance, are not suitable in such a scenario. In this paper, we propose an intelligent Medium Access Control (MAC) protocol selection scheme based on machine learning for wireless sensor networks. We jointly consider the impact of inherent behavior and external environments to address the application limitations of a single type of MAC protocol. The scheme can benefit from the combination of competitive and non-competitive protocols, and helps network nodes select the MAC protocol that best suits the current network conditions. Extensive simulation results validate our work and show that the accuracy of the proposed MAC protocol selection strategy is higher than that of existing work.
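
A minimal sketch of the selection idea, assuming a supervised formulation: a classifier maps observed network conditions to the protocol expected to perform best. The feature names, protocol labels, and synthetic labeling rule are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal sketch: learning a MAC-protocol selector from network-condition features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Features: [node density, traffic load (0-1), mean packet size (bytes)]
X = rng.uniform([5, 0.0, 32], [200, 1.0, 1024], size=(500, 3))
# Toy labeling rule: schedule-based MAC ("TDMA") for dense, heavily loaded
# networks, contention-based MAC ("CSMA") otherwise.
y = np.where((X[:, 0] > 80) & (X[:, 1] > 0.5), "TDMA", "CSMA")

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[150, 0.8, 256], [20, 0.2, 128]]))  # expected: ['TDMA' 'CSMA']
```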

Classification of Inverter Failure by Using Big Data and Machine Learning (빅데이터와 머신러닝 기반의 인버터 고장 분류)

  • Kim, Min-Seop; Shifat, Tanvir Alam; Hur, Jang-Wook
    • Journal of the Korean Society of Manufacturing Process Engineers / v.20 no.3 / pp.1-7 / 2021
  • With the advent of Industry 4.0, big data and machine learning techniques are being widely adopted in the maintenance domain. Inverters are widely used in many engineering applications; however, overloading and complex operating conditions may lead to various failures. In this study, a failure mode and effects analysis was performed on inverters, and voltages were collected to investigate the effect of over-voltage on capacitors. Several features indicating the health state of the inverter were extracted from the collected sensor data, and the best features were selected for classification based on their correlation. A random forest classifier was then used to classify the healthy and faulty states of the inverters, and its performance was evaluated with several metrics across the various health features.
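
A minimal sketch of the feature-extraction and random-forest classification pipeline described above, using synthetic voltage windows. The statistical health features (RMS, kurtosis, crest factor) are common condition-monitoring choices assumed here for illustration, not the paper's exact feature set.

```python
# Minimal sketch: statistical health features + random forest fault classification.
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_features(window: np.ndarray) -> list:
    rms = np.sqrt(np.mean(window ** 2))
    return [rms, kurtosis(window), np.max(np.abs(window)) / rms]  # RMS, kurtosis, crest factor

rng = np.random.default_rng(0)
healthy = [rng.normal(0, 1.0, 1024) for _ in range(100)]
faulty = [rng.normal(0, 1.0, 1024) + rng.normal(0, 0.8, 1024) ** 3 for _ in range(100)]

X = np.array([extract_features(w) for w in healthy + faulty])
y = np.array([0] * 100 + [1] * 100)  # 0 = healthy, 1 = faulty

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```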

Wellness Prediction in Diabetes Mellitus Risks Via Machine Learning Classifiers

  • Saravanakumar M, Venkatesh; Sabibullah, M.
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.203-208 / 2022
  • The prevalence of Type 2 Diabetes Mellitus (T2DM) is rising globally. Diabetes Mellitus of all kinds is estimated to affect over 415 million adults worldwide, and it was the seventh leading cause of death worldwide, with an estimated 1.6 million deaths directly caused by diabetes in 2016. Over 90% of diabetes cases are T2DM, and in the UK most affected persons have at least one other chronic condition. To evaluate contemporary applications of Big Data (BD) to diabetes care and its future capabilities, it is necessary to carry out a thorough review of the major theoretical literature. Long-term progress in medicine, and in the field of diabetology in particular, is strongly shaped by a series of changes and innovations. Medical and healthcare data from varied sources, such as diagnosis and treatment plans, help healthcare workers obtain accurate insights into the development of the diabetes care measures available to them. Apache Spark provides the Resilient Distributed Dataset (RDD), a core data structure distributed across a cluster of machines. Machine Learning (ML) offers a noteworthy approach for building intelligent and automatic algorithms. In this work, ML libraries comprising common ML algorithms, such as Support Vector Classification and Random Forest, are investigated using Python code in a Jupyter Notebook, where the key result (accuracy) is obtained from the models.
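
The following sketch shows how a random forest classifier might be trained on diabetes-risk records with Spark MLlib, in line with the tools named above. The CSV path and column names are hypothetical placeholders, not the study's dataset.

```python
# Minimal sketch: Spark MLlib random forest on diabetes-risk records.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("t2dm-risk").getOrCreate()
df = spark.read.csv("diabetes.csv", header=True, inferSchema=True)  # hypothetical path

feature_cols = ["Glucose", "BMI", "Age", "BloodPressure"]           # assumed columns
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "Outcome")

train, test = data.randomSplit([0.8, 0.2], seed=42)
rf = RandomForestClassifier(labelCol="Outcome", featuresCol="features", numTrees=100)
model = rf.fit(train)

evaluator = MulticlassClassificationEvaluator(labelCol="Outcome", metricName="accuracy")
print("accuracy:", evaluator.evaluate(model.transform(test)))
spark.stop()
```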

Compositional Feature Selection and Its Effects on Bandgap Prediction by Machine Learning (기계학습을 이용한 밴드갭 예측과 소재의 조성기반 특성인자의 효과)

  • Chunghee Nam
    • Korean Journal of Materials Research / v.33 no.4 / pp.164-174 / 2023
  • The bandgap characteristics of semiconductor materials are an important factor when utilizing these materials for various applications. In this study, based on data provided by AFLOW (Automatic-FLOW for Materials Discovery), the bandgap of a semiconductor material was predicted using only the material's compositional features. The compositional features were generated with the Python modules 'Pymatgen' and 'Matminer'. Pearson's correlation coefficients (PCC) between the compositional features were calculated, and features with a correlation coefficient larger than 0.95 were removed to avoid overfitting. Bandgap prediction performance was compared using the R2 score and root-mean-squared error. Predicting the bandgap with random forest and XGBoost as representative ensemble algorithms showed that XGBoost gave better results after cross-validation and hyper-parameter tuning. To investigate the effect of compositional feature selection on the bandgap prediction of the machine learning model, prediction performance was studied as a function of the number of features, ranked by feature importance. No significant changes in prediction performance were found beyond an appropriate number of features. Furthermore, artificial neural networks were employed to compare prediction performance while adjusting the number of features guided by the PCC values, resulting in a best R2 score of 0.811. By comparing and analyzing the bandgap distribution and prediction performance according to the material group containing specific elements (F, N, Yb, Eu, Zn, B, Si, Ge, Fe, Al), various information for material design was obtained.
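
A minimal sketch of the correlation-based feature filtering (dropping one of each feature pair with |PCC| > 0.95) followed by an XGBoost regression, assuming a synthetic feature table in place of the AFLOW-derived Pymatgen/Matminer compositional features.

```python
# Minimal sketch: PCC-based feature pruning + XGBoost bandgap-style regression.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 20)), columns=[f"feat_{i}" for i in range(20)])
X["feat_20"] = X["feat_0"] * 0.99 + rng.normal(scale=0.01, size=500)  # nearly duplicate feature
y = 1.5 * X["feat_0"] - 0.7 * X["feat_3"] + rng.normal(scale=0.1, size=500)

# Remove highly correlated features (|PCC| > 0.95), keeping the first of each pair.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
X_reduced = X.drop(columns=to_drop)

X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("dropped:", to_drop)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```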