Title/Summary/Keyword: machine data


PubMiner: Machine Learning-based Text Mining for Biomedical Information Analysis

  • Eom, Jae-Hong; Zhang, Byoung-Tak
    • Genomics & Informatics / v.2 no.2 / pp.99-106 / 2004
  • In this paper, we introduce PubMiner, an intelligent machine learning-based text mining system for mining biological information from the literature. PubMiner employs natural language processing techniques and machine learning-based data mining techniques to mine useful biological information, such as protein-protein interactions, from the massive body of literature. The system recognizes biological terms such as genes, proteins, and enzymes, and extracts the interactions between them described in a document through natural language processing. The extracted interactions are further analyzed together with a set of features of each entity, collected from related public databases, to infer additional interactions beyond the original ones. Both the inferred interactions and the native interactions are provided to the user with links to the literature sources. The performance of entity and interaction extraction was tested on selected MEDLINE abstracts. The inference step was evaluated using the protein interaction data of S. cerevisiae (baker's yeast) from MIPS and SGD.
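
The extraction step above can be illustrated with a minimal, hypothetical sketch of dictionary-based entity recognition and sentence-level interaction extraction; the gene names, verb list, and sample sentence below are invented, and PubMiner's actual NLP pipeline is considerably more sophisticated.

```python
# Hypothetical sketch of dictionary-based entity recognition and
# sentence-level interaction extraction, in the spirit of (but much
# simpler than) the pipeline described in the abstract.
import re

ENTITY_DICT = {"CDC28", "CLN1", "CLN2", "FUS3"}   # invented gene/protein names
INTERACTION_VERBS = {"binds", "phosphorylates", "activates", "inhibits"}

def extract_interactions(sentence):
    """Return (entity, verb, entity) triples co-occurring in one sentence."""
    tokens = re.findall(r"[A-Za-z0-9]+", sentence)
    entities = [t for t in tokens if t.upper() in ENTITY_DICT]
    verbs = [t for t in tokens if t.lower() in INTERACTION_VERBS]
    if len(entities) >= 2 and verbs:
        return [(entities[0], verbs[0], entities[1])]
    return []

text = "Fus3 phosphorylates Cln1 and Cln2 in vivo."
print(extract_interactions(text))   # [('Fus3', 'phosphorylates', 'Cln1')]
```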

A Residual Power Estimation Scheme Using Machine Learning in Wireless Sensor Networks (센서 네트워크에서 기계학습을 사용한 잔류 전력 추정 방안)

  • Bae, Shi-Kyu
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.67-74 / 2021
  • As IoT (Internet of Things) devices such as smart sensors have constrained power sources, a power strategy is critical in WSNs (wireless sensor networks). It is therefore necessary to know the residual power of each sensor node in order to manage power strategies in a WSN; obtaining it, however, requires additional data transmission, which leads to more power consumption. In this paper, a residual power estimation method is proposed that consumes a negligibly small amount of power in resource-constrained wireless networks, including WSNs. In this proposal, residual power can be predicted with minimal data transmission by using a machine learning method with a small amount of training data. The performance of the proposed scheme was evaluated through machine learning experiments, simulation, and analysis.
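
The abstract does not specify the model, so the sketch below assumes a plain linear regression over locally observable counters (elapsed time, cumulative transmissions) as a stand-in for the proposed estimator; all numbers are synthetic.

```python
# Hedged sketch: estimating a node's residual power from locally
# observable activity so the value need not be transmitted.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
elapsed_h = rng.uniform(0, 100, n)            # hours since deployment
tx_count = rng.poisson(20, n) * elapsed_h     # cumulative transmissions

# Synthetic ground truth: the battery drains with time and transmissions.
residual_mAh = 2000 - 3.0 * elapsed_h - 0.004 * tx_count + rng.normal(0, 5, n)

X = np.column_stack([elapsed_h, tx_count])
model = LinearRegression().fit(X, residual_mAh)

node = np.array([[48.0, 48.0 * 20]])          # a node after 48 h of activity
print(f"estimated residual power: {model.predict(node)[0]:.0f} mAh")
```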

A Study on the Mileage Prediction of Urban Railway Vehicle using Wheel Diameter/Flange change Data and Machine Learning Techniques (도시철도차량 주행차륜의 직경/플랜지 변화 데이터와 머신러닝 기법을 활용한 주행거리 예측 연구)

  • Hak Rak Noh; Won Sik Lim
    • Journal of the Korean Society of Safety / v.38 no.4 / pp.1-7 / 2023
  • The steel wheels of urban railway vehicles yield a large amount of data through regular measurements during maintenance. However, limited research has utilized this data, making it difficult to predict the maintenance period. This paper investigates a machine learning model suitable for mileage prediction by examining how mileage changes with wheel diameter and flange thickness. The results indicate that the larger the diameter, the longer the travel distance, and that travel distance is longest at a flange thickness of 30 mm, gradually shortening at other thicknesses. Comparing machine learning prediction models confirmed that the random forest model is optimal, with a high coefficient of determination and a low root mean square error.
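
A minimal sketch of the winning approach, a random forest regressor over wheel diameter and flange thickness, follows; the data, value ranges, and the assumed diameter/flange-to-mileage relationship are invented for illustration.

```python
# Hedged sketch: random forest regression of mileage on wheel geometry,
# reported with R^2 and RMSE as in the abstract. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
diameter = rng.uniform(770, 860, n)   # mm, assumed wheel-diameter range
flange = rng.uniform(26, 34, n)       # mm, flange thickness

# Assumed relationship echoing the abstract: mileage grows with diameter
# and peaks near a 30 mm flange thickness.
mileage = 50 * (diameter - 770) - 800 * (flange - 30) ** 2 + rng.normal(0, 300, n)

X = np.column_stack([diameter, flange])
X_tr, X_te, y_tr, y_te = train_test_split(X, mileage, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.1f} km")
```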

IoT-based systemic lupus erythematosus prediction model using hybrid genetic algorithm integrated with ANN

  • Edison Prabhu K; Surendran D
    • ETRI Journal / v.45 no.4 / pp.594-602 / 2023
  • The Internet of Things (IoT) is commonly employed to detect different kinds of diseases in the health sector. Systemic lupus erythematosus (SLE) is an autoimmune illness that occurs when the body's immune system attacks its own connective tissues and organs. Because of the complicated interconnections between illness-trigger exposure levels across time, humans have trouble predicting SLE symptom severity levels. To address this issue, an effective automated machine learning model that takes in IoT data was created to forecast SLE symptoms. IoT has several advantages in the healthcare industry, including interoperability, information exchange, machine-to-machine networking, and data transmission. An SLE symptom-predicting machine learning model was designed by integrating the hybrid marine predator algorithm and atom search optimization with an artificial neural network. The network is trained on the Gene Expression Omnibus dataset, and patient data are then used as input to predict symptoms. The experimental results demonstrate that the proposed model's accuracy is higher than that of state-of-the-art prediction models, at approximately 99.70%.
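
The paper's hybrid marine predator / atom search optimizer is not reproduced here; the sketch below only illustrates the general idea of training a small neural network's weights with an evolutionary search, on toy data standing in for preprocessed GEO features.

```python
# Hedged sketch: metaheuristic (evolutionary) optimization of a tiny
# one-hidden-layer network's weights. This is a generic stand-in, not
# the paper's marine predator / atom search algorithms.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for preprocessed features.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

N_HIDDEN = 6
N_W = 8 * N_HIDDEN + N_HIDDEN          # flat weight vector, no biases

def forward(w, X):
    W1 = w[: 8 * N_HIDDEN].reshape(8, N_HIDDEN)
    W2 = w[8 * N_HIDDEN:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # higher is better

pop = rng.normal(size=(40, N_W))                # candidate weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]       # keep the 10 best
    # Recombine random elite pairs and add Gaussian mutation noise.
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    pop = parents.mean(axis=1) + 0.1 * rng.normal(size=(40, N_W))
    pop[:10] = elite                            # elitism

best = pop[np.argmax([fitness(w) for w in pop])]
print("training accuracy:", np.mean((forward(best, X) > 0.5) == y))
```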

Semi-supervised regression based on support vector machine

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.25 no.2 / pp.447-454 / 2014
  • In many practical machine learning and data mining applications, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Semi-supervised learning algorithms have therefore attracted much attention. However, previous research mainly focuses on classification problems. In this paper, a semi-supervised regression method based on the support vector regression (SVR) formulation is proposed. The estimator is easily obtained via the dual formulation of the optimization problem. Experimental results with simulated and real data suggest superior performance of the proposed method compared with standard SVR.
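
The paper derives its estimator from the dual formulation, which is not reproduced here; as a rough stand-in, the sketch below wraps scikit-learn's SVR in a simple self-training loop to convey the semi-supervised regression idea.

```python
# Hedged sketch: self-training around SVR, a simple proxy for
# semi-supervised regression (not the paper's dual-formulation estimator).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)

labeled = rng.choice(300, size=30, replace=False)    # few labeled points
unlabeled = np.setdiff1d(np.arange(300), labeled)

X_lab, y_lab = X[labeled], y[labeled]
model = SVR(kernel="rbf", C=10.0).fit(X_lab, y_lab)

for _ in range(5):   # iteratively pseudo-label the unlabeled pool
    pseudo = model.predict(X[unlabeled])
    X_aug = np.vstack([X_lab, X[unlabeled]])
    y_aug = np.concatenate([y_lab, pseudo])
    model = SVR(kernel="rbf", C=10.0).fit(X_aug, y_aug)

mse = np.mean((model.predict(X) - np.sin(X).ravel()) ** 2)
print(f"MSE against the true function: {mse:.4f}")
```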

Predicting Surgical Complications in Adult Patients Undergoing Anterior Cervical Discectomy and Fusion Using Machine Learning

  • Arvind, Varun; Kim, Jun S.; Oermann, Eric K.; Kaji, Deepak; Cho, Samuel K.
    • Neurospine / v.15 no.4 / pp.329-337 / 2018
  • Objective: Machine learning algorithms excel at leveraging big data to identify complex patterns that can aid clinical decision-making. The objective of this study is to demonstrate the performance of machine learning models in predicting postoperative complications following anterior cervical discectomy and fusion (ACDF). Methods: Artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), and random forest decision tree (RF) models were trained on a multicenter dataset of patients undergoing ACDF to predict surgical complications from readily available patient data. After training, these models were compared with the predictive capability of the American Society of Anesthesiologists (ASA) physical status classification. Results: A total of 20,879 patients were identified as having undergone ACDF. After applying exclusion criteria, patients were divided into training (14,615 patients) and testing (6,264 patients) datasets. ANN and LR consistently outperformed ASA physical status classification in predicting every complication (p < 0.05). The ANN outperformed LR in predicting venous thromboembolism, wound complication, and mortality (p < 0.05). The SVM and RF models were no better than chance at predicting any of the postoperative complications (p < 0.05). Conclusion: ANN and LR algorithms outperform ASA physical status classification for predicting individual postoperative complications. Additionally, neural networks have greater sensitivity than LR when predicting mortality and wound complications. As medical datasets grow, training machine learning models on them promises to improve risk prognostication, and their capacity for continuous learning makes them excellent tools in complex clinical scenarios.
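
A minimal sketch of the four-way model comparison by AUC follows, using scikit-learn counterparts of the paper's ANN, LR, SVM, and RF models on synthetic, class-imbalanced data; the hyperparameters and features are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch: benchmarking ANN / LR / SVM / RF by AUC on a synthetic
# rare-complication task standing in for the ACDF cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)   # class 1 = rare complication
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```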

Machine learning application for predicting the strawberry harvesting time

  • Yang, Mi-Hye; Nam, Won-Ho; Kim, Taegon; Lee, Kwanho; Kim, Younghwa
    • Korean Journal of Agricultural Science / v.46 no.2 / pp.381-393 / 2019
  • A smart farm is a system that combines information and communication technology (ICT), the Internet of Things (IoT), and agricultural technology to let a farm operate with minimal labor and to automatically control the greenhouse environment. Machine learning based on recent data-driven techniques has emerged alongside big data technologies and high-performance computing, creating opportunities to quantify data-intensive processes in agricultural operating environments. This paper presents research on applying machine learning technology to diagnose the growth status of crops and predict the harvest time of strawberries in a greenhouse using image processing techniques. To classify the growth stages of the strawberries, we used object detection and inference with a machine learning model based on deep neural networks and TensorFlow. Classification accuracy was compared across training data volumes and training epochs. With 200 training images and 8,000 training steps, the model achieved over 90% classification accuracy. Strawberry maturity could be detected and classified with over 90% accuracy at the mature and overmature stages. The experimental results are promising and show that this approach can be applied to develop a machine learning model for predicting the strawberry harvesting time, providing key decision-support information to both farmers and policy makers about optimal harvest times and harvest planning.
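
The paper used an object-detection pipeline built on TensorFlow; the hedged sketch below substitutes a small Keras CNN classifier over three assumed maturity stages, with random arrays standing in for greenhouse images.

```python
# Hedged sketch: a small TensorFlow/Keras CNN classifying strawberry
# images into assumed maturity stages. Real greenhouse photos would
# replace the dummy arrays; the paper itself used object detection.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 3    # assumed stages: immature / mature / overmature
images = np.random.rand(200, 64, 64, 3).astype("float32")   # dummy photos
labels = np.random.randint(0, NUM_CLASSES, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=3, verbose=0)
print("training accuracy:", model.evaluate(images, labels, verbose=0)[1])
```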

Performance Comparison of Machine Learning Models for Grid-Based Flood Risk Mapping - Focusing on the Case of Typhoon Chaba in 2016 - (격자 기반 침수위험지도 작성을 위한 기계학습 모델별 성능 비교 연구 - 2016 태풍 차바 사례를 중심으로 -)

  • Jihye Han; Changjae Kwak; Kuyoon Kim; Miran Lee
    • Korean Journal of Remote Sensing / v.39 no.5_2 / pp.771-783 / 2023
  • This study compares the performance of machine learning models for preparing a grid-based disaster risk map of the flooding in Jung-gu, Ulsan, caused by Typhoon Chaba in 2016. Dynamic data such as rainfall and river height, and static data such as building, population, and land cover data, were used to conduct a flood risk analysis. The data were constructed as 10 m grid data based on the national point number, and a sample dataset was built using the risk value calculated for each grid as the dependent variable and the values of five influencing factors as independent variables. The total number of samples is 15,910, and training, validation, and test datasets were randomly extracted at a 6:2:2 ratio to build the machine learning models. Random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN) techniques were used, and prediction accuracy was highest for SVM (91.05%), followed by RF (83.08%) and KNN (76.52%). Deriving the priority of the influencing factors through the RF model confirmed that rainfall and river water level greatly influence the risk.
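
A minimal sketch of the described workflow, the 6:2:2 split, the RF/SVM/KNN comparison, and reading factor importances off the RF, follows; the synthetic features merely stand in for the five influencing factors.

```python
# Hedged sketch: 6:2:2 split and RF/SVM/KNN comparison on placeholder
# data shaped like the abstract's 15,910 grid samples with 5 factors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=15910, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)

# 6:2:2 split: carve off 40%, then halve it into validation and test.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4,
                                              random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=0)

for name, m in [("RF", RandomForestClassifier(random_state=0)),
                ("SVM", SVC(random_state=0)),
                ("KNN", KNeighborsClassifier())]:
    m.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {m.score(X_te, y_te):.4f}")

# Factor priority via the RF, as in the abstract's final step.
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("factor importances:", rf.feature_importances_.round(3))
```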

Landslide susceptibility assessment using feature selection-based machine learning models

  • Liu, Lei-Lei; Yang, Can; Wang, Xiao-Mi
    • Geomechanics and Engineering / v.25 no.1 / pp.1-16 / 2021
  • Machine learning models have been widely used for landslide susceptibility assessment (LSA) in recent years. The large number of inputs, or conditioning factors, for these models, however, can reduce computational efficiency and increase the difficulty of collecting data. Feature selection is a good tool to address this problem by selecting the most important features among all factors to reduce the size of the input variables. However, two important questions need to be answered: (1) how do feature selection methods affect the performance of machine learning models? and (2) which feature selection method is the most suitable for a given machine learning model? This paper aims to address these two questions by comparing the predictive performance of 13 feature selection-based machine learning (FS-ML) models and 5 ordinary machine learning models on LSA. First, five commonly used machine learning models (i.e., logistic regression, support vector machine, artificial neural network, Gaussian process, and random forest) and six typical feature selection methods from the literature are adopted to constitute the proposed models. Then, fifteen conditioning factors are chosen as input variables and 1,017 recorded landslides are used as data. Next, the feature selection methods are used to rank the importance of the conditioning factors and create feature subsets, from which 13 FS-ML models are constructed. For each machine learning model, the best optimized FS-ML model is selected according to the area under the curve (AUC) value. Finally, five optimal FS-ML models are obtained and applied to the LSA of the study area. The predictive abilities of the FS-ML models on LSA are verified and compared through the receiver operating characteristic (ROC) curve and statistical indicators such as sensitivity, specificity, and accuracy. The results showed that different feature selection methods have different effects on the performance of LSA machine learning models. FS-ML models generally outperform the ordinary machine learning models. The best FS-ML model is the recursive feature elimination (RFE)-optimized RF, and RFE is an optimal method for feature selection.
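
The best-performing combination named above, RFE-optimized RF, can be sketched directly with scikit-learn; the subset size, sample data, and AUC evaluation below are illustrative assumptions.

```python
# Hedged sketch: recursive feature elimination (RFE) wrapped around a
# random forest, the pairing the paper found best. Placeholder data
# stand in for the 1,017 landslides and 15 conditioning factors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1017, n_features=15, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=8).fit(X_tr, y_tr)  # assumed size

kept = selector.get_support(indices=True)
model = rf.fit(X_tr[:, kept], y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te[:, kept])[:, 1])
print(f"kept factors: {list(kept)}, AUC = {auc:.3f}")
```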

Design of Anomaly Detection System Based on Big Data in Internet of Things (빅데이터 기반의 IoT 이상 장애 탐지 시스템 설계)

  • Na, Sung Il; Kim, Hyoung Joong
    • Journal of Digital Contents Society / v.19 no.2 / pp.377-383 / 2018
  • As smart environments emerge, the Internet of Things (IoT) produces a wide variety of data. Collected IoT data serve as important evidence for judging a system's status. It is therefore important to monitor sensor anomalies in real time and to detect anomalous data. However, because of the variety of data structures and protocols, the IoT data must be converted into a normalized data structure for anomaly detection; doing so improves both the quality of the analysis data and the quality of the service. In this paper, we propose a big data-based anomaly detection system for collected sensor data. The proposed system is applied to ensure anomaly detection while maintaining data quality. In addition, we applied a support vector machine model for anomaly detection on time-series data. As a result, machine learning on the preprocessed data was able to accurately detect and predict anomalies.
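
The abstract specifies an SVM over time-series data without further detail; the sketch below assumes the common one-class SVM variant over sliding windows, with an injected fault for illustration.

```python
# Hedged sketch: a one-class SVM flagging anomalous windows in a sensor
# time series. The one-class variant, window size, and nu are
# assumptions, not the paper's stated configuration.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)
series[300:305] += 3.0          # injected fault for illustration

WIN = 10
windows = np.array([series[i:i + WIN] for i in range(len(series) - WIN)])

# Train on the clean leading portion only, then score the whole stream.
clf = OneClassSVM(kernel="rbf", nu=0.01).fit(windows[:250])
flags = clf.predict(windows)    # -1 marks an anomalous window
print("anomalous windows start near:", np.where(flags == -1)[0][:5])
```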