Title/Summary/Keyword: machine data

Search Results: 6,279

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.2, pp.742-756, 2022
  • The rise of the internet and digital devices in the fourth industrial revolution era has produced a flood of information. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few of the devices that generate massive amounts of data in our daily lives. Machine learning has been considered an approach to recognizing patterns in data in many areas, providing convenience to sectors including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location to perform model training. This classical model has caused data owners to worry about the risks of transferring private information, because traditional machine learning requires pushing their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new model known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients while keeping the data owners' private information secure. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten images while protecting sensitive data (a minimal sketch of the averaging step follows below). Moreover, we compare the centralized machine learning model with federated averaging. The results show that the centralized machine learning model outperforms federated learning in terms of accuracy, but the classical model carries additional risk, such as privacy concerns, because the data is stored in a central data center. The MNIST dataset was used in this experiment.
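
A minimal sketch of the federated averaging (FedAvg) aggregation step described above, assuming each client returns locally trained weights as NumPy arrays; the variable names and shard sizes are illustrative, not taken from the paper's implementation:

```python
# FedAvg aggregation: weight each client's parameters by its local
# dataset size. `client_weights` and `client_sizes` are illustrative names.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    averaged = []
    for layer in range(len(client_weights[0])):
        # Weighted sum of this layer's parameters across all clients.
        acc = np.zeros_like(client_weights[0][layer])
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Example: three clients with two-layer models (illustrative shard sizes).
clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
sizes = [600, 1200, 1800]
global_weights = federated_average(clients, sizes)
```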

Machine Learning Perspective Gene Optimization for Efficient Induction Machine Design

  • Selvam, Ponmurugan Panneer; Narayanan, Rengarajan
    • Journal of Electrical Engineering and Technology, v.13 no.3, pp.1202-1211, 2018
  • In this paper, a Machine Learning based Gene Optimization (ML-GO) technique is introduced to improve induction machine operation efficiency and torque. An Optimized Genetic Algorithm (OGA) is used to select the optimal induction machine data. In OGA, selection, crossover, and mutation are carried out to find the optimal electrical machine data for induction machine design. Initially, a large set of induction machine data records is given as input to OGA. Then, a fitness value is calculated for each record to check whether the criterion is satisfied, using a fitness function (i.e., an objective function combining the starting-to-full-load torque ratio, rotor current, power factor, and the maximum flux density of the stator and rotor teeth). When the criterion is not satisfied, an annealed selection approach in OGA moves the selection criteria from exploration to exploitation to attain the optimal solution (i.e., efficient machine data). After selection, two-point crossover selects two crossover points within a chromosome (i.e., the design variables) and swaps the two parents' chromosome segments to produce two new offspring. Finally, Adaptive Levy Mutation selects a value at random and mutates it to obtain the optimal value. This process iterates until the optimal value for the induction machine design is found (a compact sketch of this loop follows below). The ML-GO technique is evaluated experimentally on performance metrics such as torque, rotor current, induction machine operation efficiency, and rotor power factor, and is compared to state-of-the-art works.
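
A compact sketch of the selection / two-point crossover / mutation loop the abstract describes; the fitness function and the heavy-tailed "Levy-like" mutation are illustrative stand-ins for the paper's multi-criteria objective and Adaptive Levy Mutation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Placeholder objective; the paper combines torque ratio, rotor
    # current, power factor, and flux-density terms.
    return -np.sum((x - 0.5) ** 2)

def two_point_crossover(a, b):
    # Pick two distinct cut points and swap the segment between them.
    i, j = sorted(rng.choice(len(a), size=2, replace=False))
    child1, child2 = a.copy(), b.copy()
    child1[i:j], child2[i:j] = b[i:j], a[i:j]
    return child1, child2

def levy_like_mutation(x, rate=0.1):
    # Cauchy draws give occasional large jumps, a common Levy-flight proxy.
    mask = rng.random(len(x)) < rate
    x = x.copy()
    x[mask] += 0.05 * rng.standard_cauchy(mask.sum())
    return np.clip(x, 0.0, 1.0)

pop = rng.random((20, 8))                 # 20 candidate designs, 8 variables
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.choice(10, 2, replace=False)]
        for child in two_point_crossover(a, b):
            children.append(levy_like_mutation(child))
    pop = np.array(children[:len(pop)])
best = pop[np.argmax([fitness(ind) for ind in pop])]
```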

Analysis on Trends of No-Code Machine Learning Tools

  • Lee, Yo-Seob; Moon, Phil-Joo
    • International Journal of Advanced Culture Technology, v.10 no.4, pp.412-419, 2022
  • The amount of digital text data is growing exponentially, and many machine learning solutions are being used to monitor and manage this data. Artificial intelligence and machine learning are used in many areas of our daily lives, but the underlying processes and concepts are not easy for most people to understand. At a time when many experts are needed to run a machine learning solution, no-code machine learning tools are a good alternative. No-code machine learning tools are platforms that enable machine learning functions to be performed without engineers or developers. The latest no-code machine learning tools run in the browser, so no additional software needs to be installed, and their simple GUI interfaces make them easy to use. Using these platforms can save a great deal of money and time, because they require less skill and less code. No-code machine learning tools make artificial intelligence and machine learning easier to understand. In this paper, we examine no-code machine learning tools and compare their features.

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service

  • Mun, Jong Hyeok; Choi, Jong Sun; Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering, v.9 no.1, pp.25-32, 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become complicated as user requirements for the service diversify, and data usage increases. There are therefore technical limitations to maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. To utilize such machine-learning-based models in a context-aware system, a management process that periodically injects training data is required. Previous studies on machine-learning-based context-aware systems considered a series of management processes, such as the generation and provision of training data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method for machine learning models that extends the machine-learning-based context-aware system. The proposed method defines a training data generating model that can reflect the requirements of the machine learning models and generates the training data for each model (a minimal sketch of this idea follows below). In the experiment, the training data generating model is defined based on the training data generating schema of a cardiac status analysis model for the elderly in a health status notification service, and the training data is generated by applying the model in a real software environment. We also compare the accuracy obtained by training the machine learning model on the generated data, to verify the validity of the generated training data.
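
A minimal sketch of the schema-driven idea, assuming each machine learning model registers a schema (feature fields plus a labeling rule) and one raw context stream is mapped into per-model training rows; the field names and the heart-rate rule are hypothetical examples, not the paper's schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrainingDataSchema:
    # One schema per machine learning model served by the system.
    model_name: str
    feature_fields: list
    label_fn: Callable[[dict], str]

def generate_training_data(records, schemas):
    # Map one raw context stream into per-model (features, label) rows.
    datasets = {s.model_name: [] for s in schemas}
    for record in records:
        for s in schemas:
            features = [record[f] for f in s.feature_fields]
            datasets[s.model_name].append((features, s.label_fn(record)))
    return datasets

# Hypothetical cardiac-status schema for the health notification service.
cardiac = TrainingDataSchema(
    model_name="cardiac_status",
    feature_fields=["heart_rate", "age", "activity_level"],
    label_fn=lambda r: "abnormal" if r["heart_rate"] > 120 else "normal",
)
stream = [{"heart_rate": 85, "age": 71, "activity_level": 2},
          {"heart_rate": 132, "age": 68, "activity_level": 0}]
print(generate_training_data(stream, [cardiac]))
```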

Generation of Freeform Surface Using Measured Data on the Machine Tool

  • 이세복
    • Proceedings of the Korean Society of Machine Tool Engineers Conference, 1998.10a, pp.13-18, 1998
  • The assessment of a machined surface is difficult because a freeform surface must be evaluated by surface fairness as well as dimensional accuracy. In this paper, a methodology for freeform surface generation using data measured on the machine tool is presented. The reliability of the measured point data is ensured by compensating for measurement error. The compensated data are formulated through non-uniform B-spline surface modeling. To improve surface fairness, the generated model is smoothed by parameterization (a small sketch of this pipeline follows below). The validity and usefulness of the proposed method are examined through computer simulation and experiments on the machine tool.
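
A small sketch of the pipeline under stated assumptions: measured points are compensated for a known probe offset, then a smoothing spline surface is fitted. SciPy's SmoothBivariateSpline stands in here for the paper's non-uniform B-spline modeling, and the offset value is illustrative:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(1)
# Scattered on-machine measurements of a freeform surface (synthetic).
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
z_measured = np.sin(x) * np.cos(y) + rng.normal(0, 0.02, 200)

probe_radius = 0.05                      # illustrative compensation term
z_compensated = z_measured - probe_radius

# s > 0 trades dimensional fidelity for surface fairness (smoothness).
surface = SmoothBivariateSpline(x, y, z_compensated, kx=3, ky=3, s=0.5)
print(surface.ev(5.0, 5.0))              # evaluate the fitted surface
```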

Two-Phase Approach for Machine-Part Grouping Using Non-binary Production Data-Based Part-Machine Incidence Matrix

  • Won, You-Dong; Won, You-Kyung
    • Korean Management Science Review, v.24 no.1, pp.91-111, 2007
  • In this paper, an effective two-phase approach adopting a modified p-median mathematical model is proposed for grouping machines and parts in cellular manufacturing (CM). Unlike conventional methods, which allow machines and parts to be improperly assigned to cells and families, the proposed approach seeks a proper block-diagonal solution in which all machines and parts are assigned to their most associated cells and families in terms of actual machine processing and part moves. Phase 1 uses the modified p-median formulation with a new inter-machine similarity coefficient, based on a non-binary production-data-based part-machine incidence matrix (PMIM) that reflects both the operation sequences and the production volumes of the parts, to find machine cells (a toy sketch of these ingredients follows below). Phase 2 applies an iterative reassignment procedure that minimizes inter-cell part moves and maximizes within-cell machine utilization by reassigning improperly assigned machines and parts to their most associated cells and families. Computational experience with data sets available in the literature shows that the proposed approach yields good-quality proper block-diagonal solutions.
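
A toy sketch of Phase 1's ingredients: an inter-machine similarity computed from a non-binary PMIM whose entries are production volumes, followed by a greedy stand-in for the p-median cell selection. The similarity formula is a generic volume-overlap measure, not the paper's exact coefficient, and the data is illustrative:

```python
import numpy as np

# Rows = machines, columns = parts, entries = production volume handled.
pmim = np.array([[30, 0, 25, 0],
                 [28, 0, 30, 0],
                 [0, 40, 0, 15],
                 [0, 35, 0, 20]], dtype=float)

def similarity(a, b):
    # Shared volume over total volume (a Jaccard-like, volume-weighted measure).
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

m = len(pmim)
sim = np.array([[similarity(pmim[i], pmim[j]) for j in range(m)]
                for i in range(m)])

# Greedy p-median stand-in: seed with the most central machine, then
# add the machine least similar to the chosen medians.
p = 2
medians = [int(np.argmax(sim.sum(axis=1)))]
while len(medians) < p:
    cand = sim[:, medians].max(axis=1)
    cand[medians] = np.inf
    medians.append(int(np.argmin(cand)))

# Assign every machine to its most similar median machine (its cell).
assignment = {i: medians[int(np.argmax(sim[i, medians]))] for i in range(m)}
print(assignment)   # e.g. {0: 0, 1: 0, 2: 2, 3: 2}
```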

Kernel Machine for Poisson Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.18 no.3, pp.767-772, 2007
  • A kernel machine is proposed as an estimation procedure for linear and nonlinear Poisson regression, based on the penalized negative log-likelihood. The proposed kernel machine provides an estimate of the mean function of the response variable, where the canonical parameter is related to the input vector in a nonlinear form (a minimal sketch follows below). A generalized cross-validation (GCV) function of MSE type is introduced to determine the hyperparameters that affect the performance of the machine. Experimental results are presented that indicate the performance of the proposed machine.
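
A minimal sketch of kernel Poisson regression via the penalized negative log-likelihood, with the canonical (log) link modeled as a kernel expansion and the coefficients fitted by gradient descent; the kernel width, penalty, and step size are illustrative rather than GCV-tuned:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (100, 1))
y = rng.poisson(np.exp(1.0 + np.sin(2 * X[:, 0])))   # synthetic counts

def rbf_kernel(A, B, gamma=1.0):
    # Squared Euclidean distances between all row pairs of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
n, lam, lr = len(X), 1e-2, 5e-3
alpha = np.zeros(n)
for _ in range(3000):
    eta = K @ alpha                      # canonical parameter (log mean)
    mu = np.exp(eta)                     # Poisson mean
    # Gradient of the penalized negative log-likelihood w.r.t. alpha.
    grad = K @ (mu - y) / n + 2 * lam * (K @ alpha)
    alpha -= lr * grad

print(np.exp(K @ alpha)[:5])             # fitted means
print(y[:5])                             # observed counts
```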

Minimizing Machine-to-Machine Data Losses on the Offshore Moored Buoy with Software Approach

  • Young, Tan She; Park, Soo-Hong
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.8 no.7, pp.1003-1010, 2013
  • In this paper, TCP/IP-based Machine-to-Machine (M2M) communication over a CDMA/GSM network is used for data transmission. This communication method is widely used by offshore moored buoys to transmit data back to the system server. Due to weather and signal coverage, TCP/IP M2M communication often experiences transmission failures, causing data losses at the server. Data losses are undesirable, especially for meteorological and oceanographic analysis. This paper discusses a software approach that minimizes M2M data losses by handling transmission failures and re-attempting transmission to recover the data (a sketch of this buffering-and-retry idea follows below). The implementation was tested for its performance on a meteorological buoy placed offshore.
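
A sketch of the software-side recovery idea: buffer each observation locally, attempt a TCP send, and re-attempt unsent records on the next cycle so a dropped CDMA/GSM link loses no data. The host, port, and record format are placeholders, not the buoy's actual protocol:

```python
import socket

pending = []   # unacknowledged records survive here between attempts

def try_send(record, host="203.0.113.10", port=5000, timeout=10):
    # One transmission attempt; any network error counts as a failure.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall((record + "\n").encode())
        return True
    except OSError:
        return False   # link down or server unreachable

def transmit(new_record):
    pending.append(new_record)
    # Re-attempt oldest records first so the server sees data in order;
    # stop at the first failure and retry on the next cycle.
    while pending and try_send(pending[0]):
        pending.pop(0)

transmit("2013-07-01T00:00Z,wind=12.3,wave=1.8")  # illustrative record
```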

Sensor Data Collection & Refining System for Machine Learning-Based Cloud

  • Hwang, Chi-Gon; Yoon, Chang-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.2, pp.165-170, 2021
  • Machine learning has recently been applied to research in most areas. This is because machine learning results are not fixed in advance: learning from input data creates the objective function, which enables decisions on new data. In addition, the growth of accumulated data affects the accuracy of machine learning results, so the collected data is an important factor in machine learning. The proposed system is a convergence of a cloud system and local fog systems for service delivery. The cloud system provides machine learning and the infrastructure for services, while the fog system sits between the cloud and the user to collect and refine data. The data for this application is the sensing data generated by smart devices. The machine learning techniques applied in this system are the SVM algorithm for classification and the RNN algorithm for status recognition (a small sketch of the fog/cloud split follows below).
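
A small sketch of the split the abstract describes, with a fog-side refinement step (dropping out-of-range readings) followed by cloud-side SVM classification; the sensor ranges and labels are invented for illustration, and the RNN status-recognition half is omitted:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
raw = rng.normal([22.0, 45.0], [3.0, 12.0], size=(300, 2))  # temp, humidity
labels = (raw[:, 0] > 23).astype(int)                       # e.g. "warm" class

# Fog layer: refine by removing physically implausible humidity readings.
valid = (raw[:, 1] >= 0) & (raw[:, 1] <= 100)
X, y = raw[valid], labels[valid]

# Cloud layer: scale the refined data and train the SVM classifier.
X_scaled = StandardScaler().fit_transform(X)
clf = SVC(kernel="rbf").fit(X_scaled, y)
print(clf.score(X_scaled, y))
```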

Estimating GARCH Models Using Kernel Machine Learning

  • Hwang, Chang-Ha; Shin, Sa-Im
    • Journal of the Korean Data and Information Science Society, v.21 no.3, pp.419-425, 2010
  • Kernel machine learning is gaining popularity for analyzing large or high-dimensional nonlinear data. We use this technique to estimate a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated using maximum likelihood (ML) procedures under the assumption that the data are normally distributed. In this paper, we show that GARCH models can be estimated using kernel machine learning, and that the kernel machine has a higher predictive ability than ML methods and the support vector machine when estimating the volatility of financial time series data with fat tails (a sketch of the idea follows below).
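
A sketch of the kernel-machine idea applied to volatility: rather than a parametric maximum-likelihood GARCH fit, today's squared return is regressed on lagged squared returns with kernel ridge regression as a nonparametric proxy for the conditional variance. The data is simulated and the hyperparameters are illustrative rather than cross-validated:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
n, omega, a, b = 1500, 0.05, 0.10, 0.85
r, sig2 = np.zeros(n), np.full(n, omega / (1 - a - b))
for t in range(1, n):                       # simulate a GARCH(1,1) path
    sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_t(df=5)   # fat-tailed shocks

# Features: the last `lags` squared returns; target: today's squared return.
lags = 5
X = np.column_stack([r[lags - k - 1:n - k - 1] ** 2 for k in range(lags)])
y = r[lags:] ** 2

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X, y)
# One-step volatility forecast from the most recent feature row.
vol_forecast = np.sqrt(np.clip(model.predict(X[-1:]), 0, None))
print(vol_forecast)
```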