• Title/Summary/Keyword: Federated Model

Time Series Crime Prediction Using a Federated Machine Learning Model

  • Salam, Mustafa Abdul; Taha, Sanaa; Ramadan, Mohamed
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.119-130 / 2022
  • Crime is a common social problem that affects quality of life. As the number of crimes increases, it becomes necessary to build a model that predicts how many crimes may occur in a given period, identifies the characteristics of a person who may commit a particular crime, and identifies places where a particular crime may occur. Data privacy is the main challenge organizations face when building this type of predictive model. Federated learning (FL) is a promising approach that overcomes data security and privacy challenges, as it enables organizations to build a machine learning model from distributed datasets without sharing raw data or violating data privacy. In this paper, a federated long short-term memory (LSTM) model is proposed and compared with a traditional LSTM model. The proposed model is developed using TensorFlow Federated (TFF) and the Keras API to predict the number of crimes, and it is applied to the Boston crime dataset. The model's parameters are fine-tuned to obtain minimum loss and maximum accuracy. Compared with the traditional LSTM model, the federated LSTM model achieved lower loss and better accuracy, at the cost of a longer training time.
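
  A minimal sketch of the kind of setup the abstract describes, wrapping a Keras LSTM in TensorFlow Federated and training it with federated averaging. This is not the authors' code: the look-back window, the toy per-district series standing in for the Boston data, and the optimizer settings are illustrative assumptions, and the API shown follows older TFF releases (tff.learning.build_federated_averaging_process); newer releases expose tff.learning.algorithms.build_weighted_fed_avg instead.

import tensorflow as tf
import tensorflow_federated as tff

WINDOW = 12  # hypothetical look-back window of weekly crime counts

def make_client_dataset(series):
    """Turn one client's crime-count series into (window, next value) pairs."""
    x = [series[i:i + WINDOW] for i in range(len(series) - WINDOW)]
    y = [series[i + WINDOW] for i in range(len(series) - WINDOW)]
    return tf.data.Dataset.from_tensor_slices(
        (tf.reshape(tf.constant(x, tf.float32), (-1, WINDOW, 1)),
         tf.constant(y, tf.float32))).batch(8)

# Toy per-client series standing in for per-district Boston crime counts.
federated_train_data = [make_client_dataset([float(i % 7) for i in range(60)])
                        for _ in range(3)]
element_spec = federated_train_data[0].element_spec

def model_fn():
    keras_model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
        tf.keras.layers.Dense(1)])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=element_spec,
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.MeanAbsoluteError()])

process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(1.0))

state = process.initialize()
for round_num in range(5):
    state, metrics = process.next(state, federated_train_data)
    print(round_num, metrics)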

Performance Analysis of Building Change Detection Algorithm (연합학습 기반 자치구별 건물 변화탐지 알고리즘 성능 분석)

  • Kim Younghyun
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.233-244 / 2023
  • Although artificial intelligence and machine learning technologies have been used in various fields, centralized data collection and processing raises personal-information protection problems. Federated learning has been proposed to solve this problem. Federated learning is a process in which clients that own data in a distributed environment each train a model on their own data, and an artificial intelligence model is created collectively by centrally collecting only the training results. Unlike the centralized method, federated learning has the advantage that the clients' data never has to be sent to the central server. In this paper, we quantitatively present the performance improvement obtained when federated learning is applied to building change detection training data. The results confirm that performance with federated learning was about 29% higher on average than without it. As future work, we plan to propose a method that effectively reduces the number of federated learning rounds in order to improve convergence time.
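
  The sketch below shows only the generic server-side weighted averaging at the heart of federated averaging, to illustrate the process the abstract describes (local training per client, central collection of results); it is not the paper's change-detection pipeline, and the client weights and sample counts are made up.

import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client weight lists.

    client_weights: list of lists of np.ndarray (one list per client)
    client_sizes:   number of local training samples per client
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        averaged.append(layer_sum)
    return averaged

# Example: three districts with toy 2-layer "models".
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]
global_weights = fed_avg(clients, sizes)
print(global_weights[0])  # dominated by the largest district (k = 3)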

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has accompanied the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few of the devices that generate massive amounts of data in our daily lives. Machine learning has been adopted in many areas to recognize patterns in data and to support sectors including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location for model training. This classical model makes data owners worry about the risks of transferring private information, because their data must be pushed to the cloud to train the model. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new model known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients while keeping the data owner's private information secure. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting the sensitive data. Moreover, we compare the centralized machine learning model with federated averaging. The results show that the centralized model outperforms federated learning in terms of accuracy, but the classical model carries additional risks, such as privacy concerns, because the data is stored in a central data center. The MNIST dataset was used in this experiment.
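
  A rough sketch of the comparison described above: the same small Keras network trained once centrally on MNIST and once with a simulated federated-averaging loop over IID client shards. The client count, number of rounds, and local epochs are illustrative assumptions, not the paper's settings.

import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# (a) Centralized baseline: all data in one place.
central = build_model()
central.fit(x_train, y_train, epochs=1, verbose=0)

# (b) Simulated federated averaging over 5 IID client shards.
shards_x = np.array_split(x_train, 5)
shards_y = np.array_split(y_train, 5)
global_model = build_model()
for _ in range(3):  # communication rounds
    client_weights = []
    for cx, cy in zip(shards_x, shards_y):
        client = build_model()
        client.set_weights(global_model.get_weights())
        client.fit(cx, cy, epochs=1, verbose=0)
        client_weights.append(client.get_weights())
    # Equal-sized shards, so a plain mean equals the FedAvg weighted mean.
    global_model.set_weights([np.mean(layer, axis=0)
                              for layer in zip(*client_weights)])

print("centralized:", central.evaluate(x_test, y_test, verbose=0)[1])
print("federated  :", global_model.evaluate(x_test, y_test, verbose=0)[1])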

A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Wu, Shengbin; Wang, Yibai
    • Journal of Information Processing Systems / v.17 no.3 / pp.441-452 / 2021
  • Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, the goal of multi-task learning is to establish models for multiple related tasks simultaneously and to recover the underlying shared structure. However, traditional federated multi-task learning models not only place strict requirements on the data distribution but also demand large amounts of computation and converge slowly, which has hindered their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust the learning of task similarity according to the distribution of the federated nodes' data. The proposed model has a general framework for finding optimal solutions and can handle various data types. Experiments show that our model achieves the best results on different datasets. Notably, it still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a locally optimal solution within a limited number of training iterations.
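
  The abstract does not give the model's exact formulation, so the sketch below shows only a generic rank-regularized multi-task objective in the same spirit: a nuclear-norm penalty on the stacked task weight matrix, minimized by proximal gradient with singular-value soft-thresholding. The data, penalty weight, and step size are illustrative.

import numpy as np

rng = np.random.default_rng(0)
T, n, d = 4, 50, 10                      # tasks, samples per task, features
shared = rng.normal(size=(d, 2))         # true low-rank structure (rank 2)
W_true = shared @ rng.normal(size=(2, T))
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = [Xs[t] @ W_true[:, t] + 0.1 * rng.normal(size=n) for t in range(T)]

def prox_nuclear(W, tau):
    """Soft-threshold the singular values of W (proximal map of tau*||W||_*)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient on: sum_t ||X_t w_t - y_t||^2 / n_t + lam * ||W||_*
W = np.zeros((d, T))
step, lam = 0.01, 0.5
for _ in range(300):
    grad = np.column_stack([2.0 * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / n
                            for t in range(T)])
    W = prox_nuclear(W - step * grad, step * lam)

print("singular values:", np.round(np.linalg.svd(W, compute_uv=False), 3))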

Dynamic Window Adjustment and Model Stability Improvement Algorithm for K-Asynchronous Federated Learning (K-비동기식 연합학습의 동적 윈도우 조절과 모델 안정성 향상 알고리즘)

  • HyoSang Kim; Taejoon Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.4 / pp.21-34 / 2023
  • Federated learning is divided into synchronous and asynchronous federated learning. Asynchronous federated learning has a time advantage over synchronous federated learning, but it still faces several challenges in achieving better performance. In particular, preventing performance degradation on non-IID training datasets, selecting appropriate clients, and managing stale gradient information are important for improving model performance. In this paper, we address K-asynchronous federated learning with non-IID datasets. Unlike the traditional method that uses a static K, we propose an algorithm that adjusts K adaptively, which reduces the learning time. Additionally, we show that model performance improves when a stale-gradient handling method is used. Finally, we apply a method of judging model performance to obtain strong model stability. Experimental results show that the overall algorithm reduces training time, improves model accuracy, and improves model stability.
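
  A heavily simplified simulation of the ingredients named above: the server aggregates as soon as K client gradients arrive, discounts stale gradients, and adapts K. The staleness weight 1/(1+s) and the K-adjustment rule are illustrative assumptions, not the algorithm proposed in the paper.

import heapq
import numpy as np

rng = np.random.default_rng(1)
dim, num_clients = 5, 10
targets = rng.normal(size=(num_clients, dim))   # per-client optima (non-IID-ish)
w = np.zeros(dim)                               # global model
K, lr, version = 3, 0.2, 0

# Pending entries: (arrival_time, version_dispatched, client_id, model_snapshot)
pending = [(rng.exponential(1.0), 0, c, w.copy()) for c in range(num_clients)]
heapq.heapify(pending)

for step in range(200):
    batch = [heapq.heappop(pending) for _ in range(K)]   # wait for K arrivals
    staleness = [version - ver for _, ver, _, _ in batch]
    update = np.zeros(dim)
    for (t, ver, c, snapshot), s in zip(batch, staleness):
        grad = snapshot - targets[c]       # client gradient at its stale model
        update += grad / (1.0 + s)         # staleness-discounted contribution
    w -= lr * update / K
    version += 1
    # Adaptive K (illustrative rule): widen the window when updates arrive
    # fresh, shrink it when they are stale.
    K = min(num_clients, K + 1) if np.mean(staleness) < 1 else max(1, K - 1)
    now = batch[-1][0]
    for _, _, c, _ in batch:               # these clients start a new local step
        heapq.heappush(pending, (now + rng.exponential(1.0), version, c, w.copy()))

print("distance to mean optimum:", np.linalg.norm(w - targets.mean(axis=0)))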

Construction of Incremental Federated Learning System using Flower (Flower을 사용한 점진적 연합학습시스템 구성)

  • Yun-Hee Kang; Myungju Kang
    • Journal of Platform Technology / v.11 no.4 / pp.80-88 / 2023
  • To construct a learning model in the field of artificial intelligence, a dataset must be collected and delivered to the central server where the model is built. Federated learning is a machine learning method that builds a global model collaboratively without transmitting the data located on the client side; it can be used to protect privacy. After a locally trained model is constructed on each client, the parameters of the local models are aggregated centrally to update the global model. In this paper, we describe incremental federated learning, which reuses existing learned parameters to improve federated learning. For this work, we run experiments using the federated learning framework named Flower and evaluate the results with regard to elapsed time and precision when executing optimization algorithms.
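
  A sketch of how the incremental reuse of learned parameters described above might look in Flower: a previously trained model's weights seed the FedAvg strategy of the next run. API names follow Flower 1.x; the model, data, and addresses are toy placeholders, not the paper's setup.

import flwr as fl
import numpy as np
import tensorflow as tf

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(2, activation="softmax")])
    model.compile("sgd", "sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

class ToyClient(fl.client.NumPyClient):
    def __init__(self):
        self.model = build_model()
        self.x = np.random.rand(100, 20).astype("float32")
        self.y = np.random.randint(0, 2, 100)

    def get_parameters(self, config):
        return self.model.get_weights()

    def fit(self, parameters, config):
        self.model.set_weights(parameters)
        self.model.fit(self.x, self.y, epochs=1, verbose=0)
        return self.model.get_weights(), len(self.x), {}

    def evaluate(self, parameters, config):
        self.model.set_weights(parameters)
        loss, acc = self.model.evaluate(self.x, self.y, verbose=0)
        return loss, len(self.x), {"accuracy": acc}

# Incremental step: seed FedAvg with weights saved from an earlier run
# (here a freshly built model stands in for the previously trained one).
previous_weights = build_model().get_weights()
strategy = fl.server.strategy.FedAvg(
    initial_parameters=fl.common.ndarrays_to_parameters(previous_weights))

# On the server machine:
#   fl.server.start_server(server_address="0.0.0.0:8080",
#                          config=fl.server.ServerConfig(num_rounds=3),
#                          strategy=strategy)
# On each client machine:
#   fl.client.start_numpy_client(server_address="127.0.0.1:8080",
#                                client=ToyClient())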

A Survey on Privacy Vulnerabilities through Logit Inversion in Distillation-based Federated Learning (증류 기반 연합 학습에서 로짓 역전을 통한 개인 정보 취약성에 관한 연구)

  • Subin Yun; Yungi Cho; Yunheung Paek
    • Annual Conference of KIPS / 2024.05a / pp.711-714 / 2024
  • In the dynamic landscape of modern machine learning, Federated Learning (FL) has emerged as a compelling paradigm designed to enhance privacy by enabling participants to collaboratively train models without sharing their private data. In particular, distillation-based federated learning methods such as Federated Learning with Model Distillation (FedMD), Federated Gradient Encryption and Model Sharing (FedGEMS), and Differentially Secure Federated Learning (DS-FL) have arisen as approaches aimed at addressing non-IID data challenges. These methods refine the standard FL framework by distilling insights from predictions on a public dataset, securing transmissions through gradient encryption, and applying differential privacy to mask individual contributions. Despite these innovations, our survey identifies persistent vulnerabilities, particularly susceptibility to logit inversion attacks, in which malicious actors reconstruct private data from the shared public predictions. This exploration reveals that even advanced distillation-based federated learning systems harbor significant privacy risks, challenging prevailing assumptions about their security and underscoring the need for continued advances in secure federated learning methodologies.
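
  A generic sketch of the logit-inversion idea the survey examines: given a model and the logits it shared for some private input, an attacker optimizes a synthetic input until the model reproduces those logits. The model architecture, image size, and optimizer settings are illustrative and are not taken from FedMD, FedGEMS, or DS-FL.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10)])                       # outputs raw logits

# Logits the victim shared (here produced from a random "private" image).
private_image = tf.random.uniform((1, 28, 28, 1))
shared_logits = model(private_image)

# Attacker: start from noise and adjust the guess to match the shared logits.
guess = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
opt = tf.keras.optimizers.Adam(0.05)
for _ in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(guess) - shared_logits))
    opt.apply_gradients([(tape.gradient(loss, guess), guess)])

print("logit-matching loss:", float(loss))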

The Study on the Implementation Approach of MLOps on Federated Learning System (연합학습시스템에서의 MLOps 구현 방안 연구)

  • Hong, Seung-hoo; Lee, KangYoon
    • Journal of Internet Computing and Services / v.23 no.3 / pp.97-110 / 2022
  • Federated learning is a learning method capable of training a model without transmitting the training data. The IoT and healthcare fields are sensitive to information leakage because they handle users' personal information, so great attention must be paid to system design; with federated learning, however, the data never leaves the devices where it is collected. Accordingly, many federated learning implementations have been developed, but detailed research on system design for developing and operating systems that use federated learning is still lacking. This study shows that measures for life-cycle management, code version management, model serving, and device monitoring are needed before federated learning can be applied to actual projects and deployed to IoT devices, and it proposes a design for a development environment that addresses these points. The system proposed in this paper supports uninterrupted model serving and includes source code and model version management, device state monitoring, and server-client learning schedule management.
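
  The abstract describes the proposed design only at a high level, so the toy sketch below illustrates just one of its ingredients: a model registry that keeps each federated round's artifact and swaps the serving version without downtime. All names are hypothetical; the paper's system also covers source code versioning, device monitoring, and schedule management.

import threading

class ModelRegistry:
    def __init__(self):
        self._models = {}            # version -> model artifact (any object)
        self._serving_version = None
        self._lock = threading.Lock()

    def register(self, version, model):
        """Store the artifact produced by a finished federated round."""
        with self._lock:
            self._models[version] = model

    def promote(self, version):
        """Atomically switch the serving pointer to `version`."""
        with self._lock:
            if version not in self._models:
                raise KeyError(version)
            self._serving_version = version

    def serving_model(self):
        with self._lock:
            return self._models.get(self._serving_version)

registry = ModelRegistry()
registry.register("round-001", {"weights": [0.1, 0.2]})
registry.promote("round-001")
registry.register("round-002", {"weights": [0.3, 0.1]})
registry.promote("round-002")               # swap without stopping the server
print(registry.serving_model())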

Federated Information Mode-Matched Filters in ACC Environment

  • Kim Yong-Shik; Hong Keum-Shik
    • International Journal of Control, Automation, and Systems / v.3 no.2 / pp.173-182 / 2005
  • In this paper, a target tracking algorithm for tracking maneuvering vehicles is presented. The overall algorithm belongs to the category of interacting multiple-model (IMM) algorithms, used to detect multiple targets using fused information from multiple sensors. First, two kinematic models are derived: a constant-velocity model for linear motions and a constant-speed turn model for curvilinear motions. For the constant-speed turn model, a nonlinear information filter is used in place of the extended Kalman filter. Being algebraically equivalent to the Kalman filter (KF), the information filter is extended to N-sensor distributed dynamic systems. The model-matched filter used in multi-sensor environments takes the form of a federated nonlinear information filter. In multi-sensor environments, the information-based filter is easier to decentralize, initialize, and fuse than a KF-based filter. The structural features and the information-sharing principle of the federated information filter are discussed. The performance of the suggested algorithm is evaluated using Monte Carlo simulation under the two motion patterns.
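
  The paper's federated IMM structure cannot be reconstructed from the abstract, but the sketch below shows the property it relies on: in information form (Y = P^-1, y = Y x), independent sensor contributions simply add, which is what makes the filter easy to decentralize and fuse. The models and noise values are illustrative.

import numpy as np

# Prior state estimate (position, velocity) and covariance.
x_prior = np.array([0.0, 1.0])
P_prior = np.diag([1.0, 1.0])
Y_prior = np.linalg.inv(P_prior)          # information matrix
y_prior = Y_prior @ x_prior               # information vector

# Two sensors, each measuring position with its own noise covariance.
H = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])]
R = [np.array([[0.5]]), np.array([[0.25]])]
z = [np.array([0.9]), np.array([1.1])]

# Fusion: add each sensor's information contribution.
Y_post = Y_prior.copy()
y_post = y_prior.copy()
for Hi, Ri, zi in zip(H, R, z):
    Ri_inv = np.linalg.inv(Ri)
    Y_post += Hi.T @ Ri_inv @ Hi
    y_post += Hi.T @ Ri_inv @ zi

x_post = np.linalg.solve(Y_post, y_post)  # back to state space
print("fused state estimate:", x_post)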

Self-supervised Meta-learning for the Application of Federated Learning on the Medical Domain (연합학습의 의료분야 적용을 위한 자기지도 메타러닝)

  • Kong, Heesan; Kim, Kwangsu
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.27-40 / 2022
  • Medical AI, which has made significant advances lately, plays a vital role in tasks such as assisting clinicians with diagnosis and decision-making. The field of chest X-rays, in particular, is attracting much attention because of its accessibility, its importance for identifying chest diseases, and the current COVID-19 pandemic. However, despite the vast amount of data, there is still a limit to developing an effective AI model because of the lack of labeled data. Research that applies federated learning to chest X-ray data to lessen this difficulty has emerged, but it still has the following limitations: 1) it does not consider the problems that may occur in a non-IID environment, and 2) even in the federated learning setting, clients still have a shortage of labeled data. We propose a method to solve these problems by using a self-supervised learning model as the global model of federated learning. To that end, we investigate self-supervised learning methods suited to federated learning on chest X-ray data and demonstrate the benefits of adopting a self-supervised learning model for federated learning.
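
  A compact toy sketch of the idea proposed above: each client updates the shared model with a self-supervised pretext task on its unlabeled images, and the server averages the results as in federated averaging. Rotation prediction is used here only as a simple stand-in pretext task, and the image size, client count, and rounds are assumptions; the abstract does not specify which self-supervised method the paper adopts.

import numpy as np
import tensorflow as tf

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(32, 32, 1)),
        tf.keras.layers.Dense(64, activation="relu"),     # "encoder"
        tf.keras.layers.Dense(4, activation="softmax")])  # rotation head
    model.compile("adam", "sparse_categorical_crossentropy")
    return model

def rotation_batch(images):
    """Label each image with a random rotation in {0, 1, 2, 3} quarter turns."""
    labels = np.random.randint(0, 4, len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

clients = [np.random.rand(64, 32, 32, 1).astype("float32") for _ in range(3)]
global_model = build_model()
for _ in range(2):                                    # communication rounds
    collected = []
    for images in clients:
        local = build_model()
        local.set_weights(global_model.get_weights())
        x, y = rotation_batch(images)
        local.fit(x, y, epochs=1, verbose=0)          # update on unlabeled data
        collected.append(local.get_weights())
    global_model.set_weights([np.mean(w, axis=0) for w in zip(*collected)])
# The aggregated encoder would then be fine-tuned on the few labeled X-rays.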