• Title/Summary/Keyword: Distributed Differential Privacy

Privacy-Preserving Aggregation of IoT Data with Distributed Differential Privacy

  • Lim, Jong-Hyun; Kim, Jong-Wook
    • Journal of the Korea Society of Computer and Information, v.25 no.6, pp.65-72, 2020
  • Today, the Internet of Things (IoT) is used in many places, including homes, industrial sites, and hospitals, to make life more convenient. Many services generate new value through real-time data collection, storage, and analysis as devices are connected to the network, and many fields are creating services and applications that use the sensors and communication functions within IoT devices. However, since any connected device can be hacked, this poses a serious privacy threat to the users who provide the data. For example, a variety of sensitive information, such as personal details, lifestyle patterns, and the existence of diseases, can be leaked if data generated by smartwatches is abused. The development of IoT must therefore be accompanied by the development of security. Recently, Differential Privacy (DP) has been adopted for privacy-preserving data processing, so we propose a DP-based method that can safely aggregate health data on the smartwatch platform.
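
A minimal sketch of the idea described above: in the distributed (local) model of DP, each user perturbs their own reading with the standard Laplace mechanism before it is sent, so the aggregator never sees a raw value. The step-count distribution, clipping bound, and ε below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Standard epsilon-DP Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical smartwatch data: daily step counts for 1,000 users.
rng = np.random.default_rng(0)
step_counts = rng.normal(loc=8000, scale=2500, size=1000).clip(0, 30000)

# Each user perturbs their own reading before sending it to the aggregator.
epsilon = 1.0
bound = 30000.0  # assumed clipping bound, which fixes the sensitivity of one report
noisy = [laplace_mechanism(min(s, bound), bound, epsilon) for s in step_counts]

print(f"true mean:        {np.mean(step_counts):8.1f}")
print(f"private estimate: {np.mean(noisy):8.1f}")
```

Although each individual report is heavily noised, the noise averages out across many users, which is why a usable aggregate can survive per-user perturbation.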

Clustering-Based Federated Learning for Enhancing Data Privacy in Internet of Vehicles

  • Zilong Jin; Jin Wang; Lejun Zhang
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.6, pp.1462-1477, 2024
  • With the evolving complexity of connected-vehicle features, the volume and diversity of data generated during driving continue to escalate. Enabling data sharing among interconnected vehicles holds promise for improving users' driving experiences and alleviating traffic congestion. Yet the unintentional disclosure of users' private information through data sharing poses a risk, potentially compromising the interests of vehicle users and, in certain cases, endangering driving safety. Federated learning (FL) is a newly emerged distributed machine-learning paradigm that is expected to play a prominent role in privacy-preserving learning for autonomous vehicles. While FL holds significant potential to enhance the architecture of the Internet of Vehicles (IoV), the dynamic mobility of vehicles poses a considerable challenge to integrating FL with vehicular networks. In this paper, a novel clustered FL framework is proposed that reduces communication overhead and protects data privacy. By assessing the similarity among feature vectors, vehicles are categorized into distinct clusters, and an optimal vehicle is elected as each cluster's head, which improves the efficiency of personalized data processing and model training while reducing communication overhead. Simultaneously, a Local Differential Privacy (LDP) mechanism is incorporated during local training to safeguard vehicle privacy. Simulation results on the 20newsgroups and MNIST datasets validate the effectiveness of the proposed scheme, indicating that it ensures data privacy while reducing communication overhead.
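
To make the cluster-then-privatize pipeline above concrete, here is a minimal sketch under stated assumptions: vehicles are grouped greedily by feature-vector cosine similarity, and each local update is clipped and noised before leaving the vehicle. The greedy threshold clustering, the simplified Laplace calibration, and all numbers are assumptions for illustration; the paper's head-election procedure and LDP mechanism may differ.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_clusters(features: np.ndarray, threshold: float) -> list[list[int]]:
    """Greedy clustering: a vehicle joins the first cluster whose head is
    similar enough to it, otherwise it founds a new cluster as its head."""
    heads, clusters = [], []
    for i, v in enumerate(features):
        for c, head in enumerate(heads):
            if cosine_similarity(v, head) >= threshold:
                clusters[c].append(i)
                break
        else:
            heads.append(v)
            clusters.append([i])
    return clusters

def ldp_perturb(update: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Clip a local model update and add Laplace noise before it leaves the
    vehicle (the noise scale is a simplified calibration for illustration)."""
    scaled = update * min(1.0, clip / np.linalg.norm(update))
    return scaled + np.random.laplace(0.0, clip / epsilon, size=update.shape)

# Toy run: 6 vehicles with random feature vectors and local model updates.
rng = np.random.default_rng(1)
features = rng.normal(size=(6, 8))
updates = rng.normal(size=(6, 4))
print("clusters:", assign_clusters(features, threshold=0.3))
private_updates = [ldp_perturb(u, epsilon=1.0) for u in updates]
```

Only the privatized updates travel to the cluster head, so raw driving data never leaves the vehicle, mirroring the privacy argument sketched in the abstract.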

A Group based Privacy-preserving Data Perturbation Technique in Distributed OSN (분산 OSN 환경에서 프라이버시 보호를 위한 그룹 기반의 데이터 퍼튜베이션 기법)

  • Lee, Joohyoung; Park, Seog
    • KIISE Transactions on Computing Practices, v.22 no.12, pp.675-680, 2016
  • The development of various mobile devices and mobile-platform technology has led to a steady increase in the number of online social network (OSN) users. OSN users are free to communicate and share information through activities such as social networking, but this raises new user-privacy issues. Various distributed OSN architectures have been introduced to address these concerns; however, users still do not obtain technically complete control over their data. In this study, the control rights of OSN users are maintained by using personal data storage (PDS). We propose a technique that improves data-privacy protection by grouping the user's friends and generating fake text data from the user's real text data. The fake text is generated according to per-word sensitivity values, so that different groups of the user's friends receive differentiated versions of the data. As a result, we propose a system architecture that addresses the trade-off between service utility and user privacy in OSNs.
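
As a rough illustration of perturbation driven by word-sensitivity values, the sketch below replaces words whose sensitivity exceeds a friend group's trust level, so different groups see different versions of the same post. The paper generates fake text rather than redacting it, and the sensitivity scores, trust levels, and helper names here are hypothetical.

```python
# Hypothetical per-word sensitivity scores (0 = public, 1 = highly sensitive);
# the paper derives such values from the user's data, here they are hard-coded.
word_sensitivity = {"diabetes": 0.9, "hospital": 0.6, "coffee": 0.1}

# Each friend group is assigned a trust level; words more sensitive than the
# group's trust level are replaced in the version that group receives.
GROUP_TRUST = {"close_friends": 0.95, "acquaintances": 0.5, "public": 0.2}

def perturb_post(post: str, group: str, replacement: str = "[redacted]") -> str:
    trust = GROUP_TRUST[group]
    words = []
    for w in post.split():
        score = word_sensitivity.get(w.lower().strip(".,"), 0.0)
        words.append(replacement if score > trust else w)
    return " ".join(words)

post = "Visited the hospital today, my diabetes checkup went fine over coffee."
for group in GROUP_TRUST:
    print(f"{group}: {perturb_post(post, group)}")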

Utility Analysis of Federated Learning Techniques through Comparison of Financial Data Performance (금융데이터의 성능 비교를 통한 연합학습 기법의 효용성 분석)

  • Jang, Jinhyeok; An, Yoonsoo; Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.2, pp.405-416, 2022
  • Current AI technology improves the quality of life through machine learning based on data. When distributed data are transmitted and collected in one place for machine learning, they must go through a de-identification process because of the risk of privacy infringement. De-identification damages or omits information, which degrades the performance of machine learning and complicates preprocessing. Accordingly, in 2016 Google announced federated learning, a method of training models without collecting the data onto one server. This paper analyzes the utility of federated learning by comparing, on actual financial data, the learning performance of data de-identified with k-anonymity and of reproduced (synthetic) data generated under differential privacy. In the experiments, accuracy was 79% for k=2, 76% for k=5, and 52% for k=7 under k-anonymity; 50% for ε=1 and 82% for ε=0.1 under differential privacy; and 86% for federated learning.
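
For context, the federated-learning side of such a comparison typically follows the FedAvg pattern: each party trains on its own private data, and only model weights are shared and averaged, so no raw records leave their owner. The sketch below shows this on a toy logistic-regression task; the data, model, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on a client's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, clients):
    """FedAvg: clients train locally; only weights are shared and averaged."""
    local_ws = [local_step(global_w.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

# Toy setup: 3 "banks", each holding private tabular data with 4 features.
rng = np.random.default_rng(42)
true_w = np.array([1.0, -2.0, 0.5, 0.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    clients.append((X, y))

w = np.zeros(4)
for _ in range(100):
    w = federated_round(w, clients)
print("learned weights:", np.round(w, 2))
```

Because only weight vectors move between parties, no de-identification of the underlying records is needed, which is the utility advantage the abstract's accuracy comparison is probing.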