• Title/Summary/Keyword: Differential Privacy (차분프라이버시)

A Study on De-identification Techniques Based on Differential Privacy (차분 프라이버시 기반 비식별화 기술에 대한 연구)

  • Jung, Kangsoo;Park, Seog
    • Review of KIISC, v.28 no.2, pp.61-77, 2018
  • Differential privacy is a mathematical model for preventing the inference of personal information from the results of queries executed on a statistical database. Since it was first introduced by Dwork in 2006, it has become the standard for privacy protection of statistical data. Differential privacy limits information disclosure by keeping the change in a query result caused by the insertion, deletion, or modification of a single record below a fixed bound. To realize this, research has been conducted both on mechanisms (the Laplace mechanism and the exponential mechanism) and on applying differential privacy to various data analysis settings (histograms, regression, decision trees, association inference, clustering, deep learning, etc.). This paper surveys the field, from an understanding of differential privacy as originally proposed by Dwork to the current state of its deployment at Apple and Google, and outlines future research topics. (A minimal sketch of the Laplace mechanism follows below.)
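
To make the Laplace mechanism mentioned above concrete, here is a minimal Python sketch. It is a generic textbook illustration, not code from the paper; the toy data, the counting query, and the epsilon value are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace noise of scale sensitivity/epsilon,
    which makes the released answer epsilon-differentially private."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Toy counting query: how many records have age >= 40? (data is illustrative)
ages = np.array([23, 35, 47, 52, 61, 29, 44])
true_count = int(np.sum(ages >= 40))    # exact answer: 4
# Adding/removing one record changes a count by at most 1, so sensitivity is 1.
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(f"true={true_count}, noisy={noisy_count:.2f}")
```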

Case Study on Local Differential Privacy in Practice : Privacy Preserving Survey (로컬 차분 프라이버시 실제 적용 사례연구 : 프라이버시 보존형 설문조사)

  • Jeong, Sooyong;Hong, Dowon;Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.1, pp.141-156, 2020
  • Differential privacy, used to collect and analyze data while preserving privacy, has been widely applied in privacy-preserving data applications. Local differential privacy, the local model of differential privacy, lets each user perturb his or her own data with randomized response before releasing it. The user's privacy is thus preserved, while the data analyst can still derive useful statistics from the many collected reports. Local differential privacy has been used by global companies such as Google, Apple, and Microsoft to collect and analyze data from users. In this paper, we compare and analyze the local differential privacy methods used in practice, and then study the applicability of local differential privacy to survey and opinion-poll scenarios. (A randomized-response sketch follows below.)
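
The classic randomized-response mechanism underlying such privacy-preserving surveys can be sketched as follows. This is a generic illustration under assumed parameters (the 30% true "yes" rate and epsilon are made up), not the paper's protocol.

```python
import numpy as np

def randomized_response(answer, epsilon, rng):
    """Perturb a yes/no answer (True/False) with epsilon-LDP randomized
    response: keep the truth with probability e^eps / (e^eps + 1)."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return answer if rng.random() < p_keep else not answer

def estimate_true_rate(reports, epsilon):
    """Unbiased estimate of the population 'yes' rate from perturbed reports."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(1)
true_answers = rng.random(10_000) < 0.3   # 30% of users truly answer "yes"
reports = [randomized_response(a, epsilon=1.0, rng=rng) for a in true_answers]
print(f"estimated yes-rate: {estimate_true_rate(reports, 1.0):.3f}")
```

Each user only ever releases a flipped coin's worth of information about the true answer, yet the aggregate estimate converges to the population rate as the number of respondents grows.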

A Differentially Private K-Means Clustering using Quadtree and Uniform Sampling (쿼드트리와 균등 샘플링을 이용한 효과적 차분 프라이버시 K-평균 클러스터링 알고리즘)

  • Hong, Daeyoung;Goo, Hanjun;Shim, Kyuseok
    • Proceedings of the Korea Contents Association Conference, 2018.05a, pp.25-26, 2018
  • Methods for protecting privacy when publishing data have recently been studied. Among them, differential privacy is an anonymization technique proven secure even against attacks such as the minimality attack. In this paper, we improve the performance of an existing differentially private k-means clustering algorithm and validate the improvement through experiments on real-world data.

An Improved Differentially Private Histogram Publication Algorithm (차분 프라이버시 히스토그램 공개 알고리즘의 개선)

  • Goo, Hanjun;Jung, Woohwan;Shim, Kyuseok
    • Proceedings of the Korea Contents Association Conference, 2018.05a, pp.23-24, 2018
  • Differentially private techniques, which can protect personal information regardless of an attacker's background knowledge, have recently been studied. In this paper, we introduce a differentially private histogram publication algorithm that uses a small number of buckets, describe a problem in the heuristic used by an existing algorithm, and present an improvement. Experiments show that the improved method achieves better range-sum query accuracy than the existing algorithm. (A noisy-histogram sketch follows below.)
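
The bucket-count trade-off at the heart of this line of work can be shown in a few lines. The sketch below is an illustrative baseline (equi-width buckets with per-bucket Laplace noise), not the paper's improved algorithm; the data distribution, epsilon, and bucket counts are assumptions.

```python
import numpy as np

def publish_noisy_histogram(data, bin_edges, epsilon, rng):
    """Publish a histogram with Laplace noise added to each bucket count.
    One record affects exactly one bucket, so each count has sensitivity 1
    and the whole histogram satisfies epsilon-differential privacy."""
    counts, _ = np.histogram(data, bins=bin_edges)
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def range_sum(noisy_counts, lo, hi):
    """Answer a range-sum query over bucket indices [lo, hi). The noise
    variance grows with the number of buckets summed, which is why fewer,
    well-chosen buckets can improve range-sum accuracy."""
    return noisy_counts[lo:hi].sum()

rng = np.random.default_rng(2)
data = rng.normal(50, 15, size=5_000)
fine = publish_noisy_histogram(data, np.linspace(0, 100, 101), 0.1, rng)   # 100 buckets
coarse = publish_noisy_histogram(data, np.linspace(0, 100, 11), 0.1, rng)  # 10 buckets
print(range_sum(fine, 40, 60), range_sum(coarse, 4, 6))   # both cover [40, 60)
```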

A Study on Synthetic Data Generation Based Safe Differentially Private GAN (차분 프라이버시를 만족하는 안전한 GAN 기반 재현 데이터 생성 기술 연구)

  • Kang, Junyoung;Jeong, Sooyong;Hong, Dowon;Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.5, pp.945-956, 2020
  • The publication of data is essential for receiving high-quality services from many applications. However, if the original data is published as-is, sensitive information (political orientation, disease, etc.) may be revealed. Much research has therefore proposed generating and publishing synthetic data instead of the original data to preserve privacy. Yet even synthetic data that is simply generated and published carries a risk of privacy leakage through various attacks (linkage attacks, inference attacks, etc.). In this paper, to prevent the leakage of such sensitive information, we propose a synthetic data generation algorithm that preserves privacy by applying differential privacy, the state-of-the-art privacy protection technique, to GANs, which are drawing attention as generative models for synthetic data. The generative model uses CGAN for efficient learning of labeled data and applies Rényi differential privacy, a relaxation of differential privacy, in consideration of data utility. The utility of the generated data is validated and compared using various classifiers. (A gradient-perturbation sketch follows below.)
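
Rényi-DP training of a generative model is typically realized through DP-SGD-style gradient perturbation: clip each per-example gradient, average, and add Gaussian noise. The sketch below illustrates that step on a simple least-squares model, not the paper's CGAN; the model, data, and all hyperparameters are placeholder assumptions.

```python
import numpy as np

def dp_gradient_step(w, X, y, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD-style step for least-squares: clip each per-example
    gradient to L2 norm clip_norm, average, and add Gaussian noise with
    std noise_multiplier * clip_norm / n (the Gaussian mechanism whose
    cumulative cost is tracked with Rényi differential privacy)."""
    n = len(X)
    grads = (X @ w - y)[:, None] * X                       # per-example grads
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)     # clip to clip_norm
    noisy_mean = grads.mean(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm / n, size=w.shape)
    return w - lr * noisy_mean

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=256)
w = np.zeros(3)
for _ in range(200):
    w = dp_gradient_step(w, X, y, lr=0.1, clip_norm=1.0,
                         noise_multiplier=1.1, rng=rng)
print(w)   # approaches [1.0, -2.0, 0.5], blurred by the injected noise
```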

Differentially Private k-Means Clustering based on Dynamic Space Partitioning using a Quad-Tree (쿼드 트리를 이용한 동적 공간 분할 기반 차분 프라이버시 k-평균 클러스터링 알고리즘)

  • Goo, Hanjun;Jung, Woohwan;Oh, Seongwoong;Kwon, Suyong;Shim, Kyuseok
    • Journal of KIISE, v.45 no.3, pp.288-293, 2018
  • There have recently been several studies investigating how to apply privacy-preserving techniques to data publication. Differential privacy can protect personal information regardless of an attacker's background knowledge by adding probabilistic noise to the original data. To perform differentially private k-means clustering, an existing algorithm builds a differentially private histogram and then runs k-means on it. Because it constructs an equi-width histogram without considering the data distribution, there are many buckets to which noise must be added. We propose a k-means clustering algorithm using a quad-tree that captures the data distribution with a small number of buckets. Our experiments show that the proposed algorithm outperforms the existing algorithm. (A quad-tree partitioning sketch follows below.)
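
The following sketch shows the general idea of quad-tree-based noisy counting: split 2-D space recursively and release Laplace-noised leaf counts on which a weighted k-means can then run. It is a simplified illustration under assumed parameters (fixed depth, no adaptive stopping rule), not the paper's algorithm.

```python
import numpy as np

def noisy_quadtree(points, lo, hi, epsilon, depth, rng, cells):
    """Recursively split 2-D space into quadrants; at the leaves, release a
    Laplace-noised count. Each point falls in exactly one leaf, so a single
    budget epsilon covers all leaf counts. Adaptive variants also stop
    splitting sparse regions early (omitted here for brevity)."""
    if depth == 0:
        noisy = len(points) + rng.laplace(scale=1.0 / epsilon)
        cells.append(((lo + hi) / 2, max(noisy, 0.0)))   # (centroid, weight)
        return
    mid = (lo + hi) / 2
    xs, ys = (lo[0], mid[0], hi[0]), (lo[1], mid[1], hi[1])
    for qx in (0, 1):
        for qy in (0, 1):
            clo = np.array([xs[qx], ys[qy]])
            chi = np.array([xs[qx + 1], ys[qy + 1]])
            mask = np.all((points >= clo) & (points < chi), axis=1)
            noisy_quadtree(points[mask], clo, chi, epsilon, depth - 1, rng, cells)

rng = np.random.default_rng(4)
pts = rng.normal(0.5, 0.1, size=(1000, 2)).clip(0.0, 0.999)
cells = []
noisy_quadtree(pts, np.array([0.0, 0.0]), np.array([1.0, 1.0]),
               epsilon=0.5, depth=3, rng=rng, cells=cells)
# Weighted k-means can now run on the 4**3 = 64 cell centroids and weights.
print(len(cells), cells[0])
```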

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments (AI 환경에서 모델 전도 공격에 안전한 차분 프라이버시 기술)

  • Park, Cheolhee;Hong, Dowon
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.3, pp.589-598, 2019
  • The amount of digital data is growing explosively, and these data have large potential value. Countries and companies are creating added value from vast amounts of data and investing heavily in data analysis techniques. The privacy problems that arise in data analysis are a major factor hindering data utilization. Recently, as privacy violation attacks on neural network models have been proposed, research on privacy-preserving artificial neural network technology is required. Accordingly, various privacy-preserving neural network techniques have been studied in the field of differential privacy, which ensures strict privacy. However, striking an appropriate balance between the accuracy of the neural network model and the privacy budget remains a problem. In this paper, we study differential privacy techniques that preserve model performance within a given privacy budget and are resistant to model inversion attacks. We also analyze resistance to model inversion attacks according to the strength of privacy preservation. (A budget-composition sketch follows below.)
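
The accuracy/privacy-budget balance discussed above depends on how per-step budgets accumulate over training. The sketch below evaluates the standard textbook composition bounds (not the paper's own analysis); the per-step epsilon, iteration counts, and delta are assumed values.

```python
import numpy as np

def basic_composition(eps_step, T):
    """Basic composition: T independent eps-DP releases are (T*eps)-DP."""
    return T * eps_step

def advanced_composition(eps_step, T, delta_prime):
    """Advanced composition (Dwork, Rothblum, Vadhan): T eps-DP releases
    are (eps_total, delta_prime)-DP with the tighter eps_total below."""
    return (eps_step * np.sqrt(2 * T * np.log(1.0 / delta_prime))
            + T * eps_step * (np.exp(eps_step) - 1.0))

for T in (10, 100, 1000):   # e.g. training iterations, each 0.1-DP
    print(T, basic_composition(0.1, T),
          round(advanced_composition(0.1, T, delta_prime=1e-5), 2))
```

For long trainings the advanced bound grows like the square root of T rather than linearly, which is what makes deep learning within a fixed budget feasible at all.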

A Study of Split Learning Model to Protect Privacy (프라이버시 침해에 대응하는 분할 학습 모델 연구)

  • Ryu, Jihyeon;Won, Dongho;Lee, Youngsook
    • Convergence Security Journal, v.21 no.3, pp.49-56, 2021
  • Recently, artificial intelligence has come to be regarded as an essential technology in our society. In particular, privacy invasion in artificial intelligence has become a serious problem in modern society. Split learning, proposed at MIT in 2019 for privacy protection, is a type of federated learning technique that does not share any raw data. In this study, we investigate a safe and accurate split learning model that uses differential privacy to manage data safely. In addition, we train SVHN and GTSRB on split learning models with 15 different differential privacy configurations and check whether training remains stable. By conducting a training-data extraction attack, we quantitatively derive, via MSE, a differential privacy budget that prevents the attack. (An MSE-based leakage measurement sketch follows below.)
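
The MSE-based measurement idea can be illustrated in miniature: add Gaussian noise to the intermediate ("smashed") representation that split learning shares, attempt a reconstruction, and watch the error grow with the noise scale. The linear "client layer" and the pseudo-inverse attacker below are placeholder assumptions, not the paper's models or attack.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=(100, 8))        # stand-in for client inputs
W = rng.normal(size=(8, 8))          # stand-in for the client-side layer
W_pinv = np.linalg.pinv(W)           # a naive attacker's inversion of it

for sigma in (0.0, 0.1, 0.5, 1.0):   # noise scale ~ privacy strength
    smashed = x @ W + rng.normal(scale=sigma, size=(100, 8))
    x_hat = smashed @ W_pinv         # attempted input reconstruction
    mse = np.mean((x - x_hat) ** 2)
    print(f"sigma={sigma:.1f}  reconstruction MSE={mse:.4f}")
```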

Development of Simulation Tool to Support Privacy-Preserving Data Collection (프라이버시 보존 데이터 수집을 지원하기 위한 시뮬레이션 툴 개발)

  • Kim, Dae-Ho;Kim, Jong Wook
    • Journal of Digital Contents Society, v.18 no.8, pp.1671-1676, 2017
  • These days, data is being generated explosively in diverse industrial areas. Accordingly, many industries want to collect and analyze these data to improve their products or services. However, collecting user data can lead to significant leakage of personal information. Local differential privacy (LDP), popularized by Google, is a state-of-the-art approach for protecting individual privacy during data collection. LDP guarantees that a user's privacy is protected by perturbing the original data on the user's side, while the data collector can still obtain population statistics from the collected data. However, preventing the leakage of personal information through such perturbation can significantly reduce data utility. The degree of perturbation in LDP should therefore be set properly depending on the purposes of data collection and analysis. In this paper, we develop a simulation tool that helps the data collector choose a proper degree of perturbation in LDP by providing visualized simulation results under various parameter configurations. (A minimal simulation sketch follows below.)
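
The core computation such a tool performs can be sketched as a small Monte-Carlo loop: for each candidate epsilon, simulate randomized-response collection and report the resulting estimation error. This is a minimal illustration with assumed parameters (population size, true rate, trial count), not the tool described in the paper.

```python
import numpy as np

def simulate_ldp_error(n_users, true_rate, epsilon, trials, rng):
    """Mean absolute error of the randomized-response estimate of a
    population rate, for one (n_users, epsilon) configuration."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)   # keep-truth probability
    errs = []
    for _ in range(trials):
        truth = rng.random(n_users) < true_rate
        keep = rng.random(n_users) < p
        reports = np.where(keep, truth, ~truth)     # per-user perturbation
        est = (reports.mean() - (1 - p)) / (2 * p - 1)
        errs.append(abs(est - true_rate))
    return float(np.mean(errs))

rng = np.random.default_rng(6)
for eps in (0.5, 1.0, 2.0, 4.0):   # stronger privacy (small eps) -> larger error
    err = simulate_ldp_error(10_000, true_rate=0.3, epsilon=eps, trials=50, rng=rng)
    print(f"eps={eps}: mean abs error={err:.4f}")
```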

A Study on a Differentially Private Model for Financial Data (금융 데이터 상에서의 차분 프라이버시 모델 정립 연구)

  • Kim, Hyun-il;Park, Cheolhee;Hong, Dowon;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.27 no.6, pp.1519-1534, 2017
  • Data de-identification is a technique that preserves individual privacy while providing analysts with useful information. However, classical de-identification techniques such as k-anonymity are vulnerable to background-knowledge attacks. In contrast, differential privacy has attracted much research in recent years because it offers both strong privacy preservation and useful utility. In this paper, we analyze various models based on differential privacy and formalize a differentially private model for financial data. As a result, we show that the formalized model provides both security guarantees and good usefulness.