• Title/Summary/Keyword: 데이터 프라이버시

Search Results: 453

A Study on Privacy Attitude and Protection Intent of MyData Users: The Effect of Privacy cynicism (마이데이터 이용자의 프라이버시 태도와 보호의도에 관한 연구: 프라이버시 냉소주의의 영향)

  • Jung, Hae-Jin;Lee, Jin-Hyuk
    • Informatization Policy
    • /
    • v.29 no.2
    • /
    • pp.37-65
    • /
    • 2022
  • This article analyzes the relationships among the privacy attitudes of MyData users, the four dimensions of privacy cynicism (distrust, uncertainty, powerlessness, and resignation), and privacy protection intentions using a structural equation model. First, MyData users' internet skills had a statistically significant negative effect on 'resignation' among the privacy cynicism dimensions. Second, privacy risks had positive effects on 'distrust' in MyData operators, 'uncertainty' about privacy control, and 'powerlessness'. Third, privacy concerns had positive effects on the 'distrust' and 'uncertainty' dimensions of privacy cynicism and a negative effect on 'resignation'. Fourth, only 'resignation' among the privacy cynicism dimensions had a negative effect on privacy protection intention. Overall, internet skills emerged as a variable that can alleviate privacy cynicism, whereas privacy risks and privacy concerns reinforce it. Within privacy cynicism, 'resignation' offsets privacy concerns and lowers privacy protection intentions.
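
A minimal sketch of how the reported paths could be estimated, using ordinary regressions as a simplified stand-in for the paper's full structural equation model; all column names (internet_skills, privacy_risk, protection_intent, etc.) are hypothetical placeholders, not the study's actual variables.

```python
# Simplified path-analysis sketch of the relationships described in the abstract.
# Plain OLS regressions stand in for the structural equation model; column names
# are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata_survey.csv")  # hypothetical survey data file

# Cynicism dimension 'resignation' regressed on its hypothesized antecedents.
resignation = smf.ols(
    "resignation ~ internet_skills + privacy_risk + privacy_concern", data=df
).fit()

# Protection intention regressed on the four cynicism dimensions.
intention = smf.ols(
    "protection_intent ~ distrust + uncertainty + powerlessness + resignation",
    data=df,
).fit()

print(resignation.params)  # expect a negative coefficient on internet_skills
print(intention.params)    # expect a negative coefficient on resignation
```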

Models for Privacy-preserving Data Publishing : A Survey (프라이버시 보호 데이터 배포를 위한 모델 조사)

  • Kim, Jongseon;Jung, Kijung;Lee, Hyukki;Kim, Soohyung;Kim, Jong Wook;Chung, Yon Dohn
    • Journal of KIISE
    • /
    • v.44 no.2
    • /
    • pp.195-207
    • /
    • 2017
  • In recent years, data have been actively exploited in various fields, creating a strong demand for sharing and publishing data. However, published data that contain sensitive personal information can breach an individual's privacy. To publish data while protecting individual privacy with minimal information distortion, privacy-preserving data publishing (PPDP) has been studied. PPDP assumes various attacker models and has been developed around privacy models, which are principles for protecting against privacy-breaching attacks. In this paper, we first present the concept of privacy-breaching attacks. We then classify privacy models according to the attacks they defend against, and clarify the differences and requirements of each privacy model.
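
To make the notion of a privacy-breaching attack concrete, the sketch below shows a simple linkage attack: a de-identified release is joined with a public external table on shared quasi-identifiers, re-identifying individuals. The tables, values, and column names are invented for illustration and are not from the paper.

```python
# Illustration of a linkage attack on a naively de-identified release.
# All data and column names are invented for demonstration purposes.
import pandas as pd

released = pd.DataFrame({          # names removed, but quasi-identifiers kept
    "zipcode": ["13488", "13488", "02139"],
    "age":     [34, 52, 34],
    "disease": ["flu", "cancer", "HIV"],
})
voter_list = pd.DataFrame({        # public external data with identities
    "name":    ["Alice", "Bob"],
    "zipcode": ["02139", "13488"],
    "age":     [34, 52],
})

# Joining on the quasi-identifiers (zipcode, age) re-identifies individuals.
linked = voter_list.merge(released, on=["zipcode", "age"])
print(linked)   # Bob -> cancer, Alice -> HIV : privacy breach
```

Privacy models such as k-anonymity were introduced precisely to block this kind of join by making each quasi-identifier combination indistinguishable within a group.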

Privacy Preserving Clustering (프라이버시를 보존하는 군집화)

  • Yoo Hyun-Jin;Kim Min-Ho;Ramakrishna R.S.
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.11a
    • /
    • pp.473-476
    • /
    • 2004
  • This paper deals with privacy-preserving data mining. In data mining, which extracts useful information from massive datasets, preserving the privacy of the underlying data has become increasingly important. To prevent privacy breaches, noise-perturbed data are used instead of the actual data, and only the probability density function (PDF) of the data is reconstructed from the perturbed data. Privacy is preserved by applying the reconstructed PDF directly to data mining techniques such as classification. However, the one-dimensional PDF used for classification is inadequate for clustering. This paper therefore presents a method that reconstructs the joint probability density function (joint PDF) from the noise-perturbed data and performs clustering using only the reconstructed joint PDF.
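
A rough sketch of the general idea, not the authors' algorithm: the miner only ever sees noise-perturbed records and a density model fitted to them, and clusters points drawn from that density rather than the original records. Here a kernel density estimate of the perturbed data is used as a crude stand-in for the paper's joint-PDF reconstruction step.

```python
# Crude sketch of density-based, privacy-preserving clustering: the clustering
# step never touches the original records, only a density estimated from
# noise-perturbed data. Simplified illustration, not the paper's method.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
original = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
perturbed = original + rng.normal(0, 1.5, original.shape)   # noise added at the source

# Estimate a joint density from the perturbed data only.
kde = KernelDensity(bandwidth=1.0).fit(perturbed)

# Cluster synthetic points drawn from the estimated density.
samples = kde.sample(500, random_state=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
print(np.bincount(labels))   # two clusters recovered without the raw records
```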


The Privacy Safety of Public Data: A Case Study on Medical Statistics HIRA-NPS 2011 (공개 데이터의 프라이버시 안전성: 진료정보 통계자료 HIRA-NPS 2011 사례 분석)

  • Kim, Soohyung;Chung, Yon Dohn;Lee, Ki Yong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.786-789
    • /
    • 2013
  • Data containing personal information are released by many organizations for various purposes. Because such published data can cause privacy problems, privacy protection must always be considered before release. However, much of the data currently being published undergoes an insufficient privacy protection process. This paper analyzes the privacy safety of data containing personal information, using the 2011 National Patient Sample (HIRA-NPS) released by the Health Insurance Review and Assessment Service. For the analysis, we define safety measures by adopting the widely used privacy models k-anonymity and l-diversity, apply these measures to the actual data to quantify its privacy safety, and discuss the implications of the results.
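
The two measures used in the case study can be computed directly from a released table: k-anonymity is the size of the smallest group sharing the same quasi-identifier values, and l-diversity is the smallest number of distinct sensitive values within any such group. The sketch below assumes hypothetical column names; the actual HIRA-NPS schema differs.

```python
# Measuring the k-anonymity and l-diversity of a published table with pandas.
# Column names are hypothetical placeholders, not the HIRA-NPS schema.
import pandas as pd

df = pd.read_csv("published_table.csv")        # hypothetical released file
quasi_identifiers = ["sex", "age_group", "region"]
sensitive = "diagnosis_code"

groups = df.groupby(quasi_identifiers)
k = groups.size().min()                        # smallest equivalence class
l = groups[sensitive].nunique().min()          # fewest distinct sensitive values

print(f"table satisfies {k}-anonymity and {l}-diversity")
```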

Case Study on Local Differential Privacy in Practice : Privacy Preserving Survey (로컬 차분 프라이버시 실제 적용 사례연구 : 프라이버시 보존형 설문조사)

  • Jeong, Sooyong;Hong, Dowon;Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.1
    • /
    • pp.141-156
    • /
    • 2020
  • Differential privacy, which is used to collect and analyze data while preserving privacy, has been widely applied in privacy-preserving data applications. In the local model of differential privacy, each user perturbs his or her own data with a randomized response mechanism before releasing it. The user's privacy is thereby preserved, while the data analyst can still derive useful statistics from the many collected responses. Local differential privacy has been adopted by global companies such as Google, Apple, and Microsoft to collect and analyze user data. In this paper, we compare and analyze local differential privacy methods used in practice and then study the applicability of local differential privacy to real-world survey and opinion-poll scenarios.
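
The basic local mechanism for a yes/no survey question is randomized response: each respondent answers truthfully only with a probability determined by the privacy budget, and the analyst debiases the aggregate. A minimal sketch, with an illustrative epsilon and simulated answers:

```python
# Warner-style randomized response, the basic local differential privacy
# mechanism for a binary survey question. Parameters and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
epsilon = 1.0
p = np.exp(epsilon) / (np.exp(epsilon) + 1)    # probability of answering truthfully

true_answers = rng.random(10_000) < 0.3        # hidden true proportion: 30%
keep = rng.random(true_answers.size) < p
reported = np.where(keep, true_answers, ~true_answers)   # each user flips locally

# The analyst sees only the noisy reports and debiases the aggregate.
observed = reported.mean()
estimate = (observed - (1 - p)) / (2 * p - 1)
print(f"estimated proportion: {estimate:.3f}")   # close to 0.30
```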

Privacy Preserving Data Mining Methods and Metrics Analysis (프라이버시 보존형 데이터 마이닝 방법 및 척도 분석)

  • Hong, Eun-Ju;Hong, Do-won;Seo, Chang-Ho
    • Journal of Digital Convergence
    • /
    • v.16 no.10
    • /
    • pp.445-452
    • /
    • 2018
  • In a world where everything in life is being digitized, the amount of data is increasing exponentially. These data are processed into new data through collection and analysis, and the new data are used for a variety of purposes in hospitals, finance, and business. However, because the original data contain sensitive personal information, there is a risk of privacy exposure during collection and analysis. Privacy-preserving data mining (PPDM) addresses this problem: it extracts useful information from data while preserving privacy. In this paper, we survey PPDM and analyze various metrics for evaluating the privacy and utility of data.
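
As a small illustration of the kind of metrics such surveys compare, the sketch below computes one utility measure (error of a query answered on perturbed data) and one privacy measure (how far perturbed values stray from the originals) for an additive-noise scheme. The data and the specific formulas are illustrative, not the paper's.

```python
# Illustrative utility and privacy metrics for an additive-noise PPDM scheme.
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(50, 10, 1_000)
perturbed = original + rng.normal(0, 5, original.size)   # noise-perturbed release

# Utility: relative error of a query (the mean) answered on the perturbed data.
utility_loss = abs(perturbed.mean() - original.mean()) / abs(original.mean())

# Privacy: average absolute reconstruction error an attacker faces per record.
privacy = np.mean(np.abs(perturbed - original))

print(f"utility loss: {utility_loss:.4f}, privacy (mean |noise|): {privacy:.2f}")
```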

An Extended Role-based Access Control Model with Privacy Enforcement (프라이버시 보호를 갖는 확장된 역할기반 접근제어 모델)

  • 박종화;김동규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1076-1085
    • /
    • 2004
  • Privacy enforcement has been one of the most important problems in the IT area. Privacy protection can be achieved by enforcing privacy policies within an organization's data processing systems. Traditional security models are more or less inappropriate for enforcing basic privacy requirements, such as privacy binding. This paper proposes an extended role-based access control (RBAC) model for enforcing privacy policies within an organization. To provide privacy protection and context-based access control, the model combines RBAC, Domain-Type Enforcement, and privacy policies. Privacy policies assign privacy levels to user roles according to their tasks and assign data privacy levels to data according to consented consumer privacy preferences recorded as data usage policies. A small hospital model is considered as an application of this model.
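
A minimal sketch, in the spirit of the proposed model rather than its exact formulation, of an access check that combines role clearance, data privacy level, and purpose binding; the roles, levels, and purposes below are invented for a hospital-like scenario.

```python
# Sketch of an RBAC access check extended with privacy levels and purposes.
# Roles, levels, and purposes are hypothetical illustrations.
from dataclasses import dataclass

ROLE_PRIVACY_LEVEL = {"doctor": 3, "nurse": 2, "billing_clerk": 1}
ROLE_PERMITTED_PURPOSES = {
    "doctor": {"treatment"},
    "nurse": {"treatment"},
    "billing_clerk": {"billing"},
}

@dataclass
class DataItem:
    name: str
    privacy_level: int         # derived from consented data usage policies
    allowed_purposes: set

def can_access(role: str, item: DataItem, purpose: str) -> bool:
    """Allow access only if the role's clearance covers the data's privacy
    level and the stated purpose is permitted for both the role and the data."""
    return (
        ROLE_PRIVACY_LEVEL[role] >= item.privacy_level
        and purpose in ROLE_PERMITTED_PURPOSES[role]
        and purpose in item.allowed_purposes
    )

record = DataItem("diagnosis", privacy_level=3, allowed_purposes={"treatment"})
print(can_access("doctor", record, "treatment"))        # True
print(can_access("billing_clerk", record, "billing"))   # False: clearance too low
```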

A Study on Synthetic Data Generation Based Safe Differentially Private GAN (차분 프라이버시를 만족하는 안전한 GAN 기반 재현 데이터 생성 기술 연구)

  • Kang, Junyoung;Jeong, Sooyong;Hong, Dowon;Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.945-956
    • /
    • 2020
  • The publication of data is essential for receiving high-quality services from many applications. However, if the original data are published as is, there is a risk that sensitive information (political orientation, disease, etc.) may be revealed. Therefore, much research has proposed generating and publishing synthetic data rather than the original data in order to preserve privacy. However, even simply generating and publishing synthetic data still carries a risk of privacy leakage through various attacks (linkage attacks, inference attacks, etc.). In this paper, to prevent the leakage of such sensitive information, we propose a synthetic data generation algorithm that preserves privacy by applying differential privacy, the latest privacy protection technique, to GAN, which has attracted attention as a generative model for synthetic data. The generative model uses CGAN for efficient learning of labeled data and applies Rényi differential privacy, a relaxation of differential privacy, considering the utility of the data. The utility of the generated data is validated and compared using various classifiers.
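
Differentially private GAN training typically privatizes the discriminator's gradient updates. The sketch below shows the core step in plain NumPy, clipping each per-example gradient and adding Gaussian noise; it is a generic illustration of that step, not the paper's code, and the clip norm and noise multiplier are arbitrary.

```python
# Sketch of a differentially private gradient step of the kind used when
# training a (C)GAN discriminator under DP: per-example clipping plus
# Gaussian noise. Pure-NumPy illustration with arbitrary parameters.
import numpy as np

def dp_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Return a privatized average gradient from per-example gradients."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip each example
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, total.shape)
    return (total + noise) / len(per_example_grads)

batch_grads = [np.random.default_rng(i).normal(size=4) for i in range(32)]
print(dp_gradient(batch_grads))
```

The cumulative privacy cost of repeating such noisy steps over training is then tracked with a Rényi differential privacy accountant, which is the relaxation the paper adopts.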

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments (AI 환경에서 모델 전도 공격에 안전한 차분 프라이버시 기술)

  • Park, Cheollhee;Hong, Dowon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.3
    • /
    • pp.589-598
    • /
    • 2019
  • The amount of digital data is growing explosively, and these data have large potential value. Countries and companies create various added value from vast amounts of data and invest heavily in data analysis techniques. The privacy problems that arise in data analysis are a major factor hindering data utilization. Recently, as privacy violation attacks on neural network models have been proposed, research on artificial neural network techniques that preserve privacy is required. Accordingly, various privacy-preserving neural network techniques have been studied in the field of differential privacy, which guarantees strict privacy. However, existing approaches do not strike an appropriate balance between the accuracy of the neural network model and the privacy budget. In this paper, we study a differential privacy technique that preserves model performance within a given privacy budget and is resistant to model inversion attacks. We also analyze the resistance to model inversion attacks according to the strength of privacy preservation.
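
The trade-off between accuracy and the privacy budget comes from how much noise must be injected for a given (ε, δ). As a reference point, the sketch below uses the classical Gaussian mechanism calibration; this is the standard textbook formula, not the paper's specific scheme.

```python
# Classical Gaussian mechanism: calibrating the noise scale to a privacy
# budget (epsilon, delta) for a query with a given L2 sensitivity.
import math
import numpy as np

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Noise scale satisfying (epsilon, delta)-DP for epsilon in (0, 1)."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

sigma = gaussian_sigma(epsilon=0.5, delta=1e-5, sensitivity=1.0)
noisy_output = 42.0 + np.random.default_rng(0).normal(0.0, sigma)
print(f"sigma = {sigma:.2f}, noisy output = {noisy_output:.2f}")
```

A smaller epsilon (stronger privacy, better resistance to inversion of training data from the model) forces a larger sigma and therefore a larger accuracy loss, which is the balance the paper investigates.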

Noise Averaging Effect on Privacy-Preserving Clustering of Time-Series Data (시계열 데이터의 프라이버시 보호 클러스터링에서 노이즈 평준화 효과)

  • Moon, Yang-Sae;Kim, Hea-Suk
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.356-360
    • /
    • 2010
  • Recently, there have been many research efforts on privacy-preserving data mining, in which preserving the accuracy of mining results is as important as preserving privacy. The random perturbation technique is known to preserve privacy well, but it has the problem of destroying the distance order among time series. In this paper, we propose the notion of the noise averaging effect of piecewise aggregate approximation (PAA), which preserves clustering accuracy as much as possible in time-series clustering. Based on the noise averaging effect, we define the PAA distance for computing distances, and we show that the PAA distance can alleviate the problem of destroyed distance orders when time series are randomly perturbed.
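
The intuition behind the noise averaging effect is that averaging a series over segments cancels much of the independent perturbation noise, so distances computed on PAA representations are distorted less than raw Euclidean distances. The sketch below is an illustrative implementation under that assumption, not the authors' code.

```python
# Piecewise aggregate approximation (PAA) and the noise averaging effect:
# segment averaging cancels much of the i.i.d. perturbation noise, so the
# distance between two series changes less after perturbation when measured
# on PAA representations. Illustrative code only.
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Average a time series over equal-length segments."""
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
a, b = np.sin(t), np.sin(t + 0.3)
noise = lambda: rng.normal(0, 0.5, t.size)            # random perturbation
a_pert, b_pert = a + noise(), b + noise()

raw_shift = abs(np.linalg.norm(a - b) - np.linalg.norm(a_pert - b_pert))
paa_shift = abs(
    np.linalg.norm(paa(a, 16) - paa(b, 16))
    - np.linalg.norm(paa(a_pert, 16) - paa(b_pert, 16))
)
print(f"distance distortion  raw: {raw_shift:.3f}  PAA: {paa_shift:.3f}")
```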