• Title/Abstract/Keyword: Privacy-preserving data publishing

하둡 분산 환경 기반 프라이버시 보호 빅 데이터 배포 시스템 개발 (Development of a Privacy-Preserving Big Data Publishing System in Hadoop Distributed Computing Environments)

  • 김대호;김종욱
    • Journal of Korea Multimedia Society / Vol. 20, No. 11 / pp. 1785-1792 / 2017
  • Generally, big data contains sensitive information about individuals, and thus directly releasing it for public use may violate existing privacy requirements. Therefore, privacy-preserving data publishing (PPDP) has been actively researched as a way to share big data containing personal information for public use while protecting the privacy of individuals with minimal data modification. Recently, with the increasing demand for big data sharing in various areas, there is also growing interest in software that supports privacy-preserving data publishing. Thus, in this paper, we develop a system that aims to support privacy-preserving data publishing effectively and efficiently. In particular, the developed system enables data owners to select an appropriate anonymization level by providing them with an information loss matrix. Furthermore, the system achieves high performance in data anonymization by using distributed Hadoop clusters.
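
The information loss matrix mentioned above gives data owners one loss figure per candidate anonymization level. Below is a minimal sketch of how such figures might be computed, using the Normalized Certainty Penalty (NCP) as the loss measure; the attribute, domain bounds, and generalization levels are hypothetical, not taken from the paper.

```python
# A minimal sketch (not the paper's implementation) of per-level
# information loss using the Normalized Certainty Penalty (NCP):
# the wider a generalized interval, the higher the loss.

def ncp_numeric(intervals, domain_lo, domain_hi):
    """Average NCP of a generalized numeric column, where each
    published value is an (lo, hi) interval."""
    width = domain_hi - domain_lo
    return sum((hi - lo) / width for lo, hi in intervals) / len(intervals)

# Hypothetical "age" column (domain 0-100) at three candidate levels.
ages = [23, 27, 34, 36, 41, 45]
levels = {
    "level-0 (raw)":        [(a, a) for a in ages],
    "level-1 (5-yr bins)":  [(a - a % 5, a - a % 5 + 4) for a in ages],
    "level-2 (20-yr bins)": [(a - a % 20, a - a % 20 + 19) for a in ages],
}

# The "matrix" shown to the data owner: one loss figure per level.
for name, intervals in levels.items():
    print(f"{name}: NCP = {ncp_numeric(intervals, 0, 100):.3f}")
```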

프라이버시 보호 데이터 배포를 위한 모델 조사 (Models for Privacy-preserving Data Publishing : A Survey)

  • 김종선;정기정;이혁기;김수형;김종욱;정연돈
    • Journal of KIISE / Vol. 44, No. 2 / pp. 195-207 / 2017
  • Recently, data has been actively utilized in a wide range of fields, and demand for sharing and publishing data is growing accordingly. However, if shared data contains sensitive information about individuals, privacy breaches that expose that sensitive information can occur. To publish data that includes personal information, privacy-preserving data publishing (PPDP), which protects individual privacy while modifying the data as little as possible, has been studied. PPDP research assumes various adversary models and has evolved around privacy models, the principles for protecting privacy against such adversaries' disclosure attacks. In this paper, we first review privacy disclosure attacks. We then classify privacy models according to the disclosure attacks they counter and examine the differences among the privacy models and their requirements.
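
As a concrete companion to the survey's taxonomy (our illustration, not the paper's), the sketch below checks two of the most cited privacy models on a toy table: k-anonymity, which counters record-linkage attacks, and distinct l-diversity, which counters attribute disclosure within an equivalence class. The table and attribute names are hypothetical.

```python
# Illustrative check (not from the survey) of two classic privacy
# models on a released table: k-anonymity bounds record linkage,
# distinct l-diversity bounds attribute disclosure within a group.
from collections import defaultdict

def group_by_qid(rows, qid_cols):
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[c] for c in qid_cols)].append(r)
    return groups

def is_k_anonymous(rows, qid_cols, k):
    return all(len(g) >= k for g in group_by_qid(rows, qid_cols).values())

def is_l_diverse(rows, qid_cols, sensitive, l):
    return all(len({r[sensitive] for r in g}) >= l
               for g in group_by_qid(rows, qid_cols).values())

# Toy released table; "zip" and "age" are the quasi-identifiers.
table = [
    {"zip": "130**", "age": "2*", "disease": "flu"},
    {"zip": "130**", "age": "2*", "disease": "cancer"},
    {"zip": "148**", "age": "3*", "disease": "flu"},
    {"zip": "148**", "age": "3*", "disease": "flu"},
]
print(is_k_anonymous(table, ["zip", "age"], 2))           # True
print(is_l_diverse(table, ["zip", "age"], "disease", 2))  # False: 2nd group
```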

A Solution to Privacy Preservation in Publishing Human Trajectories

  • Li, Xianming;Sun, Guangzhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp. 3328-3349 / 2020
  • With the rapid development of ubiquitous computing and location-based services (LBSs), human trajectory data and associated activities are increasingly easy to record. Inappropriately publishing trajectory data may leak users' privacy. Therefore, we study publishing trajectory data while preserving privacy, a problem we denote privacy-preserving activity trajectory publishing (PPATP). We propose S-PPATP to solve this problem. S-PPATP comprises three steps: modeling, algorithm design and algorithm adjustment. During modeling, two user models describe users' behaviors: one based on a Markov chain and the other based on the hidden Markov model. We assume a potential adversary who intends to infer users' privacy, defined as a set of sensitive information. An adversary model is then proposed to define the adversary's background knowledge and inference method. Additionally, privacy requirements and a data quality metric are defined for assessment. During algorithm design, we propose two publishing algorithms corresponding to the user models and prove that both algorithms satisfy the privacy requirement. Then, we perform a comparative analysis on utility, efficiency and speedup techniques. Finally, we evaluate our algorithms through experiments on several datasets. The experimental results verify that our proposed algorithms preserve users' privacy. We also test utility and discuss the privacy-utility tradeoff that real-world data publishers may face.
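
The first of the two user models is a Markov chain over visited locations. Here is a minimal sketch under that assumption (the paper's adversary model and publishing algorithms are considerably richer): fit first-order transition probabilities and flag transitions from which an adversary could infer a sensitive location with high confidence. The locations, sensitive set, and threshold are hypothetical.

```python
# A minimal sketch, assuming a first-order Markov model over discrete
# locations (not the paper's S-PPATP algorithms).
from collections import Counter, defaultdict

def fit_markov(trajectories):
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

trajs = [["home", "cafe", "clinic"],
         ["home", "cafe", "office"],
         ["home", "cafe", "clinic"]]
P = fit_markov(trajs)

# An adversary knowing the model and the prefix "...cafe" infers a
# visit to the sensitive location "clinic" with probability 2/3.
SENSITIVE, THRESHOLD = {"clinic"}, 0.5
for a, dist in P.items():
    for b, p in dist.items():
        if b in SENSITIVE and p > THRESHOLD:
            print(f"risky transition {a} -> {b}: p = {p:.2f}")
```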

Privacy-Constrained Relational Data Perturbation: An Empirical Evaluation

  • Deokyeon Jang;Minsoo Kim;Yon Dohn Chung
    • Journal of Information Processing Systems / Vol. 20, No. 4 / pp. 524-534 / 2024
  • The release of relational data containing personal sensitive information poses a significant risk of privacy breaches. To preserve privacy while publishing such data, it is important to apply techniques that protect sensitive information. One popular technique for this purpose is data perturbation, which is widely used for privacy-preserving data release due to its simplicity and efficiency. However, data perturbation has limitations that prevent its practical application, making alternative solutions necessary. In this study, we propose a novel approach to preserving privacy in the release of relational data containing personal sensitive information. The approach comprises an intuitive, syntactic privacy criterion for data perturbation and two perturbation methods for relational data release. Through experiments with synthetic and real data, we evaluate the performance of our methods.
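
For readers unfamiliar with the technique, data perturbation in its simplest form adds calibrated random noise to sensitive numeric values, masking individual records while aggregates survive. The sketch below shows generic additive Gaussian noise; it is our illustration, not one of the paper's two proposed methods, and the column and parameters are hypothetical.

```python
# Illustrative additive-noise perturbation of a numeric sensitive
# column (a generic sketch, not the paper's perturbation methods).
import random

def perturb(values, sigma, seed=None):
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in values]

salaries = [52_000, 61_500, 48_200, 75_300]
published = perturb(salaries, sigma=5_000, seed=42)

# Utility check: aggregate statistics should survive perturbation.
mean = lambda xs: sum(xs) / len(xs)
print(f"original mean:  {mean(salaries):,.0f}")
print(f"published mean: {mean(published):,.0f}")
```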

Efficient K-Anonymization Implementation with Apache Spark

  • Kim, Tae-Su;Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information / Vol. 23, No. 11 / pp. 17-24 / 2018
  • Today, we are living in the era of data and information. With the advent of the Internet of Things (IoT), the popularity of social networking sites, and the development of mobile devices, large amounts of data are being produced in diverse areas; the collection of such data generated in various areas is called big data. As the importance of big data grows, there has been a growing need to share big data containing information regarding individual entities. As big data contains sensitive information about individuals, directly releasing it for public use may violate existing privacy requirements. Thus, privacy-preserving data publishing (PPDP) has been actively studied to share big data containing personal information for public use while preserving the privacy of individuals. K-anonymity, the most popular method in the area of PPDP, transforms each record in a table such that at least k records have the same values for the given quasi-identifier attributes, so that each record is indistinguishable from the other records in its class. As the size of big data continues to grow, there is increasing demand for methods that can efficiently anonymize vast amounts of data. Thus, in this paper, we develop an efficient k-anonymization method using the Spark distributed framework. Experimental results show that the developed method achieves significant gains in processing time.
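
Below is a minimal PySpark sketch of the distributed pattern the abstract describes: generalize the quasi-identifiers, group records into equivalence classes, and suppress classes smaller than k. The schema, generalization rules, and k are hypothetical, and the paper's algorithm is more sophisticated than this baseline.

```python
# A minimal PySpark sketch (not the paper's algorithm): generalize
# quasi-identifiers, then suppress equivalence classes smaller than k.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("k-anon-sketch").getOrCreate()
k = 2

df = spark.createDataFrame(
    [(34, "13053", "flu"), (36, "13068", "flu"),
     (47, "14850", "cancer"), (41, "14853", "flu")],
    ["age", "zipcode", "disease"])

# Generalize: age -> 10-year band, zipcode -> 3-digit prefix.
gen = df.withColumn("age", F.concat((F.floor(F.col("age") / 10) * 10)
                                    .cast("string"), F.lit("-*"))) \
        .withColumn("zipcode", F.concat(F.substring("zipcode", 1, 3),
                                        F.lit("**")))

# Keep only equivalence classes of size >= k (suppress the rest).
sizes = gen.groupBy("age", "zipcode").count()
anon = gen.join(sizes.filter(F.col("count") >= k), ["age", "zipcode"]) \
          .drop("count")
anon.show()
spark.stop()
```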

A Study on Performing Join Queries over K-anonymous Tables

  • Kim, Dae-Ho;Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information / Vol. 22, No. 7 / pp. 55-62 / 2017
  • Recently, there has been an increasing need for the sharing of microdata containing information regarding an individual entity. As microdata usually contains sensitive information on individuals, releasing it directly for public use may violate existing privacy requirements. Thus, to avoid the privacy problems that occur through the release of microdata for public use, extensive studies have been conducted in the area of privacy-preserving data publishing (PPDP). The k-anonymity algorithm, which is the most popular method, guarantees that, for each record, the released data includes at least k-1 other records with the same values for a set of quasi-identifier attributes. Given an original table, the corresponding k-anonymous table is obtained by generalizing each record into an indistinguishable group, called an equivalence class, replacing the specific values of the quasi-identifier attributes with more general values. However, query processing over the anonymized data is a very challenging task due to the generalized attribute values. In particular, the problem becomes more challenging with an equi-join query (the most common type of query in data analysis tasks) over k-anonymous tables, since with generalized attribute values it is hard to determine whether two records are joinable. Thus, to address this challenge, in this paper we develop a novel scheme that can effectively perform an equi-join between k-anonymous tables. Experimental results show that the proposed method achieves significant gains in accuracy over a naive scheme.
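
To see why the equi-join becomes hard, note that after generalization a join key is an interval, so equality degenerates into interval overlap and a join can only produce candidate matches. The sketch below is roughly the naive baseline the paper improves on; it is our own code, and the tables and key format are hypothetical.

```python
# Sketch of why joining k-anonymous tables is hard: generalized join
# keys are intervals, so a naive join can only pair records whose
# intervals overlap (candidate matches, not certain ones).

def parse(gen):                      # "30-39" -> (30, 39); "35" -> (35, 35)
    lo, _, hi = gen.partition("-")
    return int(lo), int(hi or lo)

def overlap_join(left, right, key):
    out = []
    for l in left:
        llo, lhi = parse(l[key])
        for r in right:
            rlo, rhi = parse(r[key])
            if llo <= rhi and rlo <= lhi:   # intervals intersect
                out.append((l, r))
    return out

t1 = [{"age": "30-39", "zip": "130**"}, {"age": "40-49", "zip": "148**"}]
t2 = [{"age": "35", "income": "55k"}, {"age": "52", "income": "72k"}]
for l, r in overlap_join(t1, t2, "age"):
    print(l, "~", r)   # only the 30-39 / 35 pair is a candidate match
```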

Enhanced Regular Expression as a DGL for Generation of Synthetic Big Data

  • Kai, Cheng;Keisuke, Abe
    • Journal of Information Processing Systems / Vol. 19, No. 1 / pp. 1-16 / 2023
  • Synthetic data generation is generally used in performance evaluation and function tests in data-intensive applications, as well as in various areas of data analytics, such as privacy-preserving data publishing (PPDP) and statistical disclosure limitation/control. A significant amount of research has been conducted on tools and languages for data generation. However, existing tools and languages have been developed for specific purposes and are unsuitable for other domains. In this article, we propose a regular expression-based data generation language (DGL) for flexible big data generation. To achieve a general-purpose and powerful DGL, we enhanced standard regular expressions to support data domains, type/format inference, sequence and random generation, probability distributions, and resource references. To implement the proposed language efficiently, we propose caching techniques for both intermediate results and database queries. We evaluated the proposed improvements experimentally.
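
To make the idea of regular expressions as a data generation language concrete, here is a deliberately tiny generator of our own, supporting only character classes, {m,n} repetition, and top-level alternation; the paper's enhanced language additionally covers domains, type/format inference, distributions, and resource references.

```python
# Tiny, much-reduced sketch of regex-driven data generation (ours,
# not the paper's DGL). Supported: literals, [a-z]-style classes,
# {m,n} repetition after a class, and top-level | alternation.
import random

def expand_class(body):                       # "a-c0-2" -> "abc012"
    out, i = [], 0
    while i < len(body):
        if i + 2 < len(body) and body[i + 1] == "-":
            out += [chr(c) for c in range(ord(body[i]), ord(body[i + 2]) + 1)]
            i += 3
        else:
            out.append(body[i])
            i += 1
    return "".join(out)

def generate(pattern, rng=random):
    pattern = rng.choice(pattern.split("|"))  # pick one alternative
    out, i = [], 0
    while i < len(pattern):
        if pattern[i] == "[":                 # character class
            j = pattern.index("]", i)
            chars = expand_class(pattern[i + 1:j])
            i, reps = j + 1, 1
            if i < len(pattern) and pattern[i] == "{":   # {m,n}
                j = pattern.index("}", i)
                m, n = map(int, pattern[i + 1:j].split(","))
                reps, i = rng.randint(m, n), j + 1
            out += [rng.choice(chars) for _ in range(reps)]
        else:                                 # literal character
            out.append(pattern[i])
            i += 1
    return "".join(out)

# Hypothetical user-ID column: two letters, a dash, 4-6 digits.
print([generate("[A-Z]{2,2}-[0-9]{4,6}") for _ in range(3)])
```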

Privacy Disclosure and Preservation in Learning with Multi-Relational Databases

  • Guo, Hongyu;Viktor, Herna L.;Paquet, Eric
    • Journal of Computing Science and Engineering / Vol. 5, No. 3 / pp. 183-196 / 2011
  • There has recently been a surge of interest in relational database mining that aims to discover useful patterns across multiple interlinked database relations. It is crucial for a learning algorithm to explore the multiple inter-connected relations so that important attributes are not excluded when mining such relational repositories. However, from a data privacy perspective, it becomes difficult to identify all possible relationships between attributes from the different relations when considering a complex database schema. That is, seemingly harmless attributes may be linked to confidential information, leading to data leaks when building a model. Thus, we are at risk of disclosing unwanted knowledge when publishing the results of a data mining exercise. For instance, consider a financial database classification task to determine whether a loan is considered high risk. Suppose that we are aware that the database contains another confidential attribute, such as income level, that should not be divulged. One may thus choose to eliminate, or distort, the income level from the database to prevent potential privacy leakage. However, even after distortion, a learning model built against the modified database may accurately determine the income level values. It follows that the database is still unsafe and may be compromised. This paper demonstrates this potential for privacy leakage in multi-relational classification and illustrates how such potential leaks may be detected. We propose a method to generate a ranked list of subschemas that maintains the predictive performance on the class attribute while limiting the disclosure risk, and predictive accuracy, of confidential attributes. We demonstrate the effectiveness of our method on a financial database and an insurance database.
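
The ranking idea can be approximated in a few lines: for each candidate subschema, measure how well its features predict the class attribute versus the confidential attribute, and prefer subschemas that keep the former high and the latter near chance. Below is a hedged scikit-learn sketch of that scoring, our single-table simplification; the paper works over joined multi-relational data.

```python
# Our single-table simplification of the ranking idea (not the
# paper's multi-relational method): score a subschema by accuracy on
# the class target minus accuracy on the confidential target.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def leakage_score(X, y_class, y_confidential):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc_class = cross_val_score(clf, X, y_class, cv=3).mean()
    acc_conf = cross_val_score(clf, X, y_confidential, cv=3).mean()
    return acc_class - acc_conf        # higher = useful and safer

def rank_subschemas(X, subschemas, y_class, y_confidential):
    scores = {name: leakage_score(X[:, cols], y_class, y_confidential)
              for name, cols in subschemas.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Synthetic stand-in: class depends on columns 0-1, the confidential
# attribute on column 2, so the subschema without column 2 should win.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
y_conf = (X[:, 2] > 0).astype(int)
print(rank_subschemas(X, {"no-col2": [0, 1], "with-col2": [0, 1, 2]},
                      y_class, y_conf))
```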

Secure Healthcare Management: Protecting Sensitive Information from Unauthorized Users

  • Ko, Hye-Kyeong
    • International Journal of Internet, Broadcasting and Communication / Vol. 13, No. 1 / pp. 82-89 / 2021
  • Recently, the importance of security for published documents has been growing across applications. This paper deals with data publishing in which publishers must designate the sensitive information they need to protect. If a document containing such sensitive information is carelessly posted, users can apply common-sense reasoning to infer unauthorized information. Recent studies of peer-to-peer databases have examined the security of data belonging to various groups. In this paper, we propose a security framework that fundamentally blocks user inference about sensitive information that may be leaked through XML constraints and prevents sensitive information from leaking to general users. The proposed framework protects disclosed sensitive information through encryption technology. Moreover, the proposed framework preserves query-view security in the presence of three types of XML constraints. Experimental results show that, compared with the existing method, the proposed framework provides a mathematically proven way to prevent the leakage of user information through data inference.
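
As a rough illustration of the encryption step such a framework relies on (our sketch using the cryptography package, not the paper's system): encrypt the text of designated sensitive XML elements before publishing, so inference over the published document never reaches the plaintext. The element names and the document are hypothetical.

```python
# Rough sketch (not the paper's framework): encrypt the text content
# of designated sensitive elements in an XML document before release.
import xml.etree.ElementTree as ET
from cryptography.fernet import Fernet

SENSITIVE_TAGS = {"diagnosis", "ssn"}   # hypothetical sensitive elements

def protect(xml_text, key):
    f = Fernet(key)
    root = ET.fromstring(xml_text)
    for el in root.iter():
        if el.tag in SENSITIVE_TAGS and el.text:
            el.text = f.encrypt(el.text.encode()).decode()
    return ET.tostring(root, encoding="unicode")

key = Fernet.generate_key()
doc = "<patient><name>J. Doe</name><diagnosis>flu</diagnosis></patient>"
print(protect(doc, key))   # <diagnosis> now holds ciphertext only
```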