• Title/Summary/Keyword: Privacy preserving


Anonymizing Graphs Against Weight-based Attacks with Community Preservation

  • Li, Yidong; Shen, Hong
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.197-209 / 2011
  • The increasing popularity of graph data, such as social and online communities, has initiated a prolific research area in knowledge discovery and data mining. As more real-world graphs are released publicly, there is growing concern about privacy breaches for the entities involved. An adversary may reveal the identities of individuals in a published graph, using the topological structure and/or basic graph properties as background knowledge. Many previous studies addressing such identity disclosure attacks, however, concentrate on preserving privacy in simple graph data only. In this paper, we consider the identity disclosure problem in weighted graphs. The motivation is that a weighted graph can carry much more unique information than its simple version, which makes disclosure easier. We first formalize a general anonymization model to deal with weight-based attacks. Two concrete attacks are then discussed based on weight properties of a graph, namely the sum and the set of adjacent weights for each vertex. We also propose a complete solution for the weight anonymization problem that protects a graph against both attacks. In addition, we investigate the impact of the proposed methods on community detection, a very popular application in the graph mining field. Our approaches are efficient and practical, and have been validated by extensive experiments on both synthetic and real-world datasets.
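
To make the background knowledge concrete: the abstract names two weight properties an adversary can exploit, the sum and the set of adjacent weights of each vertex. The minimal Python sketch below computes both fingerprints for a small hypothetical edge list; it only illustrates why weighted graphs leak more identifying information than simple graphs and does not reproduce the paper's anonymization algorithm.

```python
from collections import defaultdict

# Weighted edges of a small, hypothetical graph: (u, v, weight).
edges = [("a", "b", 3.0), ("a", "c", 1.5), ("b", "c", 2.0), ("c", "d", 4.0)]

adj_weights = defaultdict(list)
for u, v, w in edges:
    adj_weights[u].append(w)
    adj_weights[v].append(w)

# The two weight properties the attacks rely on: the sum of adjacent
# weights and the set of adjacent weights for each vertex.
for vertex, ws in sorted(adj_weights.items()):
    print(vertex, sum(ws), sorted(set(ws)))
```

Even in this tiny graph every vertex has a distinct (sum, set) fingerprint, which is exactly the kind of uniqueness the anonymization model must suppress.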

An Efficient Provable Secure Public Auditing Scheme for Cloud Storage

  • Xu, Chunxiang; Zhang, Yuan; Yu, Yong; Zhang, Xiaojun; Wen, Junwei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4226-4241 / 2014
  • Cloud storage provides an easy, cost-effective and reliable way of data management for users without the burden of local data storage and maintenance. However, this new paradigm poses many challenges to the integrity and privacy of users' data, since users lose control over their data after outsourcing it to the cloud server. To address these problems, Worku et al. recently proposed an efficient privacy-preserving public auditing scheme for cloud storage. In this paper, however, we point out a security flaw in that scheme: an adversary who is online and active is capable of modifying the outsourced data arbitrarily and avoiding detection by exploiting the flaw. To fix this security flaw, we further propose a secure and efficient privacy-preserving public auditing scheme, which remedies the flaw in Worku et al.'s scheme while retaining all of its features. Finally, we give a formal security proof and a performance analysis, which show that the proposed scheme has significant advantages over Worku et al.'s scheme.

PEC: A Privacy-Preserving Emergency Call Scheme for Mobile Healthcare Social Networks

  • Liang, Xiaohui; Lu, Rongxing; Chen, Le; Lin, Xiaodong; Shen, Xuemin (Sherman)
    • Journal of Communications and Networks / v.13 no.2 / pp.102-112 / 2011
  • In this paper, we propose a privacy-preserving emergency call scheme, called PEC, that enables patients in life-threatening emergencies to quickly and accurately transmit emergency data to nearby helpers via mobile healthcare social networks (MHSNs). Once an emergency happens, the personal digital assistant (PDA) of the patient runs PEC to collect the emergency data, including the emergency location, the patient health record, and the patient's physiological condition. PEC then generates an emergency call carrying the emergency data and epidemically disseminates it to every user in the patient's neighborhood. If a physician happens to be nearby, PEC ensures that the time needed to notify the physician of the emergency is minimized. We show via theoretical analysis that PEC is able to provide fine-grained access control on the emergency data, where the access policy is set by the patients themselves. Moreover, PEC can withstand multiple types of attacks, such as identity theft, forgery, and collusion attacks. We also devise an effective revocation mechanism that makes the revocable PEC (rPEC) resistant to inside attacks. In addition, we demonstrate via simulation that PEC can significantly reduce the response time of emergency care in MHSNs.
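
As a rough illustration of the dissemination flow described in the abstract, the sketch below floods an emergency call through a hypothetical neighborhood graph and lets only users satisfying a patient-set policy read it. All names and the plaintext policy check are assumptions made for illustration; the actual scheme enforces access control cryptographically.

```python
from collections import deque

def disseminate(start, neighbors, can_read):
    """Epidemically forward an emergency call to every reachable user;
    only users who satisfy the patient-set access policy read the data."""
    seen, queue, readers = {start}, deque([start]), []
    while queue:
        user = queue.popleft()
        if can_read(user):
            readers.append(user)              # e.g., a nearby physician
        for nxt in neighbors.get(user, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return readers

# Hypothetical neighborhood and roles; the real scheme checks the policy
# cryptographically rather than against a plaintext role table.
neighbors = {"patient": ["u1", "u2"], "u1": ["dr_kim"], "u2": ["u3"]}
roles = {"patient": "patient", "u1": "user", "u2": "user",
         "u3": "user", "dr_kim": "physician"}
print(disseminate("patient", neighbors, lambda u: roles[u] == "physician"))
# ['dr_kim']
```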

Noisy Weighted Data Aggregation for Smart Meter Privacy System (스마트 미터 프라이버시 시스템을 위한 잡음 가중치 데이터 집계)

  • Kim, Yong-Gil; Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.49-59 / 2018
  • Smart grid systems have been deployed rapidly in many countries despite legal, business and technological problems. One important problem in deploying a smart grid system is protecting private smart meter readings from untrusted parties while leaving the major smart meter functions untouched. Privacy preservation involves several challenges, such as hardware limitations, secure cryptographic schemes and secure signal processing. In this paper, we focus particularly on smart meter reading aggregation, which is a major research field in smart meter privacy preservation. We suggest a noisy weighted aggregation scheme that guarantees differential privacy. The noisy weighted values are generated in such a way that their product is one, and they are used to veil the measurements. When a Diffie-Hellman generator is applied to obtain the noisy weighted values, the noisy values are transformed so that their sum is zero. An advantage of the Diffie-Hellman group is that it typically uses 512-bit keys. Thus, compared to the Paillier cryptosystem family, which relies on very large key sizes, a significant performance gain can be obtained.
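
A minimal sketch of the additive, sum-to-zero masking variant mentioned in the abstract, using plain random masks as a stand-in for values derived from a Diffie-Hellman generator: each meter adds its mask to its reading, and the aggregator recovers the exact total because the masks cancel. The differential-privacy noise calibration itself is not shown.

```python
import random

def make_zero_sum_masks(n, scale=1000):
    """Generate n random masks whose sum is zero (simplified stand-in
    for masks derived from a Diffie-Hellman generator)."""
    masks = [random.uniform(-scale, scale) for _ in range(n - 1)]
    masks.append(-sum(masks))
    return masks

def aggregate(readings):
    masks = make_zero_sum_masks(len(readings))
    # Each meter reports only its veiled measurement.
    veiled = [r + m for r, m in zip(readings, masks)]
    # The aggregator sums the veiled values; the masks cancel out.
    return sum(veiled)

readings = [3.2, 1.7, 4.5, 2.1]          # hypothetical meter readings (kWh)
print(round(aggregate(readings), 6))     # 11.5, the true total
```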

Privacy Preserving Sequential Patterns Mining for Network Traffic Data (사이트의 접속 정보 유출이 없는 네트워크 트래픽 데이타에 대한 순차 패턴 마이닝)

  • Kim, Seung-Woo; Park, Sang-Hyun; Won, Jung-Im
    • Journal of KIISE: Databases / v.33 no.7 / pp.741-753 / 2006
  • As the total amount of traffic data in networks has been growing at an alarming rate, many research efforts are being made to mine traffic data for useful information. However, network users' privacy can be compromised during the mining process. In this paper, we propose an efficient and practical privacy-preserving sequential pattern mining method for network traffic data. In order to discover frequent sequential patterns without violating privacy, our method uses the N-repository server model and the retention replacement technique. In addition, our method accelerates the overall mining process by maintaining meta tables so as to quickly determine whether candidate patterns have ever occurred. Various experiments with real network traffic data revealed the efficiency of the proposed method.
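
The retention replacement technique mentioned above is a randomized-response style perturbation: each element of a client's sequence is kept with some probability and otherwise replaced by a random element of the domain. A minimal sketch under assumed parameters (the retention probability and the item domain are illustrative, not taken from the paper):

```python
import random

def retention_replace(value, domain, p_retain=0.8):
    """Keep the true value with probability p_retain, otherwise
    substitute a value drawn uniformly at random from the domain."""
    if random.random() < p_retain:
        return value
    return random.choice(domain)

# Hypothetical accessed-site sequence reported by one client.
sites = ["news", "mail", "shop", "blog"]
sequence = ["mail", "shop", "mail"]
perturbed = [retention_replace(s, sites) for s in sequence]
print(perturbed)
```

Because the perturbation probabilities are known, the miner can statistically correct the observed pattern frequencies without ever seeing any individual client's true site accesses.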

A Model for Privacy Preserving Publication of Social Network Data (소셜 네트워크 데이터의 프라이버시 보호 배포를 위한 모델)

  • Sung, Min-Kyung; Chung, Yon-Dohn
    • Journal of KIISE: Databases / v.37 no.4 / pp.209-219 / 2010
  • Rapidly growing online social network services store tremendous amounts of data and analyze them for many research areas. To enhance the usefulness of this information, companies or public institutions publish their data and utilize the published data for many purposes. However, a social network containing information about individuals may cause a privacy disclosure problem. Eliminating identifiers such as names is not sufficient for privacy protection, since private information can be inferred through the structural information of the social network. In this paper, we consider a new complex attack type that uses both content and structural information, and we propose a model, $\ell$-degree diversity, for the privacy-preserving publication of social network data against such attacks. $\ell$-degree diversity is the first model to apply $\ell$-diversity to social network data publication, and experiments show that it achieves a high data preservation rate.
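
As a rough reading of the model, the sketch below checks a simplified $\ell$-degree-diversity condition: every group of vertices sharing the same degree must contain at least $\ell$ distinct sensitive labels, so that degree-based re-identification reveals little. The exact definition in the paper may differ; the graph and labels here are hypothetical.

```python
from collections import defaultdict

def satisfies_l_degree_diversity(degrees, labels, l):
    """Simplified check: each group of vertices with the same degree
    must contain at least l distinct sensitive labels."""
    groups = defaultdict(set)
    for v, d in degrees.items():
        groups[d].add(labels[v])
    return all(len(s) >= l for s in groups.values())

degrees = {"a": 2, "b": 2, "c": 3, "d": 3, "e": 3}   # hypothetical vertex degrees
labels  = {"a": "flu", "b": "cold", "c": "flu", "d": "cold", "e": "asthma"}
print(satisfies_l_degree_diversity(degrees, labels, l=2))   # True
```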

Security Analysis on 'Privacy-Preserving Contact Tracing Specifications by Apple and Google' and Improvement with Verifiable Computations ('애플과 구글의 코로나 접촉 추적 사양'에 대한 보안성 평가 및 검증 가능한 연산을 이용한 개선)

  • Kim, Byeong Yeon; Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.3 / pp.291-307 / 2021
  • There have been global efforts to prevent the further spread of COVID-19 and get society back to normal. Contact tracing is a crucial way to detect infected people. However, contact tracing raises another concern: the privacy of infected people's personal data released by governments may be violated. Therefore, Google and Apple announced a joint effort to enable the use of Bluetooth technology to help governments and health agencies reduce the spread of the virus, with user privacy and security central to the design. However, in order to provide an improved tracing application, it is necessary to identify potential security threats and investigate vulnerabilities systematically. In this paper, we provide a security analysis of the privacy-preserving COVID-19 contact tracing app using the STRIDE and LINDDUN threat models. Based on the analysis, we propose adopting a verifiable computation scheme, Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs), and a Public Key Infrastructure (PKI) to ensure both data integrity and privacy protection in a more practical way.
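
For context on what the Apple/Google design broadcasts, the sketch below derives per-interval rolling identifiers from a daily key and checks matches locally. It is a deliberate simplification: the actual specification uses HKDF- and AES-based derivations rather than plain SHA-256, and the zkSNARK/PKI extension proposed in the paper is not shown.

```python
import os, hashlib

def rolling_proximity_ids(daily_key, intervals=144):
    """Derive one pseudorandom identifier per 10-minute interval from a
    daily key, so Bluetooth broadcasts cannot be linked without the key."""
    return [hashlib.sha256(daily_key + i.to_bytes(4, "big")).digest()[:16]
            for i in range(intervals)]

# A device broadcasts its rolling identifiers and records identifiers it
# hears. If a user tests positive, their daily keys are published; other
# devices re-derive the identifiers locally and look for matches, so raw
# contact data never leaves the phone.
daily_key = os.urandom(16)
ids = rolling_proximity_ids(daily_key)
heard = {ids[42]}                                # hypothetical observed broadcast
matches = heard & set(rolling_proximity_ids(daily_key))
print(len(matches))                              # 1 -> possible exposure
```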

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.647-654 / 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and the accuracy of deep learning methods is the basis for making artificial intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive user data, such as photographs and voices, own the data indefinitely. Users cannot delete their data and cannot limit the purpose of its use. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we design a deep learning technique with privacy preservation applied, which allows multiple workers to jointly use a neural network model without sharing their input datasets in a multi-party system. We propose a method that can selectively share small subsets using an optimization algorithm based on modified stochastic gradient descent, and we confirm that it facilitates training with increased learning accuracy while protecting private information.
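
A minimal sketch of selective sharing during training, assuming the shared subsets are the largest-magnitude gradient components (the abstract does not state the exact selection rule): each worker keeps most of its update local and contributes only a small fraction to the jointly used model.

```python
import numpy as np

def select_topk_gradients(grad, share_fraction=0.1):
    """Keep only the largest-magnitude fraction of gradient components;
    the rest stay local, limiting what leaks about the private data."""
    k = max(1, int(share_fraction * grad.size))
    idx = np.argsort(np.abs(grad.ravel()))[-k:]
    shared = np.zeros_like(grad.ravel())
    shared[idx] = grad.ravel()[idx]
    return shared.reshape(grad.shape)

# Hypothetical per-worker update loop for a shared model.
rng = np.random.default_rng(0)
global_params = rng.normal(size=(4, 3))
for worker in range(3):
    local_grad = rng.normal(size=global_params.shape)   # stand-in for a real gradient
    global_params -= 0.01 * select_topk_gradients(local_grad)
print(global_params.round(3))
```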

Systematic Research on Privacy-Preserving Distributed Machine Learning (프라이버시를 보호하는 분산 기계 학습 연구 동향)

  • Min Seob Lee; Young Ah Shin; Ji Young Chun
    • The Transactions of the Korea Information Processing Society / v.13 no.2 / pp.76-90 / 2024
  • Although artificial intelligence (AI) can be utilized in various domains such as smart cities and healthcare, its use is limited by concerns about the exposure of personal and sensitive information. In response, the concept of distributed machine learning has emerged, wherein learning occurs locally before a global model is trained, mitigating the concentration of data on a central server. However, the overall learning phase, carried out collaboratively among multiple participants, still poses threats to data privacy. In this paper, we systematically analyze recent trends in privacy protection within the realm of distributed machine learning, considering factors such as the presence of a central server, the distribution environment of the training datasets, and performance variations among participants. In particular, we focus on key distributed machine learning techniques, including horizontal federated learning, vertical federated learning, and swarm learning. We examine privacy protection mechanisms within these techniques and explore potential directions for future research.
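
Horizontal federated learning, one of the techniques surveyed, is commonly realized with federated averaging: clients train locally and only model parameters travel to the server. A minimal NumPy sketch under that assumption, with synthetic client data:

```python
import numpy as np

def local_update(params, X, y, lr=0.3, epochs=5):
    """One client's local training of a linear model on its own data."""
    w = params.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_params, client_sizes):
    """Server aggregates client models weighted by local dataset size."""
    weights = np.array(client_sizes) / sum(client_sizes)
    return sum(w * p for w, p in zip(weights, client_params))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

global_w = np.zeros(2)
for _ in range(10):                            # only parameters leave a client
    local = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local, [len(y) for _, y in clients])
print(global_w.round(2))                       # approximately [ 2. -1.]
```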

Dummy Data Insert Scheme for Privacy Preserving Frequent Itemset Mining in Data Stream (데이터 스트림 빈발항목 마이닝의 프라이버시 보호를 위한 더미 데이터 삽입 기법)

  • Jung, Jay Yeol; Kim, Kee Sung; Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.3 / pp.383-393 / 2013
  • Data stream mining is a technique for obtaining useful information by analyzing data generated in real time. In data stream mining, frequent itemset mining is a method that finds frequent itemsets while the data is being transmitted, and these itemsets are used for pattern analysis and marketing in various fields. Existing frequent itemset mining techniques have a problem: when a malicious attacker sniffs the data stream, the data provider's real-time information is revealed. This problem can be solved by inserting dummy data, so that an attacker cannot distinguish the original data from the transmitted data. In this paper, we propose a method for privacy-preserving frequent itemset mining that uses the technique of inserting dummy data. In addition, the proposed method is computationally efficient because it does not require encryption or other heavy mathematical operations.
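
The abstract does not give construction details, so the sketch below shows one way dummy insertion can work, assuming the data provider and the legitimate miner share a secret seed: dummies are inserted at seed-determined positions, the miner replays the seed to strip them, and a sniffer sees only the mixed stream.

```python
import random

CATALOG = ["milk", "bread", "eggs", "beer", "chips", "soda"]

def _rng(seed, txn_id):
    # Both ends derive the same pseudorandom stream from the shared seed.
    return random.Random(f"{seed}:{txn_id}")

def send(seed, txn_id, real_items, n_dummies=2):
    """Insert dummy items at pseudorandom positions; a sniffer cannot
    tell dummies from real items in the transmitted list."""
    rng = _rng(seed, txn_id)
    out = list(real_items)
    for _ in range(n_dummies):
        pos = rng.randint(0, len(out))
        out.insert(pos, rng.choice(CATALOG))
    return out

def receive(seed, txn_id, transmitted, n_dummies=2):
    """The legitimate miner replays the sender's pseudorandom choices
    and removes the dummies in reverse insertion order."""
    rng = _rng(seed, txn_id)
    n_real = len(transmitted) - n_dummies
    positions = []
    for i in range(n_real, n_real + n_dummies):
        positions.append(rng.randint(0, i))   # same position draws as send()
        rng.choice(CATALOG)                   # keep the streams aligned
    out = list(transmitted)
    for pos in reversed(positions):
        out.pop(pos)
    return out

wire = send(42, 7, ["milk", "bread"])
print(wire)                      # real and dummy items, mixed
print(receive(42, 7, wire))      # ['milk', 'bread']
```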