• Title/Summary/Keyword: privacy data graph


An Uncertain Graph Method Based on Node Random Response to Preserve Link Privacy of Social Networks

  • Jun Yan;Jiawang Chen;Yihui Zhou;Zhenqiang Wu;Laifeng Lu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.147-169 / 2024
  • With the rapid development of network technology, social networks have become pervasive in our lives. However, because social networks retain a large amount of users' sensitive information, the openness of this information makes social networks vulnerable to malicious attackers. To preserve the link privacy of individuals in social networks, an uncertain graph method based on node random response is devised, which satisfies differential privacy while maintaining the expected data utility. In this method, to achieve privacy preservation, random response is applied to nodes to modify edges of the original graph, and node differential privacy is introduced to inject uncertainty into the edges. Simultaneously, to preserve data utility, a divide-and-conquer strategy is adopted to decompose the original graph into many sub-graphs, each of which is handled separately. In particular, only some larger sub-graphs selected by the exponential mechanism are modified, which further reduces the perturbation of the original graph. The presented method is proven to satisfy differential privacy. Experimental results demonstrate that this uncertain graph method can effectively provide a strict privacy guarantee while maintaining data utility.
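
The randomized-response idea at the core of this abstract can be illustrated with a minimal, per-edge sketch: each potential edge bit is reported truthfully with probability e^ε/(1+e^ε) and flipped otherwise. This is only a toy illustration of randomized response on a graph; the paper's node-level mechanism, sub-graph decomposition, and exponential-mechanism selection are not reproduced, and all names below are mine.

```python
import math
import random

def randomized_response_graph(adj, epsilon):
    """Flip each potential edge bit with probability 1/(1 + e^epsilon).

    adj: set of frozenset({u, v}) edges; nodes are 0..n-1.
    Returns a perturbed edge set for this simple per-edge mechanism
    (the paper's node-differential-privacy scheme differs).
    """
    n = max(max(e) for e in adj) + 1
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))  # prob. of reporting the true bit
    perturbed = set()
    for u in range(n):
        for v in range(u + 1, n):
            edge = frozenset((u, v))
            bit = edge in adj
            if random.random() >= p_keep:  # flip the bit with prob. 1 - p_keep
                bit = not bit
            if bit:
                perturbed.add(edge)
    return perturbed

# Toy usage: a 4-node path graph
edges = {frozenset((0, 1)), frozenset((1, 2)), frozenset((2, 3))}
print(randomized_response_graph(edges, epsilon=2.0))
```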

ShareSafe: An Improved Version of SecGraph

  • Tang, Kaiyu;Han, Meng;Gu, Qinchen;Zhou, Anni;Beyah, Raheem;Ji, Shouling
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5731-5754 / 2019
  • In this paper, we redesign, implement, and evaluate ShareSafe (based on SecGraph), an open-source platform for secure graph data sharing and publishing. Within ShareSafe, we propose a De-anonymization Quantification Module and a Recommendation Module. We also model attackers' background knowledge and evaluate the relation between graph data privacy and graph structure. To the best of our knowledge, ShareSafe is the first platform that enables users to perform data perturbation, utility evaluation, de-anonymization (De-A) evaluation, and privacy quantification. Leveraging ShareSafe, we conduct a more comprehensive and advanced utility and privacy evaluation. The results demonstrate that (1) the risk of privacy leakage of an anonymized graph increases with the attacker's background knowledge; (2) for a successful de-anonymization attack, the seed mapping, even when relatively small, plays a much more important role than the auxiliary graph; (3) the structure of the graph has a fundamental and significant effect on its utility and privacy; and (4) there is no universally optimal anonymization/de-anonymization algorithm, as performance varies across environments.
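
The abstract's finding that the seed mapping matters more than the auxiliary graph can be made concrete with a seed-and-expand matcher in the style of classic structural de-anonymization. This is a generic, hedged sketch of that family of attacks, not ShareSafe's API; the function and parameter names are mine.

```python
def propagate_mapping(g1, g2, seeds, min_common=2):
    """Greedy seed-and-expand matcher between two graphs given as
    {node: set(neighbors)} adjacency dicts. `seeds` maps g1 nodes to g2 nodes.
    An unmapped g1 node is matched to the g2 node that shares the most
    already-mapped neighbors (at least `min_common`)."""
    mapping = dict(seeds)
    changed = True
    while changed:
        changed = False
        for u in g1:
            if u in mapping:
                continue
            # score g2 candidates by how many mapped neighbors of u land next to them
            scores = {}
            for nb in g1[u]:
                if nb in mapping:
                    for cand in g2.get(mapping[nb], set()):
                        if cand not in mapping.values():
                            scores[cand] = scores.get(cand, 0) + 1
            if scores:
                best, score = max(scores.items(), key=lambda kv: kv[1])
                if score >= min_common:
                    mapping[u] = best
                    changed = True
    return mapping
```

With more seeds, propagation has more anchored neighbors to score against, which is consistent with the paper's observation about the role of the seed mapping.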

Anonymizing Graphs Against Weight-based Attacks with Community Preservation

  • Li, Yidong;Shen, Hong
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.197-209 / 2011
  • The increasing popularity of graph data, such as social and online communities, has initiated a prolific research area in knowledge discovery and data mining. As more real-world graphs are released publicly, there is growing concern about privacy breaches for the entities involved. An adversary may reveal the identities of individuals in a published graph, using the topological structure and/or basic graph properties as background knowledge. Many previous studies addressing such identity-disclosure attacks, however, concentrate on preserving privacy in simple graph data only. In this paper, we consider the identity disclosure problem in weighted graphs. The motivation is that a weighted graph can carry much more identifying information than its simple version, which makes disclosure easier. We first formalize a general anonymization model to deal with weight-based attacks. Then two concrete attacks are discussed, based on the weight properties of a graph: the sum and the set of adjacent weights for each vertex. We also propose a complete solution for the weight anonymization problem to protect a graph from both attacks. In addition, we investigate the impact of the proposed methods on community detection, a very popular application in graph mining. Our approaches are efficient and practical, and have been validated by extensive experiments on both synthetic and real-world datasets.
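
The two pieces of background knowledge the abstract names, the sum and the set of adjacent weights per vertex, can be computed directly as per-vertex signatures; vertices whose signatures are unique are the ones a weight-based adversary could re-identify. A minimal sketch, with names of my own choosing:

```python
from collections import defaultdict

def weight_signatures(weighted_edges):
    """weighted_edges: iterable of (u, v, w).
    Returns {vertex: (weight_sum, sorted tuple of adjacent weights)}."""
    sums = defaultdict(float)
    sets = defaultdict(list)
    for u, v, w in weighted_edges:
        for x in (u, v):
            sums[x] += w
            sets[x].append(w)
    return {v: (sums[v], tuple(sorted(sets[v]))) for v in sums}

def unique_vertices(signatures):
    """Vertices whose signature occurs exactly once, i.e. re-identifiable
    under this background knowledge."""
    counts = defaultdict(int)
    for sig in signatures.values():
        counts[sig] += 1
    return [v for v, sig in signatures.items() if counts[sig] == 1]

edges = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 1.0), ("c", "d", 3.0)]
print(unique_vertices(weight_signatures(edges)))
```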

A Privacy-aware Graph-based Access Control System for the Healthcare Domain

  • Tian, Yuan;Song, Biao;Hassan, M.Mehedi.;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.10 / pp.2708-2730 / 2012
  • The growing concern for the protection of personal information has made it critical to implement effective technologies for privacy and data management. Observing the limitations of existing approaches, we found an urgent need for a flexible, privacy-aware system that can meet privacy-preservation needs at both the role level and the personal level. We propose a conceptual system that addresses these two requirements: a graph-based access control model to safeguard patient privacy. We present a case study from the healthcare field in this paper. While our model was tested in healthcare, it is generic and can be adapted for use in other fields. Proof-of-concept demos are also provided to evaluate the efficacy of our system. Finally, based on hospital scenarios, we present experimental results that demonstrate the performance of our system and compare them to existing privacy-aware systems. As a result, we ensure a high quality of medical care service while preserving patient privacy.
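
The abstract describes the access-control model only conceptually. As a rough illustration of what a graph-based check can look like, here is a hedged sketch in which access is granted when a permitted path connects a requester node to a data node; all node names, labels, and the reachability rule are hypothetical and not taken from the paper.

```python
from collections import deque

def has_access(graph, requester, resource, allowed_labels):
    """graph: {node: [(neighbor, edge_label), ...]}.
    Grants access if `resource` is reachable from `requester` using only
    edges whose label is in `allowed_labels` (e.g. labels derived from
    roles and patient consent)."""
    seen, queue = {requester}, deque([requester])
    while queue:
        node = queue.popleft()
        if node == resource:
            return True
        for nxt, label in graph.get(node, []):
            if label in allowed_labels and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

g = {
    "dr_kim": [("cardiology", "member_of")],
    "cardiology": [("patient_42_record", "treating_team")],
}
print(has_access(g, "dr_kim", "patient_42_record", {"member_of", "treating_team"}))
```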

Privacy measurement method using a graph structure on online social networks

  • Li, XueFeng;Zhao, Chensu;Tian, Keke
    • ETRI Journal / v.43 no.5 / pp.812-824 / 2021
  • Recently, with the increase in Internet usage, the number of users of online social networks (OSNs) has grown, and consequently privacy leakage has become more serious. However, few studies have investigated the gap between privacy attitudes and actual behaviors. In particular, users' desire to change their privacy status is not supported by their privacy literacy. Presenting an accurate measurement of users' privacy status can cultivate their privacy literacy. However, the highly interactive nature of interpersonal communication on OSNs means that privacy must be viewed as a communal issue. Because a large number of redundant users on social networks are unrelated to a given user's privacy, existing algorithms are no longer applicable. To solve this problem, we propose a structural similarity measurement method suited to the characteristics of social networks. The proposed method excludes redundant users and combines attribute information to measure users' privacy status. Using this approach, users can intuitively recognize their privacy status on OSNs. Experiments on real data show that our method can effectively and accurately help users improve their privacy disclosure behavior.
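
One plausible reading of "structural similarity that excludes redundant users and combines attribute information" is a neighbourhood-overlap score restricted to contacts above an interaction threshold, blended with a simple attribute-disclosure term. The sketch below is that interpretation only; the thresholds, weights, and the assumption of 10 tracked attributes are mine, not the paper's.

```python
def privacy_score(user, friends, interactions, disclosed_attrs,
                  min_interactions=3, alpha=0.5):
    """Toy privacy-exposure score in [0, 1] for `user`.

    friends:         {user: set(friends)}
    interactions:    {(u, v): count}  -- used to drop 'redundant' contacts
    disclosed_attrs: {user: set(publicly visible attribute names)}
    """
    # keep only contacts the user actually interacts with
    core = {f for f in friends.get(user, set())
            if interactions.get((user, f), 0) >= min_interactions}
    if not core:
        structural = 0.0
    else:
        # average neighbourhood overlap with core contacts (structural exposure)
        overlaps = []
        for f in core:
            a, b = friends.get(user, set()), friends.get(f, set())
            union = a | b
            overlaps.append(len(a & b) / len(union) if union else 0.0)
        structural = sum(overlaps) / len(overlaps)
    attribute = len(disclosed_attrs.get(user, set())) / 10.0  # assume 10 tracked attributes
    return alpha * structural + (1 - alpha) * min(attribute, 1.0)
```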

Privacy-assured Boolean Adjacent Vertex Search over Encrypted Graph Data in Cloud Computing

  • Zhu, Hong;Wu, Bin;Xie, Meiyi;Cui, Zongmin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.10 / pp.5171-5189 / 2016
  • With the popularity of cloud computing, many data owners outsource their graph data to the cloud for cost savings. The cloud server is not fully trusted and always wants to learn the owners' content, so the graph data have to be encrypted before being outsourced to the cloud. The adjacent vertex search is a very common operation, and many other operations can be built on top of it. In particular, a boolean adjacent vertex search is an important basic operation in which a query user obtains boolean search results. Because the graph data are encrypted on the cloud server, a boolean adjacent vertex search is a quite difficult task. In this paper, we propose a solution for boolean adjacent vertex search over encrypted graph data in cloud computing (BASG), which maintains the privacy of query tokens and search results. We use the Gram-Schmidt algorithm to achieve boolean expression search. We formally analyze the security of our scheme, and the query user can easily obtain the boolean search results with this scheme. Experimental results on a real graph data set demonstrate the efficiency of our scheme.
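
The encrypted index construction (including the Gram-Schmidt step) is the paper's contribution and is not reproduced here. As a plaintext reference point only, a boolean adjacent-vertex query can be sketched as evaluating AND/OR/NOT conditions over adjacency sets; all names below are assumptions of this sketch.

```python
def adjacent(graph, v):
    """graph: {vertex: set(neighbors)} -> adjacency set of v."""
    return graph.get(v, set())

def boolean_adjacent_query(graph, candidate, must=(), any_of=(), must_not=()):
    """True iff `candidate` is adjacent to all of `must`, at least one of
    `any_of` (if given), and none of `must_not` -- the kind of boolean
    result a query user receives."""
    nbrs = adjacent(graph, candidate)
    return (all(v in nbrs for v in must)
            and (not any_of or any(v in nbrs for v in any_of))
            and all(v not in nbrs for v in must_not))

g = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
print(boolean_adjacent_query(g, "a", must=["b"], any_of=["c", "d"], must_not=["d"]))
```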

Workspace Visibility Graph Analysis (VGA) for Concentration Privacy and Group Relations in the Open-Plan Office Environment

  • Hong, Yeon-Koo;Yoo, Uoo-Sang
    • Architectural research / v.12 no.1 / pp.9-14 / 2010
  • The present study explored the applicability of Visibility Graph Analysis (VGA) techniques to workplace design research. Six VGA measures in Depthmap, encompassing visual connectivity, three types of visual integration, mean depth, and visual entropy, were employed to analyze individual privacy for task concentration and group relationship behavior in the open-plan office environment. Data comprised 136 workers in 6 open-plan offices filled with low-paneled (1.2-1.5 m) cubicle workspaces. For the statistical analysis, Spearman's rho correlations and t-tests were applied to the spatial and behavioral measures. The results showed that workspace VGA measures have the potential to provide useful information for explaining workers' concentration privacy and, to a limited extent, informal relationships with team members. Visual entropy values in particular offer reliable information for predicting various aspects of office workers' privacy behavior, while visual integration can be used to account for workers' sense of trust in group relations. The study also discusses the limitations of VGA applications in the workplace context.
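
Two of the VGA measures mentioned, mean depth and visual entropy, can be computed from a visibility graph with a plain breadth-first search. The sketch below assumes the visibility graph is already given as an adjacency dict; it is not Depthmap's implementation, and the exact entropy definition used here (Shannon entropy over depth levels) is one common interpretation.

```python
import math
from collections import deque

def depths_from(graph, start):
    """BFS depths (visual steps) from `start` in a visibility graph
    given as {node: set(visible nodes)}."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, set()):
            if nxt not in depth:
                depth[nxt] = depth[node] + 1
                queue.append(nxt)
    return depth

def mean_depth(graph, start):
    d = depths_from(graph, start)
    others = [v for k, v in d.items() if k != start]
    return sum(others) / len(others) if others else 0.0

def visual_entropy(graph, start):
    """Shannon entropy of the distribution of reachable nodes over depth levels."""
    d = depths_from(graph, start)
    counts = {}
    for k, v in d.items():
        if k != start:
            counts[v] = counts.get(v, 0) + 1
    total = sum(counts.values())
    if not total:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```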

Similarity measurement based on Min-Hash for Preserving Privacy

  • Cha, Hyun-Jong;Yang, Ho-Kyung;Song, You-Jin
    • International Journal of Advanced Culture Technology / v.10 no.2 / pp.240-245 / 2022
  • Because of the importance of information, encryption algorithms are heavily used. Raw data are encrypted and secure, but problems arise when the key for decryption is exposed. In particular, large-scale Internet sites such as Facebook and Amazon suffer serious damage when user data are exposed. Recently, research into a new fourth-generation encryption technology that can protect user-related data without the key required for encryption has been attracting attention, as has data clustering technology that operates on encrypted data. In this paper, we try to reduce key exposure by using homomorphic encryption, and we aim to maintain privacy through similarity measurement. However, exhaustive similarity measurement becomes time-consuming and expensive as the size and scope of the data increase. Min-Hash has therefore been studied as a way to efficiently estimate the similarity between two sets from their signatures, and it is widely used for plagiarism detection, graph and image analysis, and genetic analysis. Accordingly, this paper preserves privacy using homomorphic encryption and presents a model for efficient similarity measurement using Min-Hash.
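
Min-Hash itself is a standard technique: the fraction of matching positions in two signatures estimates the Jaccard similarity of the underlying sets. A minimal sketch follows, without the homomorphic-encryption layer the paper adds; the salted-SHA-1 hash family and signature length are implementation choices of this sketch.

```python
import hashlib

def minhash_signature(items, num_hashes=64):
    """MinHash signature of a set of strings: for each of `num_hashes`
    salted hash functions, keep the minimum hash value over the set."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(
            int(hashlib.sha1(f"{i}:{x}".encode()).hexdigest(), 16)
            for x in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching positions approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"apple", "banana", "cherry", "date"}
b = {"banana", "cherry", "date", "fig"}
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # true Jaccard is 0.6
```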

Privacy Protection Method for Sensitive Weighted Edges in Social Networks

  • Gong, Weihua;Jin, Rong;Li, Yanjun;Yang, Lianghuai;Mei, Jianping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.2 / pp.540-557 / 2021
  • The privacy vulnerability of social networks is one of the major concerns for social science research and business analysis. Most existing studies, which mainly focus on unweighted network graphs, have designed various privacy models similar to k-anonymity to prevent disclosure of vertex attributes or relationships, but they may suffer from serious information loss and significant modification of key structural properties of the network. Furthermore, privacy protection for important sensitive edges in weighted social networks has received little attention. To address this problem, this paper proposes a privacy-preserving method to protect sensitive weighted edges. First, sensitive edges are distinguished from other weighted edges according to edge betweenness centrality, which evaluates the importance of entities in the social network. Then, perturbation operations are used to preserve the privacy of the weighted social network by adding pseudo-edges or modifying specific edge weights, so that the bottleneck problem of information flow in key areas of the social network can be resolved. Experimental results show that the proposed method can not only effectively protect the sensitive edges with lower computational cost but also maintain the stability of the network structure. Furthermore, the capability of defending against malicious attacks on important sensitive edges is greatly improved.
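
The pipeline the abstract outlines (rank edges by betweenness centrality, treat the top-ranked ones as sensitive, then perturb them with pseudo-edges or modified weights) can be sketched as follows. The use of networkx, the top-fraction threshold, and the noise model are assumptions of this sketch, not the paper's exact scheme.

```python
import random
import networkx as nx  # assumed dependency; the paper does not prescribe a library

def perturb_sensitive_edges(G, top_fraction=0.1, noise=0.2):
    """Rank weighted edges by betweenness centrality, treat the top fraction
    as sensitive, perturb their weights, and add one random pseudo-edge per
    sensitive edge. A rough illustration of the described perturbation step."""
    H = G.copy()
    ebc = nx.edge_betweenness_centrality(H, weight="weight")
    ranked = sorted(ebc, key=ebc.get, reverse=True)
    sensitive = ranked[: max(1, int(len(ranked) * top_fraction))]
    nodes = list(H.nodes)
    for (u, v) in sensitive:
        w = H[u][v].get("weight", 1.0)
        H[u][v]["weight"] = max(0.1, w * (1 + random.uniform(-noise, noise)))
        a, b = random.sample(nodes, 2)
        if not H.has_edge(a, b):
            H.add_edge(a, b, weight=w)  # pseudo-edge with a plausible weight
    return H
```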

Exploiting Friend's Username to De-anonymize Users across Heterogeneous Social Networking Sites (이종 소셜 네트워크 상에서 친구계정의 이름을 이용한 사용자 식별 기법)

  • Kim, Dongkyu;Park, Seog
    • Journal of KIISE / v.41 no.12 / pp.1110-1116 / 2014
  • Nowadays, social networking sites (SNSs) such as Twitter, LinkedIn, and Tumblr are coming to the forefront owing to growth in their number of users. While users voluntarily provide their information on SNSs, privacy leakage resulting from SNS use is becoming a problem owing to the evolution of large-scale data processing techniques and the rising awareness of privacy. To solve this problem, studies on protecting privacy on SNSs based on graph theory and machine learning have been conducted. However, examples of privacy leakage arising with the advent of new SNSs are consistently being uncovered. In this paper, we propose a technique that enables a user to detect privacy leakage beforehand in cases where the service provider or a third-party application developer maliciously threatens the SNS user's privacy.
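
The idea suggested by the title, linking accounts across heterogeneous SNSs through the usernames of their friends, can be illustrated by scoring candidate account pairs by the overlap of their normalized friend-name sets. This is a hedged sketch of that general idea; the normalization rule, threshold, and function names are mine, not the paper's notation.

```python
def normalize(name):
    """Crude username normalization across sites (lowercase, keep alphanumerics)."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def match_accounts(friends_a, friends_b, threshold=0.3):
    """friends_a / friends_b: {account: set(friend usernames)} on two SNSs.
    Returns (account_a, account_b, score) tuples whose normalized friend-name
    sets have Jaccard similarity at or above `threshold`."""
    matches = []
    for ua, fa in friends_a.items():
        na = {normalize(f) for f in fa}
        for ub, fb in friends_b.items():
            nb = {normalize(f) for f in fb}
            union = na | nb
            if union:
                score = len(na & nb) / len(union)
                if score >= threshold:
                    matches.append((ua, ub, score))
    return matches
```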