• Title/Summary/Keyword: privacy protection model

Search results: 181

A Framework for measuring query privacy in Location-based Service

  • Zhang, Xuejun; Gui, Xiaolin; Tian, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.5 / pp.1717-1732 / 2015
  • The widespread use of location-based services (LBSs), which allow untrusted service providers to collect large numbers of user request records, leads to serious privacy concerns. In response to these issues, a number of LBS privacy protection mechanisms (LPPMs) have recently been proposed. However, the evaluation of these LPPMs usually disregards the background knowledge that the adversary may possess about users' contextual information, which runs the risk of wrongly evaluating users' query privacy. In this paper, we address these issues by proposing a generic formal quantification framework, which comprehensively considers the various elements that influence the query privacy of users and explicitly states the knowledge that an adversary might have in the context of query privacy. Moreover, a way to model the adversary's attack on query privacy is proposed, which allows us to show the insufficiency of existing query privacy metrics, e.g., k-anonymity. Thus we propose two new metrics: entropy anonymity and mutual information anonymity. Lastly, we run a set of experiments on datasets generated by the network-based generator of moving objects proposed by Thomas Brinkhoff. The results show the effectiveness and efficiency of our framework in measuring LPPMs.
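
As a rough illustration of the two metrics named in this abstract, the sketch below computes entropy anonymity as the Shannon entropy of an adversary's posterior over candidate users, and mutual information anonymity as I(U;Q) from a joint user-query distribution. The example distributions and variable names are illustrative assumptions, not values or notation taken from the paper.

```python
import numpy as np

def entropy_anonymity(posterior):
    """Shannon entropy (bits) of the adversary's posterior over candidate
    users for one observed query; higher entropy means more uncertainty,
    i.e., stronger query privacy."""
    p = np.asarray(posterior, dtype=float)
    p = p[p > 0]  # ignore zero-probability users
    return float(-(p * np.log2(p)).sum())

def mutual_information_anonymity(joint):
    """Mutual information I(U; Q) between user identity U and observed
    query Q, computed from their joint distribution (2-D array). Lower
    mutual information means the query leaks less about identity."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    pu = joint.sum(axis=1, keepdims=True)   # marginal over users
    pq = joint.sum(axis=0, keepdims=True)   # marginal over queries
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pu @ pq)[mask])).sum())

# Example: 4 candidate users inside a k-anonymous cloaking region.
# Under plain k-anonymity all 4 look equally likely (2 bits of entropy),
# but adversary background knowledge can skew the posterior and shrink it.
print(entropy_anonymity([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
print(entropy_anonymity([0.85, 0.05, 0.05, 0.05]))  # ~0.85 bits

joint = np.array([[0.20, 0.05],
                  [0.05, 0.20],
                  [0.20, 0.05],
                  [0.05, 0.20]])
print(mutual_information_anonymity(joint))
```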

Machine Learning-Based Reversible Chaotic Masking Method for User Privacy Protection in CCTV Environment

  • Jimin Ha; Jungho Kang; Jong Hyuk Park
    • Journal of Information Processing Systems / v.19 no.6 / pp.767-777 / 2023
  • In modern society, user privacy is emerging as an important issue as closed-circuit television (CCTV) systems increase rapidly in various public and private spaces. If CCTV cameras monitor sensitive areas or personal spaces, they can infringe on personal privacy. Someone's behavior patterns, sensitive information, residence, etc. can be exposed, and if the image data collected from CCTV are not properly protected, there is a risk of data leakage by hackers or illegal accessors. This paper presents an innovative approach, a machine learning-based reversible chaotic masking method for user privacy protection in the CCTV environment. The proposed method was developed to protect an individual's identity within CCTV images while maintaining the usefulness of the data for surveillance and analysis purposes. The method uses a two-step process for user privacy protection. First, machine learning models are trained to accurately detect and locate human subjects within the CCTV frame. This model is designed to identify individuals accurately and robustly by leveraging state-of-the-art object detection techniques. When an individual is detected, reversible chaotic masking is applied. This masking technique uses chaos maps to create complex patterns that hide individual facial features and identifiable characteristics. Above all, the generated mask can be reversibly applied and removed, allowing authorized users to access the original unmasked image.
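
The reversibility claim rests on the mask being an invertible transform keyed by a secret. A minimal sketch of that idea follows, assuming a logistic chaos map as the keystream source and XOR as the masking operation; the paper's actual detector, map parameters, and region handling are not reproduced here.

```python
import numpy as np

def logistic_keystream(key, length, r=3.99):
    """Generate a byte keystream from the logistic map x -> r*x*(1-x),
    seeded by a secret key in (0, 1); r close to 4 keeps the map chaotic."""
    x = key
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def mask_region(frame, box, key):
    """XOR a chaotic keystream over the region `box` = (top, left, h, w).
    Because XOR is an involution, calling this again with the same key
    restores the original pixels exactly (reversible masking)."""
    top, left, h, w = box
    region = frame[top:top + h, left:left + w]
    ks = logistic_keystream(key, region.size).reshape(region.shape)
    frame = frame.copy()
    frame[top:top + h, left:left + w] = region ^ ks
    return frame

# Toy 8-bit grayscale frame; `box` stands in for an object detector's output.
frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
box = (10, 20, 16, 16)        # (top, left, height, width) of a "face"
secret = 0.6180339887         # key shared only with authorized viewers

masked = mask_region(frame, box, secret)       # hide the region
restored = mask_region(masked, box, secret)    # authorized unmasking
assert np.array_equal(restored, frame)
```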

Applied Method of Privacy Information Protection Mechanism in e-business environments (e-Business 환경 내 개인정보 보호 메커니즘적용 방안)

  • Hong, Seng-Phil; Jang, Hyun-Me
    • Journal of Internet Computing and Services / v.9 no.2 / pp.51-59 / 2008
  • As innovative IT is developed and applied in the e-business environment, firms are recognizing that customer information provides a core competitive edge. However, sensitive privacy information is abused and misused, forcing firms to adopt appropriate measures to protect privacy information and to implement security techniques that safeguard corporate resources. This research analyzes the threat of privacy information exposure in the e-business environment, suggests the IPM-Trusted Privacy Policy Model (TPM) to resolve the related problems, and examines four key mechanisms (CAM, SPM, RBAC Controller, OCM) focused on privacy protection. The model is analyzed and designed to enable access management and control by assigning user access rights based on privacy information policies and procedures in the e-business environment. Further, this research suggests practical use areas by applying TPM to CRM in the e-business environment.
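
Among the four mechanisms, the RBAC Controller assigns access rights to privacy data by role. Below is a minimal sketch of role-based, deny-by-default access to privacy-classified customer fields; the roles, permissions, and field names are invented for illustration and are not the paper's CAM/SPM/OCM definitions.

```python
# Minimal RBAC sketch: roles map to permissions on privacy-classified
# customer fields. All names here are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "crm_agent":       {("customer.name", "read"), ("customer.orders", "read")},
    "privacy_officer": {("customer.name", "read"), ("customer.ssn", "read"),
                        ("customer.ssn", "mask")},
    "marketing":       {("customer.orders", "read")},
}

USER_ROLES = {
    "alice": {"crm_agent"},
    "bob":   {"marketing"},
}

def is_permitted(user, resource, action):
    """Grant access only if some role assigned to the user carries the
    (resource, action) permission; deny by default."""
    return any((resource, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_permitted("alice", "customer.name", "read"))   # True
print(is_permitted("bob", "customer.ssn", "read"))      # False (denied)
```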


Collaborative Secure Decision Tree Training for Heart Disease Diagnosis in Internet of Medical Things

  • Gang Cheng; Hanlin Zhang; Jie Lin; Fanyu Kong; Leyun Yu
    • Journal of Information Processing Systems / v.20 no.4 / pp.514-523 / 2024
  • In the Internet of Medical Things, due to the sensitivity of medical information, data typically need to be retained locally. A model trained on heart disease data can predict patients' physical health status effectively, thereby providing reliable disease information. It is crucial to make full use of multiple data sources in Internet of Medical Things applications to improve model accuracy. As network communication speeds and computational capabilities continue to evolve, approaches in which parties store data locally and use privacy protection technology to exchange intermediate data during communication to construct models are receiving increasing attention. This shift toward secure and efficient data collaboration is expected to revolutionize computer modeling in the healthcare field by ensuring accuracy and privacy in the analysis of critical medical information. In this paper, we train and test a multiparty decision tree model for the Internet of Medical Things on a heart disease dataset to address the challenges associated with developing a practical and usable model while ensuring the protection of heart disease data. Experimental results demonstrate that the accuracy of our privacy protection method is as high as 93.24%, a difference of only 0.3% compared with a conventional plaintext algorithm.
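
The abstract does not spell out the cryptographic protocol, but one common way a multiparty decision tree can aggregate split statistics without exposing raw records is additive secret sharing of label counts. The sketch below illustrates only that general idea, with a made-up split threshold and toy hospital data; it is not the paper's protocol.

```python
import random

PRIME = 2_147_483_647  # field modulus for additive secret sharing

def share(value, n_parties):
    """Split an integer count into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each hospital holds (feature_value, label) pairs locally and only
# publishes shares of its count of positive labels for a candidate split
# "cholesterol > 240". Individual records never leave the hospital.
local_data = {
    "hospital_A": [(250, 1), (180, 0), (300, 1)],
    "hospital_B": [(260, 1), (200, 0)],
    "hospital_C": [(245, 0), (310, 1), (190, 0), (270, 1)],
}

n = len(local_data)
share_matrix = []  # share_matrix[i][j] = party i's share destined for party j
for records in local_data.values():
    positives = sum(label for value, label in records if value > 240)
    share_matrix.append(share(positives, n))

# Each party sums the shares it received; combining the partial sums reveals
# only the aggregate count used for split selection, never any local count.
partial_sums = [sum(col) % PRIME for col in zip(*share_matrix)]
print(reconstruct(partial_sums))  # 5 positive cases above the threshold
```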

A Study on the Factors Affecting the User Resistance in Social Network Service (Social Network Service에서의 사용자 저항에 영향을 미치는 요인에 관한 연구)

  • Park, Eunkyung; Choi, Jeongil; Yeon, Jiyoung
    • Journal of Korean Society for Quality Management / v.42 no.3 / pp.387-406 / 2014
  • Purpose: The widespread use of social network services (SNS) has caused users concern about the disclosure of their privacy or personal information. The purpose of this study is to analyze the factors of privacy concern and self-presentation that affect user resistance in the use of social network services. Methods: This study verifies the factors affecting user resistance in SNS. The research model suggested in this study is tested via a survey of 260 SNS users. SPSS and SmartPLS were used to test the suggested hypotheses. Results: This study shows that privacy experience, privacy awareness, self-esteem, and social desirability significantly influence perceived risk, and that privacy awareness, self-esteem, self-efficacy, and perceived risk significantly influence perceived trust. It also verifies that perceived risk and perceived trust positively affect user resistance. Conclusion: This paper suggests that the high privacy awareness of SNS users encourages SNS companies to consider privacy protection mechanisms for eliminating the various factors that raise perceived risk. This study also shows that the privacy calculus model applies to understanding the mechanism of SNS user resistance.

How do multilevel privacy controls affect utility-privacy trade-offs when used in mobile applications?

  • Kim, Seung-Hyun; Ko, In-Young
    • ETRI Journal / v.40 no.6 / pp.813-823 / 2018
  • In existing mobile computing environments, users need to choose between their privacy and the services that they can receive from an application. However, existing mobile platforms do not allow users to perform such trade-offs in a fine-grained manner. In this study, we investigate whether users can effectively make utility-privacy trade-offs when they are provided with a multilevel privacy control method that allows them to recognize the different quality of service that they will receive from an application by limiting the disclosure of their private information at multiple levels. We designed a research model to observe users' utility-privacy trade-offs in accordance with the privacy control methods and other factors such as the trustworthiness of an application, the quality level of private information, and users' privacy preferences. We conducted a user survey with 516 participants and found that, compared with the existing binary privacy controls, both the service utility and the privacy protection levels were significantly increased when users used the multilevel privacy control method.
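
The study evaluates the control method through a user survey rather than a specific implementation, but the underlying idea, disclosing the same attribute at several granularities so utility can be traded against privacy, can be sketched as follows. The levels and coarsening rules are invented for illustration.

```python
# Illustrative multilevel disclosure of a location attribute: each level
# trades service utility for privacy. The levels and coarsening rules are
# assumptions for illustration; they are not taken from the study.

def disclose_location(lat, lon, level):
    """Level 0 = no disclosure ... level 3 = full precision."""
    if level == 0:
        return None                            # app receives nothing
    if level == 1:
        return (round(lat), round(lon))        # roughly city-scale cell
    if level == 2:
        return (round(lat, 2), round(lon, 2))  # roughly neighborhood-scale cell
    return (lat, lon)                          # exact coordinates

exact = (37.56635, 126.97797)
for level in range(4):
    print(level, disclose_location(*exact, level))
# A navigation app may need level 3 while a weather app only needs level 1,
# so the user can pick the lowest level that still yields acceptable utility.
```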

Implementation of Privacy Protection Policy Language and Module For Social Network Services (소셜 네트워크 서비스를 위한 프라이버시 보호 정책언어 및 프라이버시 보호 모듈 구현)

  • Kim, Ji-Hye; Lee, Hyung-Hyo
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.1 / pp.53-63 / 2011
  • An SNS (social network service) enables people to form a social network online as in the real world. With the rising popularity of the service, the side effects of SNSs have become an issue. We therefore propose and implement a policy-based privacy protection module and an access control policy language for ensuring the right of control over personal information and for sharing data among SNSs. The policy language for protecting privacy is based on an attribute-based access control model, which grants access to personal information based on a user's attributes. The proposed policy language and privacy protection module give the right of control over personal information to its owner, and they can be adopted in other application domains in which privacy protection is needed, as well as for secure data sharing among SNSs.
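
The policy language is attribute-based, so access decisions depend on the requester's attributes rather than on identities alone. Below is a minimal sketch of that style of evaluation for SNS profile items; the attribute names and rule encoding are invented for illustration and do not reproduce the paper's policy language syntax.

```python
# Minimal attribute-based access control (ABAC) sketch for an SNS profile.
# Rule encoding and attribute names are illustrative assumptions only.

OWNER_POLICY = {
    # resource: list of attribute conditions, any one of which grants access
    "photos":   [{"relationship": "friend"}],
    "phone":    [{"relationship": "friend", "group": "family"}],
    "timeline": [{"relationship": "friend"}, {"relationship": "friend_of_friend"}],
}

def is_allowed(resource, requester_attrs):
    """Grant access if the requester's attributes satisfy at least one
    condition of the owner's policy for that resource; deny by default."""
    for condition in OWNER_POLICY.get(resource, []):
        if all(requester_attrs.get(k) == v for k, v in condition.items()):
            return True
    return False

alice = {"relationship": "friend", "group": "family"}
carol = {"relationship": "friend_of_friend"}

print(is_allowed("phone", alice))     # True: friend AND family
print(is_allowed("phone", carol))     # False: denied by default
print(is_allowed("timeline", carol))  # True: friend-of-friend suffices
```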

A Study of Split Learning Model to Protect Privacy (프라이버시 침해에 대응하는 분할 학습 모델 연구)

  • Ryu, Jihyeon; Won, Dongho; Lee, Youngsook
    • Convergence Security Journal / v.21 no.3 / pp.49-56 / 2021
  • Recently, artificial intelligence has come to be regarded as an essential technology in our society. In particular, the invasion of privacy by artificial intelligence has become a serious problem in modern society. Split learning, proposed at MIT in 2019 for privacy protection, is a type of federated learning technique that does not share any raw data. In this study, we studied a safe and accurate split learning model that applies differential privacy to manage data safely. In addition, we trained split learning models with 15 different types of differential privacy applied on SVHN and GTSRB, and checked whether training remains stable. By conducting a training data extraction attack, a differential privacy budget that prevents the attack is quantitatively derived through MSE.
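
In split learning, only the intermediate ("smashed") activations cross the cut layer, and differential privacy can be enforced by clipping and noising them before transmission. A minimal sketch of that combination follows; the network sizes, clipping bound, and noise scale are chosen arbitrarily for illustration and are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side and server-side halves of a tiny linear model. Sizes, clipping
# bound, and noise scale below are illustrative assumptions only.
W_client = rng.normal(size=(32, 8))   # client layers: raw input -> cut layer
W_server = rng.normal(size=(8, 10))   # server layers: cut layer -> logits

CLIP = 1.0     # L2 bound enforced on each smashed activation
SIGMA = 0.5    # Gaussian noise scale; larger sigma spends less privacy budget

def client_forward(x):
    """Client keeps the raw sample local and only sends a clipped, noised
    activation across the cut layer (Gaussian-mechanism-style protection)."""
    h = np.tanh(x @ W_client)
    norm = np.linalg.norm(h)
    h = h / max(1.0, norm / CLIP)                          # clip to L2 <= CLIP
    h = h + rng.normal(scale=SIGMA * CLIP, size=h.shape)   # add calibrated noise
    return h

def server_forward(h):
    """Server never sees the raw input, only the protected activation."""
    logits = h @ W_server
    return int(np.argmax(logits))

x_raw = rng.normal(size=32)        # e.g., a flattened traffic-sign patch
smashed = client_forward(x_raw)    # the only data that leaves the client
print(server_forward(smashed))     # prediction computed server-side
```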

Privacy-Preservation Using Group Signature for Incentive Mechanisms in Mobile Crowd Sensing

  • Kim, Mihui; Park, Younghee; Dighe, Pankaj Balasaheb
    • Journal of Information Processing Systems / v.15 no.5 / pp.1036-1054 / 2019
  • Recently, concomitant with a surge in the number of Internet of Things (IoT) devices with various sensors, mobile crowdsensing (MCS) has provided a new business model for IoT. For example, a person can share road traffic pictures taken with their smartphone via a cloud computing system, and the MCS data can provide benefits to other consumers. In this service model, to encourage people to actively engage in sensing activities and to voluntarily share their sensing data, providing appropriate incentives is very important. However, the sensing data from personal devices can be privacy sensitive, and thus the privacy issue can suppress data sharing. Therefore, the development of an appropriate privacy protection system is essential for successful MCS. In this study, we address the problem arising from the conflicting objectives of privacy preservation and incentive payment. We propose a privacy-preserving mechanism that protects the identity and location privacy of sensing users through on-demand incentive payment and group signature methods. Subsequently, we apply the proposed mechanism to one example of MCS, an intelligent parking system, and demonstrate the feasibility and efficiency of our mechanism through emulation.

The improvement of Mangat Strategy in view of the protection of privacy

  • Ki-Hak Hong; Gi-Sung Lee
    • Communications for Statistical Applications and Methods / v.3 no.2 / pp.169-174 / 1996
  • In the present paper an attempt has been made to improve the Mangat (1994) strategy in view of the protection of privacy, the variance, and the range of π. Conditions are obtained under which the proposed model is more efficient than those of Warner (1965) and Mangat and Singh (1990).
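
For context, the Warner (1965) randomized response model used here as a baseline can be simulated in a few lines; the device probability p, sample size, and true proportion below are arbitrary illustration values, and the Mangat (1994) and Mangat-Singh (1990) strategies modify the response rule to reduce the variance while still protecting respondent privacy.

```python
import numpy as np

rng = np.random.default_rng(42)

def warner_survey(pi_true, p, n):
    """Simulate Warner's (1965) randomized response: with probability p the
    device presents the sensitive statement, otherwise its complement, and
    the respondent answers truthfully. Returns the unbiased estimate of pi."""
    member = rng.random(n) < pi_true          # true sensitive-group membership
    sensitive_stmt = rng.random(n) < p        # which statement the device shows
    yes = np.where(sensitive_stmt, member, ~member)
    lam_hat = yes.mean()                      # observed proportion of "yes"
    return (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)   # requires p != 0.5

pi_true, p, n = 0.30, 0.7, 10_000
print(round(warner_survey(pi_true, p, n), 3))   # close to 0.30

# Warner's variance, Var = pi(1-pi)/n + p(1-p) / (n (2p-1)^2), blows up as p
# approaches 1/2, which is the inefficiency that later strategies such as
# Mangat's aim to reduce.
```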
