• Search Keyword: Privacy risk

Search Results: 274

A case study of Privacy Impact Assessment for C-Shopping Mall (C쇼핑몰 개인정보 영향평가 사례연구)

  • Jeon, Dong-Jin; Jeong, Jin-Hong
    • Journal of Korea Society of Industrial Information Systems, v.17 no.6, pp.73-82, 2012
  • This paper reviews a Privacy Impact Assessment conducted to prevent and diagnose potential threats, focusing on the C-Shopping Mall case. The measured privacy protection levels for C-Shopping Mall are 29.2 for the organization itself, 68.8 for the system, 25.5 for the privacy life cycle, and 60.0 for CCTV. The lowest scores are 16.7 for the organization's management and, within the privacy life cycle, 12.5 for storage and retention, 11.5 for use and provision, and 16.7 for destruction. The risk analysis shows that the highest risk levels are 13.3 for storage and retention and 13.0 for destruction. Taken together, the areas where low protection and high risk overlap are storage/retention and destruction.

The Effect of Perceived Risk, Hedonic Value, and Self-Construal on Attitude toward Mobile SNS

  • Kim, Ji Yoon; Kim, Sang Yong
    • Asia Marketing Journal, v.16 no.1, pp.149-168, 2014
  • This study investigates the effect of perceived risk on attitude toward mobile Social Network Services (SNSs). First, we treat perceived risk of SNSs as a multidimensional concept and examine the relationships between attitude and risk dimensions such as social risk, performance risk, and privacy risk in SNS environments. The results indicate that social, performance, and privacy risk each have a negative effect on attitude. In addition, the moderating effects of individual characteristic variables such as hedonic value and self-construal are confirmed as factors that alleviate the negative impact of perceived risk. The findings show that customers who perceive SNSs to be risky are more likely to have a negative attitude toward SNSs. However, this negative impact is weaker for customers with high hedonic value and, similarly, for customers with an interdependent self-construal. This paper presents effective segmentation variables, such as consumer motivation (hedonic value) and a psychological variable (self-construal), that mitigate customers' risk perception. It therefore provides practical guidelines for marketing managers on whom to target and which strategies to implement based on these segmentation variables to approach consumers more efficiently.


Standard Implementation for Privacy Framework and Privacy Reference Architecture for Protecting Personally Identifiable Information

  • Shin, Yong-Nyuo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.11 no.3, pp.197-203, 2011
  • Personally Identifiable Information (PII) is information that identifies, or can be used to identify, contact, or locate the person to whom it pertains, or that is or might be linked to a natural person directly or indirectly. To recognize such data as it is processed within information and communication technologies, it must be determined at which stage the information identifies, or can be associated with, an individual. Accordingly, there has been ongoing research on privacy protection mechanisms for PII, which has become one of the hot issues in international standardization as the privacy framework and privacy reference architecture. Data processing flow models should be developed as an integral component of privacy risk assessments; such diagrams are also the basis for categorizing PII. The data processing flow can show areas where the PII has a certain level of sensitivity or importance and, as a consequence, requires stronger safeguarding measures. This paper proposes a standard format satisfying the ISO/IEC 29100 "Privacy Framework" and shows an implementation example of a privacy reference architecture implementing privacy controls for the processing of PII in information and communication technology.

Differences in Privacy-Protective Behaviors by Internet Users in Korea and China (인터넷 사용자의 개인정보보호 행동의 차이에 관한 연구)

  • Zhang, Chao; Wan, Lili; Min, Dai-Hwan; Rim, Seong-Taek
    • Journal of Information Technology Services, v.11 no.1, pp.93-107, 2012
  • Privacy-protective behavior can be classified into passive and active behavior. Passive behavior includes refusal, misrepresentation, and removal, while word-of-mouth, complaint, and seeking help belong to active behavior. Internet users in different countries may take different types of privacy-protective behavior because of cultural and social differences. This study analyzes the differences in Internet users' privacy-protective behavior between Korea and China. Korean Internet users use refusal, complaint, and seeking help to protect their privacy, while misrepresentation is not an option for them. Chinese Internet users use refusal, complaint, seeking help, and misrepresentation. In Korea, passive behavior (refusal) is chosen more often than active behavior (complaint and seeking help), while in China active behavior (complaint and seeking help) is preferred to passive behavior (refusal and misrepresentation). These differences may provide some implications for online companies that want to avoid business risk arising from privacy concerns and to take appropriate steps to deal with privacy-protective behavior by Internet users.

A Study on Privacy Attitude and Protection Intent of MyData Users: The Effect of Privacy cynicism (마이데이터 이용자의 프라이버시 태도와 보호의도에 관한 연구: 프라이버시 냉소주의의 영향)

  • Jung, Hae-Jin; Lee, Jin-Hyuk
    • Informatization Policy, v.29 no.2, pp.37-65, 2022
  • This article analyzes, through a structural equation model, how MyData users' privacy attitudes and the four dimensions of privacy cynicism (distrust, uncertainty, powerlessness, and resignation) relate to privacy protection intentions. First, MyData users' internet skills had a statistically significant negative effect on 'resignation' among the privacy cynicism dimensions. Second, privacy risks had a positive effect on 'distrust' in MyData operators, 'uncertainty' about privacy control, and 'powerlessness'. Third, privacy concerns had a positive effect on the 'distrust' and 'uncertainty' dimensions but a negative effect on 'resignation'. Fourth, only 'resignation' among the privacy cynicism dimensions showed a negative effect on privacy protection intention. Overall, internet skills emerge as a variable that can alleviate privacy cynicism, while privacy risks and privacy concerns reinforce it; within privacy cynicism, 'resignation' offsets privacy concerns and lowers privacy protection intentions.

Differential Privacy in Practice

  • Nguyen, Hiep H.; Kim, Jong; Kim, Yoonho
    • Journal of Computing Science and Engineering, v.7 no.3, pp.177-186, 2013
  • We briefly review the problem of statistical disclosure control under the differential privacy model, which provides a formal and ad omnia privacy guarantee separating the utility of the database from the risk due to individual participation. The model has borne fruitful results over the past ten years, both in theoretical connections to other fields and in practical applications to real-life datasets. The promises of differential privacy help to relieve concerns about privacy loss that hinder the release of community-valuable data. This paper covers the main ideas behind differential privacy, its interactive versus non-interactive settings, perturbation mechanisms, and typical applications found in recent research.
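
As an illustration of the perturbation mechanisms surveyed above, the Laplace mechanism for a counting query can be sketched as follows. This is a minimal sketch of the general technique, not code from the paper; the function names (`laplace_noise`, `dp_count`) are our own. A count has sensitivity 1, so adding noise drawn from Laplace(0, 1/ε) yields ε-differential privacy.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: release a noisy count of records with value >= 30.
rng = random.Random(7)
ages = [23, 35, 41, 28, 52, 33]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=1.0, rng=rng)
```

A smaller ε gives stronger privacy but more noise; the interactive setting the paper discusses answers such queries one at a time, spending privacy budget per query.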

A Cache Privacy Protection Mechanism based on Dynamic Address Mapping in Named Data Networking

  • Zhu, Yi; Kang, Haohao; Huang, Ruhui
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.12, pp.6123-6138, 2018
  • Named Data Networking (NDN) is a new network architecture designed for the next-generation Internet. Router-side content caching is one of its key features, reducing redundant transmission, accelerating content distribution, and alleviating congestion. However, it also introduces several security problems. One important risk is cache privacy leakage: by measuring content retrieval time, an adversary can infer neighboring users' interest in private content. Focusing on this problem, we propose a cache privacy protection mechanism (named CPPM-DAM) that uses a Bloom filter to distinguish legitimate users from adversaries. An optimization of the storage cost is further provided to make the mechanism more practical. Simulation results in ndnSIM show that CPPM-DAM effectively protects cache privacy.
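
The Bloom-filter membership test at the core of such a mechanism can be sketched as below. This is a generic Bloom filter, not the CPPM-DAM implementation; the class and key names are illustrative. The structure answers "might this user be registered?" with possible false positives but no false negatives, at a fraction of the memory of an exact set.

```python
import hashlib

class BloomFilter:
    """Space-efficient set membership: false positives possible, false negatives never."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive num_hashes bit positions by salting a single hash function.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = True

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[i] for i in self._indexes(item))

# A router could register legitimate users' keys and screen incoming requests:
legit = BloomFilter()
legit.add("user-key-A")
```

Sizing the filter (bits per element, number of hashes) trades storage against the false-positive rate, which is the storage-cost optimization dimension the paper addresses.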

Privacy-Constrained Relational Data Perturbation: An Empirical Evaluation

  • Deokyeon Jang; Minsoo Kim; Yon Dohn Chung
    • Journal of Information Processing Systems, v.20 no.4, pp.524-534, 2024
  • The release of relational data containing personal sensitive information poses a significant risk of privacy breaches. To preserve privacy while publishing such data, techniques that protect sensitive information must be implemented. One popular technique is data perturbation, widely used for privacy-preserving data release because of its simplicity and efficiency; however, it has limitations that prevent its practical application, making alternative solutions necessary. In this study, we propose a novel approach to preserving privacy in the release of relational data containing personal sensitive information. The approach introduces an intuitive, syntactic privacy criterion for data perturbation and two perturbation methods for relational data release. Through experiments with synthetic and real data, we evaluate the performance of our methods.
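
The basic idea of data perturbation can be sketched for one numeric column of a relation. This is our own minimal illustration of additive-noise perturbation under assumed names (`perturb_column`), not the paper's proposed methods: each value is released with independent Gaussian noise, masking individual values while approximately preserving aggregates such as the mean.

```python
import random

def perturb_column(values, noise_std, rng):
    # Release value + N(0, noise_std^2) in place of each raw value.
    return [v + rng.gauss(0.0, noise_std) for v in values]

# Example: perturb a salary column before release.
rng = random.Random(42)
salaries = [52_000.0, 61_500.0, 48_250.0, 75_000.0]
released = perturb_column(salaries, noise_std=1_000.0, rng=rng)
```

A larger `noise_std` gives stronger masking of individual values at the cost of less accurate aggregate statistics, which is exactly the privacy-utility trade-off a syntactic privacy criterion constrains.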

A Study on the User Experience according to the Existence of Explanation Facilities and Individuals Privacy Concern Level (대화형 에이전트의 설명 기능과 프라이버시 염려 수준에 따른 사용자 경험 차이에 관한 연구)

  • Kang, Chan-Young; Choi, Kee-Eun; Kang, Hyun-Min
    • The Journal of the Korea Contents Association, v.20 no.2, pp.203-214, 2020
  • Nowadays, smart speakers are increasingly personalized and serve as recommendation agents for users. The aim of this study is to find out the effects of explanation facilities on transparency, perceived trust, user satisfaction, behavioral intention to reuse, privacy risk, and recommendation quality in the context of interaction with a smart speaker's conversational agent. We also measure users' level of privacy concern to see whether it affects these assessments. The results are as follows. First, all measured variables are significantly related to explanation facilities. Second, perceived trust and privacy risk are significantly related to an individual's level of privacy concern. This study shows that explanation facilities can be applied in the smart speaker context and suggests the possibility of cognitive dissonance depending on the level of privacy concern.

A Framework for measuring query privacy in Location-based Service

  • Zhang, Xuejun; Gui, Xiaolin; Tian, Feng
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.5, pp.1717-1732, 2015
  • The widespread use of location-based services (LBSs), which allows untrusted service providers to collect large numbers of user request records, leads to serious privacy concerns. In response, a number of LBS privacy protection mechanisms (LPPMs) have recently been proposed. However, evaluations of these LPPMs usually disregard the background knowledge an adversary may possess about users' contextual information, which runs the risk of wrongly assessing users' query privacy. In this paper, we address these issues by proposing a generic formal quantification framework that comprehensively considers the various elements influencing users' query privacy and explicitly states the knowledge an adversary might have. Moreover, we propose a way to model the adversary's attack on query privacy, which allows us to show the insufficiency of existing query privacy metrics, e.g., k-anonymity. We therefore propose two new metrics: entropy anonymity and mutual information anonymity. Lastly, we run a set of experiments on datasets generated by the network-based generator of moving objects proposed by Thomas Brinkhoff. The results show the effectiveness and efficiency of our framework in measuring LPPMs.
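
The gap between k-anonymity and an entropy-based metric can be illustrated numerically. This is a sketch of the general idea under our own simplified definitions (the paper's formal definitions may differ): given the adversary's posterior distribution over candidate query issuers, k-anonymity only counts the candidates, while an entropy metric also reflects how evenly suspicion is spread.

```python
import math

def k_anonymity(posterior):
    # Number of candidates the adversary cannot rule out.
    return sum(1 for p in posterior if p > 0)

def entropy_anonymity(posterior):
    # Shannon entropy (in bits) of the adversary's posterior over candidates.
    return -sum(p * math.log2(p) for p in posterior if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # suspicion spread evenly over 4 users
skewed = [0.85, 0.05, 0.05, 0.05]    # one user is the likely query issuer
# Both posteriors have k = 4, but the skewed one leaves far less uncertainty.
```

Here k-anonymity rates both cases equally (k = 4), while entropy anonymity drops from 2 bits for the uniform posterior to under 1 bit for the skewed one, capturing the weaker protection that k-anonymity misses.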