
TCA: A Trusted Collaborative Anonymity Construction Scheme for Location Privacy Protection in VANETs

  • Zhang, Wenbo (Xi'an University of Posts & Telecommunications) ;
  • Chen, Lin (Xi'an University of Posts & Telecommunications) ;
  • Su, Hengtao (Xi'an University of Posts & Telecommunications) ;
  • Wang, Yin (Xi'an University of Posts & Telecommunications) ;
  • Feng, Jingyu (Xi'an University of Posts & Telecommunications)
  • Received : 2022.01.21
  • Accepted : 2022.10.05
  • Published : 2022.10.31

Abstract

As location-based services (LBS) are widely used in vehicular ad-hoc networks (VANETs), location privacy has become a paramount concern. Spatial cloaking is a popular location privacy protection approach, which uses a cloaking area containing k-1 collaborative vehicles (CVs) to replace the real location of the requesting vehicle (RV). However, k-anonymity assumes that all CVs are honest, which gives dishonest CVs the opportunity to submit false location information during cloaking area construction. Attackers could exploit this false location information to speculate on the real location of the RV. To suppress this threat, an edge-assisted Trusted Collaborative Anonymity construction scheme called TCA, built on a trust mechanism, is proposed. Following the design idea of trusted observations within a variable radius r, the trust value is not only utilized to select honest CVs to construct a cloaking area by restricting r's search range but also used to verify false location information from dishonest CVs. To obtain the variable radius r for searching CVs, a multiple linear regression model is established based on the privacy level and service quality of the RV. With these approaches, the trust relationship among vehicles can be predicted, and the most suitable CVs can be selected according to the RV's preference to construct the trusted cloaking area. Moreover, to handle the massive trust value calculation brought by large quantities of LBS requests, edge computing is employed during the trust evaluation. The performance analysis indicates that the malicious response of TCA is only 22% of that of a collaborative anonymity construction scheme without a trust mechanism, and its location privacy leakage is about 32% of that of the traditional Enhanced Location Privacy Preserving (ELPP) scheme.

Keywords

1. Introduction

As a significant component of Intelligent Transportation Systems (ITS), vehicular ad-hoc networks (VANETs) can acquire traffic circumstances and vehicle conditions to reduce traffic accidents and improve traffic efficiency [1-3]. The system model of VANETs is shown in Fig. 1. By deploying Roadside Units (RSUs) along the road and installing an On-board Unit (OBU) on vehicles, the vehicles in VANETs can share driving parameters and road information with other vehicles and nearby RSUs. Due to the high-performance, scalable, and reliable data centers of the cloud, these unprecedented amounts of data can be managed and stored through the cloud server in vehicular networks [4].


Fig. 1. The system model of VANETs.

Recently, the Location-based Services (LBS) of vehicular networks have developed rapidly with the progress of GPS positioning technology [5]. By sending location and service request messages, autonomous vehicles can enjoy various LBSs provided by the Location Service Provider (LSP). For instance, autonomous vehicles can get relevant location information of parking lots, gas stations, shopping centers, hotels, and restaurants within a certain range. However, the convenience of LBS also brings the risk of privacy disclosure. For example, attackers can collect the location information of a specific vehicle to infer its sensitive information, such as residence, workplace, and living habits [6], and may even pose risks to property and personal safety by compromising the automated driving system. It is feasible to maintain an acceptable level of service degradation for autonomous vehicular networks in the presence of location privacy threats [7].

To address these problems, a great deal of work has been done on location privacy protection. As one of the most popular approaches for protecting vehicles' location privacy, spatial cloaking employs a trusted third party (named the anonymizer) [8-10] which expands the real location of the Request Vehicle (RV) into a cloaking area. When requesting LBS, the RV's real location is replaced by a cloaking area that contains k-1 other Collaborative Vehicles (CVs). With a cloaking area, it is difficult for attackers to distinguish the RV from the other k-1 CVs. Furthermore, compared with other solutions, spatial cloaking is able to provide precise query results without requiring complicated cryptographic technologies.

However, most spatial cloaking schemes assume that CVs always tell the truth and fail to consider location cheating behaviors during the construction of the cloaking area. The high mobility and volatility of VANETs mean that adjacent vehicles are usually unknown to each other [11]. This may offer opportunities for dishonest CVs to provide false location information, so that the cloaking area cannot satisfy the privacy protection requirement of the RV. More seriously, attackers could exploit dishonest CVs' false location information to speculate on the real location of the RV. Hence, how to efficiently defend against location cheating behaviors launched by dishonest CVs has become a challenging issue in achieving better spatial cloaking.

In this paper, an edge-assisted Trusted Collaborative Anonymity construction scheme called TCA, built on a trust mechanism, is proposed for protecting the location privacy of vehicles against location cheating behaviors launched by dishonest CVs. The main contributions are as follows.

• Introduce the design idea of trusted observations within a variable radius r to verify the location information of CVs. When a CV sends location information during the construction of the cloaking area, the trusted third-party anonymizer can broadcast verification requirements to its adjacent vehicles, called Observer Vehicles (OVs), asking for trusted observations that check whether the distance between the CV and the OVs is within a radius r.

• Design an edge-assisted trust value prediction method to support trusted observations. The Collaborative Trust (CT) value and Observer Trust (OT) value are evaluated respectively. Both CT value and OT value can be incorporated into a Synthetical Trust (ST) value. With the three types of trust value, three trust judgment strategies can be adopted to support trusted observations. To deal with the massive trust value calculation demand brought by large quantities of LBS requests, edge computing is employed during the trust evaluation.

• Establish a multiple linear regression model to calculate the variable radius r based on the privacy level and service quality of the RV. The radius r can be dynamically adjusted in light of vehicles' regional activity and is used to verify CVs' location information on the basis of trusted observations. By restricting r's search range, the trust value can also be utilized to predict the trust relationship between the RV and CVs and to construct the trusted cloaking area.

The organization of this paper is as follows. In Section 2, preliminaries related to privacy protection, trust mechanisms, and edge computing are described. We propose the TCA scheme in Section 3 and analyze its security in Section 4. In Section 5, the simulation analysis of the TCA scheme is performed. Finally, we conclude the paper in Section 6.

2. Related Works

2.1 Spatial Cloaking Location Privacy Protection

Since spatial cloaking was first introduced by Gruteser and Grunwald [12], many approaches have been proposed to protect location privacy. In [13], by converting trajectories into location points, a virtual trajectory generation algorithm was proposed to protect vehicles' privacy. In [14], the works related to identity and location privacy threatening factors, problems, and solutions were studied; the research indicated that most solutions use pseudonym identities or address-changing schemes to protect identity privacy. In [15], a pseudonym changing strategy based on the Vehicular Location Privacy Zone (VLPZ) was proposed to address issues with pseudonym changing strategies, in which RSUs were considered as typical places to implement a VLPZ. In [16], in order to overcome the weakened effectiveness of location obfuscation when users move and recurrently request LBS, a k-anonymity privacy protection scheme called DUMMY-Q based on query perturbation was proposed; in DUMMY-Q, query privacy can be protected even if the user's identity is exposed. In [17], an Attribute-based k-Anonymous collaborative scheme (ABAKA) was proposed; to obtain the k-1 collaborative users, ABAKA provided p-sensitivity as well as k-anonymity. In [18], based on the user's location privacy preference model, a Dynamically Adjustable k-anonymous (DAK) algorithm was designed to determine the privacy protection strength in different contexts; DAK selects one anonymous group of adjacent vehicles to construct the cloaking area and obtain the requesting vehicle's dummy location. In [19], by employing a function generator to produce the transforming parameters, an Enhanced Location Privacy-Preserving (ELPP) scheme was proposed; without the parameters, the anonymizer knows nothing about the user's real location. In [20], a distributed Social-aware Location-privacy Protection (SLP) protocol with three strategies was developed to conceal the original sender in insecure vehicular networks without the help of a trusted third party. To stimulate users to participate in the cloaking area construction, [21] proposed a k-anonymity location privacy scheme based on an auction mechanism. To achieve trajectory k-anonymity privacy protection, [22] proposed a Location Recombination Mechanism (LRM) that can generate k-1 fake trajectories similar to base trajectories in terms of geographical and probabilistic features.

However, these existing works do not introduce a trust mechanism and thus fail to consider location cheating behaviors during cloaking area construction. If they are applied directly to VANETs, attackers may infer the true location of the RV.

2.2 Trust Mechanism

A trust mechanism can play an important role in VANETs: it enables a vehicle to judge whether a received message is trustworthy and provides the basis for punishing or rewarding specific vehicles.

Considerable research has been done on trust evaluation. In [23], based on the verification of vehicle behavior on the network, the vehicle periodically contacted the certification and traffic management agency to update its trust value. In [24], a reliable reputation system was designed: if an agent vehicle completes a task honestly, a virtual check is awarded, and the check can be used to encourage cooperative vehicles and punish malicious vehicles. In [25], a reputation model was proposed to verify the trustworthiness of messages sent in VANETs; in this model, each vehicle asks the vehicles within its communication range to express opinions about the sending vehicle's trustworthiness. In [26], a novel decentralized trust management scheme for vehicles was proposed, in which all RSUs can participate in the trust value updating process, and the trust information of all vehicles can be provided to all RSUs.

In trust mechanisms, the beta function is one of the most popular designs. The number of trusted and untrusted behaviors a vehicle has performed is counted first. Then, the trust value is modeled by the beta function Beta(α, β) [27].

\(\operatorname{Beta}(\alpha, \beta)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \theta^{\alpha-1}(1-\theta)^{\beta-1}\)       (1)

where θ is the probability of a vehicle's behavior, 0 ≤ θ ≤ 1, α > 0, and β > 0.

Take the i-th vehicle (Vi) as an example, and let trui and unti denote the numbers of trusted and untrusted behaviors conducted by Vi, respectively. Setting α = trui + 1 and β = unti + 1, the trust value of Vi is calculated as:

Ti = Beta(trui + 1, unti + 1)       (2)

The expectation of the beta distribution is E[Beta(α, β)] = α / (α+β). To make the maximum trust value of Vi reach 1, Ti is further described as follows:

\(T_{i}=\frac{1+tru_{i}}{1+tru_{i}+unt_{i}}\)       (3)

where 0 ≤ Ti ≤ 1. When trui ≥ 1 and unti = 0, Ti is always equal to 1. For Vi, the more often it sends trusted messages, the higher the trust value will be.
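To make the computation concrete, the following minimal Python sketch evaluates the beta-based trust value of (3) from the counts of trusted and untrusted behaviors; the function name and the sample counts are illustrative and not part of the original scheme.

```python
def beta_trust(tru: int, unt: int) -> float:
    """Beta-based trust value from Eq. (3): T = (1 + tru) / (1 + tru + unt)."""
    return (1 + tru) / (1 + tru + unt)

# A vehicle with 9 trusted and 1 untrusted reported messages:
print(beta_trust(9, 1))   # 0.909...
# A vehicle with only trusted behaviors keeps the maximum trust value of 1:
print(beta_trust(3, 0))   # 1.0
```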

3. Our Proposed TCA Scheme

Based on the trust mechanism, an edge-assisted trusted collaborative anonymity construction scheme (TCA) is proposed in this paper to protect the location privacy of vehicles. Specifically, the TCA scheme consists of three modules: edge-assisted trust value evaluation, trusted observation within variable radius r, and trusted cloaking area construction. In this section, we first describe the design architecture of the TCA scheme and then introduce the three modules. Finally, we give the execution strategies of TCA.

3.1 Architecture Overview

The architecture of the TCA scheme is presented in this section, including the network architecture and the system architecture. As shown in Fig. 2, the network architecture defines the components and functions contained in the TCA scheme and consists of three layers.


Fig. 2. Network architecture of the TCA scheme.

• Cloud Layer — Due to its powerful storage capacity and wide coverage, the cloud layer is responsible for managing trust-related data, such as historical location information, collaborator behaviors, observer behaviors, and so on.

• Edge Layer — With rich communication, calculation, and storage capabilities, RSUs are good candidates for “edge nodes”. They can be deployed as the intermediate nodes between the cloud and vehicles to assist the data aggregation and trust value evaluation.

• User Layer — This layer consists of RVs, CVs, and OVs. A vehicle can request LBS as an RV, help other vehicles as a CV during cloaking area construction, or play the role of an OV to verify the location information from CVs.

Specifically, the TCA scheme consists of three successive modules as shown in Fig. 3:


Fig. 3. System architecture of the TCA scheme.

1) Edge-assisted trust value evaluation. In this module, the trust values of vehicles are predicted rapidly with the assistance of RSUs. The trust value is predicted according to the historical behavior of vehicles. Vehicles that always submit true locations are rewarded with higher trust values; vehicles that repeatedly submit false locations receive lower trust values.

2) Trusted observation within variable radius r. The radius r is dynamically adjusted through the established multiple linear regression model based on the privacy level and service quality of the RV.

3) Trusted cloaking area construction. Combining the trust values and the radius r, the appropriate CVs for the RV are selected in this module, and the resulting cloaking area replaces the RV's real location when LBS requests are sent to the LSP.

3.2 Edge-assisted Trust Value Evaluation

In the process of cloaking area construction, trust value evaluation occurs in the following two cases.

• Trust value evaluation is employed to differentiate dishonest CVs from honest CVs. The trust value is evaluated according to the historical behavior of vehicles. Dishonest CVs can be predicted using the trust value: compared with honest CVs, dishonest CVs usually report false locations and therefore receive lower trust values.

• After receiving the location information from CVs, the anonymizer should broadcast verification requirements to the adjacent vehicles called OVs. Then, the OVs check whether the distance between a CV and OVs is within variable radius r. To ensure the verification credibility and improve the verification efficiency, the trust value evaluation related to OVs should also be involved.

However, with the growing number of LBS requests, a massive trust evaluation may be required. Hence, edge computing is potentially a good approach to assist the evaluation of trust values, in which RSUs can be deployed as the edge nodes.

The detailed trust value evaluation process is as follows. Take the i-th vehicle (Vi) as an example. The two-tuple (fi, ti) represents the numbers of false and true locations reported by Vi. The trust value for the role of CV is evaluated as:

\(CT_{i}=\begin{cases}\dfrac{1+t_{i}}{f_{i}+t_{i}+1}-\delta \cdot \dfrac{f_{i}}{t_{i}+1}, & f_{i}<t_{i}, \\ 0, & f_{i} \geq t_{i}.\end{cases}\)       (4)

where 0 < δ < 1 is the threshold of trust value. If fi = 0, CTi equals 1, indicating that Vi behaves honestly. If fi > 0, CTi decreases sharply as fi increases.

Likewise, the two-tuple (wi, ci) denotes the numbers of wrong and correct observations submitted by Vi. The trust value for the role of OV is evaluated as:

\(OT_{i}=\begin{cases}\dfrac{1+c_{i}}{w_{i}+c_{i}+1}-\delta \cdot \dfrac{w_{i}}{c_{i}+1}, & w_{i}<c_{i}, \\ 0, & w_{i} \geq c_{i}.\end{cases}\)       (5)

If OTi ≥ δ, Vi’s observation will be adopted by the anonymizer when Vi plays the role of OV. Synthesizing CTi and OTi, the comprehensive trust value is evaluated as:

\(ST_{i}=\begin{cases}\dfrac{1+t_{i}+c_{i}}{f_{i}+t_{i}+w_{i}+c_{i}+1}-\delta \cdot \dfrac{f_{i}+w_{i}}{t_{i}+c_{i}+1}, & f_{i}+w_{i}<t_{i}+c_{i}, \\ 0, & f_{i}+w_{i} \geq t_{i}+c_{i}.\end{cases}\)       (6)

To help the anonymizer make a rapid verification decision, three trust judgment strategies can be adopted.

• J1: Vi's location information is accepted directly when CTi = 1.

• J2: Vi's location information is accepted by the anonymizer, and the process of trusted observations within radius r is invoked, when CTi ≥ δ.

• J3: Vi's location information is rejected when CTi < δ or STi < δ.

According to the predicted trust values, trusted CVs can be selected to participate in the LBS request, as sketched below.
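The following Python sketch illustrates how (4)-(6) and the judgment strategies J1-J3 could be combined; the threshold value, function names, and sample counts are assumptions for demonstration only.

```python
DELTA = 0.5  # trust threshold δ (illustrative value)

def trust(pos: int, neg: int, delta: float = DELTA) -> float:
    """Generic form of Eqs. (4)-(6): reward correct reports, penalize wrong ones."""
    if neg >= pos:
        return 0.0
    return (1 + pos) / (neg + pos + 1) - delta * neg / (pos + 1)

def judge(f: int, t: int, w: int, c: int, delta: float = DELTA) -> str:
    """Apply judgment strategies J1-J3 to a CV with counts (f, t, w, c)."""
    ct = trust(t, f, delta)          # collaborative trust CT, Eq. (4)
    ot = trust(c, w, delta)          # observer trust OT, Eq. (5); weights OVs in Eq. (12)
    st = trust(t + c, f + w, delta)  # synthetical trust ST, Eq. (6)
    if ct == 1.0:
        return "J1: accept location directly"
    if ct >= delta and st >= delta:
        return "J2: accept and invoke trusted observation within radius r"
    return "J3: reject location information"

print(judge(f=0, t=5, w=0, c=3))   # J1
print(judge(f=1, t=8, w=0, c=4))   # J2
print(judge(f=3, t=4, w=2, c=2))   # J3
```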

3.3 Trusted Observation within Variable Radius r

In this paper, the variable radius r is used to meet the personalized requirements of the RV during cloaking area construction. The privacy protection level (lp) and the service quality level (lq) should be taken into account. Generally, the larger r is, the more dispersed the vehicles in the constructed cloaking area will be and the higher lp will be, but the lq of the RV cannot be guaranteed. Conversely, the smaller r is, the lower lp will be, but the RV gets a higher lq.

Since r changes along with lp and lq, there is a mapping relationship among them. We can build a multiple linear regression function to predict r from lp and lq, calculated as follows:

\(r=f\left(l_{p}, l_{q}\right)=\alpha_{0}+\alpha_{1} l_{p}+\frac{\alpha_{2}}{l_{q}}+\varepsilon\)       (7)

where α0 is the regression constant, α1 and α2 are the regression coefficients, and ε is a random error term.

With the least square method, the influence of lp and lq on r is calculated as follows:

\(y=\alpha_{0}+\alpha_{1} l_{p}+\frac{\alpha_{2}}{l_{q}}+\varepsilon-r\)       (8)

The estimates \(\hat{\alpha}_{0}, \hat{\alpha}_{1}, \hat{\alpha}_{2}\) of α0, α1, α2 are obtained when (9) reaches its minimum.

\(G=\sum_{o=1}^{2}\left(y_{o}-\alpha_{0}-\alpha_{1} l_{po}-\frac{\alpha_{2}}{l_{qo}}\right)^{2}\)       (9)

By taking the partial derivatives of the above equation and setting them equal to 0, the normal equations are obtained as follows:

\(\begin{cases}\dfrac{\partial G}{\partial \alpha_{0}}=-2 \sum_{o=1}^{2}\left(y_{o}-\alpha_{0}-\alpha_{1} l_{po}-\dfrac{\alpha_{2}}{l_{qo}}\right)=0, \\ \dfrac{\partial G}{\partial \alpha_{1}}=-2 \sum_{o=1}^{2}\left(y_{o}-\alpha_{0}-\alpha_{1} l_{po}-\dfrac{\alpha_{2}}{l_{qo}}\right) l_{po}=0, \\ \dfrac{\partial G}{\partial \alpha_{2}}=-2 \sum_{o=1}^{2}\left(y_{o}-\alpha_{0}-\alpha_{1} l_{po}-\dfrac{\alpha_{2}}{l_{qo}}\right) \dfrac{1}{l_{qo}}=0 .\end{cases}\)       (10)

The above equations can be written in matrix form, where

\(\boldsymbol{L}=\begin{bmatrix}1 & l_{p1} & \frac{1}{l_{q1}} \\ 1 & l_{p2} & \frac{1}{l_{q2}}\end{bmatrix}, \quad \boldsymbol{\alpha}=\begin{bmatrix}\alpha_{0} \\ \alpha_{1} \\ \alpha_{2}\end{bmatrix}, \quad \boldsymbol{Y}=\begin{bmatrix}y_{1} \\ y_{2}\end{bmatrix}\)       (11)

When \((\boldsymbol{L}^{T}\boldsymbol{L})^{-1}\) exists, \(\boldsymbol{\alpha}=(\boldsymbol{L}^{T}\boldsymbol{L})^{-1}\boldsymbol{L}^{T}\boldsymbol{Y}\) can be obtained. Finally, the RV can define the error term in (7) according to practical experience and obtain the optimal range of r.
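As a minimal sketch of this least-squares fit, assuming numpy and a small set of illustrative (lp, lq, r) samples (in the actual scheme such samples would come from the RV's privacy level, service quality, and historical radius choices), the regression can be solved as follows.

```python
import numpy as np

# Illustrative (lp, lq, r) samples; not taken from the paper.
lp = np.array([2.0, 3.0, 4.0, 5.0])          # privacy protection levels
lq = np.array([4.0, 3.0, 2.0, 1.5])          # service quality levels
r = np.array([150.0, 220.0, 310.0, 380.0])   # observed radii in meters

# Design matrix with columns [1, lp, 1/lq], matching Eq. (7).
L = np.column_stack([np.ones_like(lp), lp, 1.0 / lq])

# Least-squares solution alpha = (L^T L)^{-1} L^T Y, cf. Eqs. (10)-(11).
alpha, *_ = np.linalg.lstsq(L, r, rcond=None)
a0, a1, a2 = alpha

def predict_radius(lp_new: float, lq_new: float) -> float:
    """Predict the search radius for a given privacy level and service quality."""
    return a0 + a1 * lp_new + a2 / lq_new

print(predict_radius(3.5, 2.5))
```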

Algorithm 1 Detect false location information
Input: li, Θ, r, vij, and d
Output: result
1: Initialize result = 0
2: for each Vi ∈ Θ do
3:   if CTi ≥ δ then
4:     if d > r and vij = 0 then
5:       result = 0 // location information li is false
6:     end if
7:     if d ≤ r and vij = 1 then
8:       result = 1 // li is true
9:     end if
10:  end if
11:  if CTi < δ then
12:    result = 0 // li is false
13:  end if
14: end for

The architecture of trusted observation within variable radius r is shown in Fig. 4, and the process is invoked under the condition CTi ≥ δ. As shown in Fig. 5, this process is executed in four steps.


Fig. 4. Architecture of trusted observation within variable radius r.


Fig. 5. Execution strategies of trusted observation within variable radius r.

Step 1. CV→AN: M[i] = Epk( PIDi || li || Sigsi (li) )

A CV (e.g., Vi) sends the anonymizer (abbreviated as AN) a message M[i] to obtain a judgment value, where li is Vi's location information, pk is the anonymizer's public key, si is Vi's private key, and PIDi is the pseudonym of Vi.

Step 2. AN→CV: QH = Epi(ji || r)

AN→OVs: VNs = Epj(PIDi || li || Sigsk(li) || ttli || r)

The anonymizer decrypts M[i] with its private key sk and verifies the signature with Vi's public key. If the verification is valid, the anonymizer sends the judgment value ji to Vi.

• For CTi < δ or STi < δ, ji = 0. In this case, the anonymizer rejects Vi’s location information.

• For CTi ≥ δ, ji = 1. In this case, the anonymizer broadcasts verification requirements VNs to the adjacent vehicles (OVs) centered on Vi within the variable radius r, where ttli is the valid time. All OVs must return observations within the valid time. r can be dynamically adjusted in the light of vehicular regional activity. If there are more vehicles in a vehicular region, the radius r is assigned a smaller value and vice versa.

Step 3. OVs→AN: VO[j] = Epk(PIDj || ttli || r)

An OV (e.g., Vj) returns an observation result VO[j] = Epk(PIDj || ttli || r) to the anonymizer. The anonymizer receives the observation VO[j] and estimates the distance from the strength of the received signal; the estimated distance is denoted as d. If d ≤ r, the OV is within the range; d > r means it is not. The verification result of the anonymizer is denoted as vij. For d > r, vij = 0, meaning li is false location information; for d ≤ r, vij = 1, meaning li is real location information.

Step 4. AN→CV: FD = Esi(ui)

If vij = 1, the anonymizer gives the final notification FD. Here, ui is the final decision of the anonymizer and is given by (12).

If ui = 1, Vi gets permission from the anonymizer to participate in the cloaking area construction, and vice versa. Note that ui is based on the majority rule. Let n denote the number of OVs and ej the trusted weight of Vj. If OTj ≥ δ, ej = 1; otherwise, if OTj < δ, ej = 0.

\(u_{i}=\begin{cases}1, & \sum_{j=1}^{n} e_{j} \cdot v_{ij}>\frac{n}{2}, \\ 0, & \sum_{j=1}^{n} e_{j} \cdot v_{ij} \leq \frac{n}{2}.\end{cases}\)       (12)

Based on the tuple (r, d, vij), Algorithm 1 (listed above) detects whether li is false, where d is the estimated distance between Vi and Vj, vij is the verification result of Vj, and Θ denotes all the vehicles within the variable radius r.
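This verification step can be illustrated with the short Python sketch below, which combines the per-OV distance check of Algorithm 1 with the trust-weighted majority rule of (12). The data layout and the example distances are assumptions, and the signal-strength-based distance estimation is abstracted into a precomputed d for each observation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    d: float    # estimated CV-OV distance (e.g., derived from received signal strength)
    ot: float   # observer trust value OT of the reporting OV

def verify_location(observations: List[Observation], r: float, delta: float) -> int:
    """Return u_i per Eq. (12): 1 if the trust-weighted majority confirms the CV's
    claimed location, 0 otherwise."""
    n = len(observations)
    votes = 0
    for obs in observations:
        e = 1 if obs.ot >= delta else 0   # trusted weight e_j
        v = 1 if obs.d <= r else 0        # per-OV distance check from Algorithm 1
        votes += e * v
    return 1 if votes > n / 2 else 0

obs = [Observation(d=80, ot=0.9), Observation(d=95, ot=0.7), Observation(d=300, ot=0.3)]
print(verify_location(obs, r=120, delta=0.5))   # 1: two trusted OVs confirm d <= r
```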


3.4 Trusted Cloaking Area Construction

To protect RV’s real location, the goal is to construct a trusted cloaking area with CVs. Firstly, a candidate set of CVs should be selected through the trust relationship between RV and CVs. We estimate the trust relationship by combining the trust evaluation mechanism and trusted observation with variable radius r method. If the CV’s trust value is higher than δ and meets the RV’s privacy requirements, it means that the trust relationship between CV and RV is good and the CV could be selected for constructing a cloaking area. The process of trusted cloaking area construction is shown in Fig. 6.


Fig. 6. Process of the trusted cloaking area construction.

Here, Ω denotes the candidate set of CVs. According to the trust values, the anonymizer selects k-1 honest CVs from Ω within the variable radius r. kRV denotes the anonymity degree of the RV, and Ѱ indicates the set of CVs that can be used to construct the cloaking area. dmin and dmax respectively denote the minimum and maximum range that needs to be reached in the cloaking area construction. To meet personalized requirements, the RV sets dmin and dmax based on practical experience. If the variable radius r is out of this range, r should be adjusted according to the following rules.

Rule 1: For r ≤ dmin, the distance between the CVs and the RV is small, which may lead to privacy leakage. Therefore, r should be recalculated as r = r + (dmax - dmin) / 2.

Rule 2: For r > dmax, the distance between CVs and RV is large. It may result in inaccurate service, so r should be set as r = dmax.

Finally, the anonymizer determines whether the cloaking area is successfully constructed by checking the following constraint condition.

kRV ≥ k -1 && dmin ≤ r ≤ dmax       (13)

If (13) holds, the anonymizer constructs the trusted cloaking area successfully. Otherwise, the anonymizer adds new CVs according to the rule r = r + (k - 1 - g) min(|∆1|, |∆2|), where g denotes the number of new CVs, ∆1 = r - dmax, and ∆2 = r - dmin. The construction of the trusted cloaking area succeeds once kRV + g ≥ k - 1 is satisfied.
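The radius adjustment rules and the constraint check of (13) can be summarized in the following Python sketch; the numeric values are illustrative only, not taken from the paper.

```python
def adjust_radius(r: float, d_min: float, d_max: float) -> float:
    """Apply Rules 1 and 2 to keep the search radius within [d_min, d_max]."""
    if r <= d_min:                       # Rule 1: CVs too close, risk of privacy leakage
        return r + (d_max - d_min) / 2
    if r > d_max:                        # Rule 2: CVs too far, risk of inaccurate service
        return d_max
    return r

def expand_radius(r: float, k: int, g: int, d_min: float, d_max: float) -> float:
    """Expansion rule when (13) fails: r = r + (k - 1 - g) * min(|r - d_max|, |r - d_min|)."""
    return r + (k - 1 - g) * min(abs(r - d_max), abs(r - d_min))

def area_constructed(k_rv: int, k: int, r: float, d_min: float, d_max: float) -> bool:
    """Constraint (13): enough honest CVs and a radius within the required range."""
    return k_rv >= k - 1 and d_min <= r <= d_max

r = adjust_radius(90.0, d_min=100.0, d_max=300.0)       # Rule 1 pushes r to 190.0
print(r, area_constructed(k_rv=6, k=7, r=r, d_min=100.0, d_max=300.0))   # 190.0 True
```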

3.5 Execution Strategies

By introducing the TCA scheme, we can protect the location privacy of vehicles against location cheating behaviors launched by dishonest CVs with six execution strategies, as shown in Fig. 7. The execution strategies can be performed as follows.


Fig. 7. Execution strategies of the TCA scheme.

Step 1. RV→AN: R1 = Epk(IDRSU || PID || l || Sigsv(l) || k)

The RV sends a request to the trusted third-party anonymizer (abbreviated as AN), where pk is the anonymizer's public key, IDRSU identifies the RSU closest to the RV, and sv, l, and k are the vehicle's private key, location, and anonymity degree, respectively. To ensure the security of LBS acquisition, a vehicle V needs to register first and apply an irreversible hash function to convert its identity ID into a pseudonym PID = H(ID).

Step 2. AN→RSU: R2 = EPKRSU(PID || l || Sigsv(l) || k)

The anonymizer uses its private key sk to decrypt the request message and verifies whether Sigsv(l) is a valid signature on l. If not, the anonymizer rejects the request message. Otherwise, the anonymizer sends the trust value request message R2 to the RSU, where PKRSU is the public key of the RSU.

Step 3. RSU→AN: Q1 = Epk(PID || TD || Ω || per || k)

The RSU sends the aggregated trust-related data to the anonymizer. Here, Ω is the candidate set of CVs, and TD represents the trust value of the RV estimated by the RSU. TD ≥ δ means that permission is granted, while TD < δ or per = 0 means that permission is rejected, where 0 < δ < 1 is the threshold of trust value.

Step 4. AN→RV: Q2 = Epv(PID || per)

RV→AN: R3 = {PID, Epk (Epl(rc))}

If per = 1, the anonymizer sends the response message Q2 to the RV, where pv is the public key of the RV. Then the RV sends the message R3 to the anonymizer. Here, rc is the content requested from the LSP, and pl is the LSP's public key.

Step 5. AN→LSP: R4 = {PID, Epl(rc), Ѱ}

The anonymizer sends the message R4 to the LSP, where Ѱ is the set of CVs for constructing the cloaking area. The CVs in Ѱ take the place of the RV during LBS requests.

Step 6. LSP→AN: Q3 = {PID, Epk(sr, Sigsl(sr))}

AN→RV: Q4 = {PID, Epv(sr, Sigsk(sr))}

The LSP obtains rc by decrypting R4 with its private key sl and sends the service result sr to the anonymizer. Then, the anonymizer returns Q4 to the RV. Finally, the RV obtains sr with its private key.

4. Security Analysis

The security analysis focuses on how the TCA scheme protects RV’s location privacy against dishonest CVs, malicious LSP, and transmission leakage.

4.1 Privacy against Dishonest CVs

Challenge 1: Dishonest CVs may enable the real location of the RV to be guessed by sending fake location information during the process of cloaking area construction.

Lemma 1: TCA is resistant to threats from dishonest CVs.

Proof: In the TCA scheme, only CVs with higher trust values can be selected to construct the cloaking area, whereas dishonest CVs are filtered out. Thus, the TCA scheme prevents the real location of the RV from being guessed.

4.2 Privacy against Malicious LSP

Challenge 2: Malicious LSP may reveal RV’s real location by inferring sensitive contents from the request information.

Lemma 2: TCA is resistant to attacks from malicious LSP.

Proof: In the TCA scheme, the anonymizer transfers the trusted cloaking area containing at least k-1 CVs to the LSP instead of the RV's real location. Hence, the LSP could guess the RV's true location only with a probability of 1/k. The LSP can obtain and decrypt the requested content rc during an LBS request. However, due to the RV's dynamically changing pseudonyms, rc cannot be linked to a specific vehicle. To invert a hash function with output length l, the time complexity is O(2^(l-1)). Therefore, the LSP faces a computational challenge: it would have to calculate ID = H^(-1)(PID) to obtain the identity information of the vehicle. That is, it is hard for the LSP to obtain the correct identity information.
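As a small illustration of the pseudonym argument, the sketch below generates PID = H(ID) with SHA-256; the concrete hash function is an assumption, since the paper only requires H to be irreversible.

```python
import hashlib

def pseudonym(vehicle_id: str) -> str:
    """PID = H(ID): an irreversible pseudonym for a registered vehicle.
    SHA-256 is assumed here for illustration; the paper does not fix H."""
    return hashlib.sha256(vehicle_id.encode("utf-8")).hexdigest()

pid = pseudonym("VIN-1HGCM82633A004352")   # hypothetical vehicle identifier
print(pid)
# Recovering ID from pid requires a preimage attack on a 256-bit hash,
# i.e., on the order of 2^255 evaluations, matching the O(2^(l-1)) argument above.
```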

4.3 Privacy against Transmission Leakage

Challenge 3: An attacker may obtain RV’s request content and LSP’s results in the process of transmission.

Lemma 3: TCA is resistant to transmission leakage.

Proof: When the LBS request is sent by the RV, the requested content is encrypted with the LSP's public key, so no one except the LSP can decrypt it. When the service result sr is provided by the LSP, it is signed with the private key of the LSP and encrypted with the public key of the anonymizer. Therefore, attackers cannot get any useful information from the RV during transmission.

5. Performance Analysis

5.1 Simulation Setup

To evaluate the effectiveness of the TCA scheme, a performance simulation was conducted. The simulation experiments were implemented in Python on the PyCharm platform and performed on a machine with an Intel Core i5 3.4 GHz CPU, 8 GB RAM, and Windows 10. The simulation elements are shown in Table 1.

Table 1. Simulation elements


The simulation was performed in a cycle-based manner. In each simulation cycle, one vehicle was selected randomly as the RV, and then k-1 CVs were selected to participate in the location privacy protection. Honest CVs always report their real locations, while dishonest CVs submit fake locations.

5.2 Simulation Result

In this section, we analyze the experimental results of the TCA scheme in terms of malicious responses, detection and false alarm rates, location privacy leakage, and overload. To analyze its effectiveness, we compare the TCA scheme with the traditional collaborative k-anonymity (C-k) construction scheme without a trust mechanism.

5.2.1 Malicious Response

Dishonest CVs forge location information during the cloaking area construction in every cycle. This may result in a significant number of malicious responses that waste network resources. Thus, it is necessary to analyze whether the TCA scheme has an inhibitory effect on malicious responses. The comparison between TCA and C-k scheme without trust mechanism is shown in Fig. 8. In this simulation, the trust value threshold δ is set as 0.2, 0.5, and 0.8, respectively.


Fig. 8. Comparison of suppressing malicious responses between TCA scheme and C-k scheme.

It is obvious that the malicious responses of both TCA and C-k increase linearly. With the increase of δ, the malicious response decreases sharply. Even when δ = 0.2, the malicious response of the TCA scheme is almost one-fourth of that of the C-k scheme, and when δ = 0.5, it is only 22% of the C-k scheme. At δ = 0.8, the performance of TCA is the best, and the effect of malicious responses is negligible. However, as δ grows, the requirement for vehicle honesty is higher, and the construction of a cloaking area becomes relatively more difficult. Even so, as shown in Fig. 8, TCA suppresses malicious responses better than the C-k scheme, which indicates that the trust mechanism is feasible and important for constructing a cloaking area.

5.2.2 Detection and False Alarm Rate

Generally speaking, a proportion of dishonest CVs below 10% poses a limited threat, while over 50% would collapse the entire network. Therefore, the percentage of dishonest CVs is set between 10% and 50% in the simulations.

As shown in Fig. 9, as the percentage of dishonest CVs increases, the detection rate and false alarm rate of C-k increase and decrease linearly, respectively. However, the TCA scheme maintains a high detection rate and a low false alarm rate, and both change relatively slowly. Thus, the trust mechanism plays a major role in improving the performance of the TCA scheme.


Fig. 9. Comparison of detection and false alarm rates between TCA and the traditional C-k scheme under different percentages of dishonest CVs. (a), (b), and (c) show the detection rate with δ = 0.2, 0.5, and 0.8; (d), (e), and (f) show the false alarm rate with δ = 0.2, 0.5, and 0.8.

5.2.3 Location Privacy Leakage

We also analyze the effect of the TCA scheme on privacy preservation from the perspective of location privacy leakage probability. In the simulation, the percentage of dishonest CVs is set at 30%. The ELPP scheme [19] is the first to provide enhanced location privacy for the LBS environment, employing an entity called a function generator to protect location privacy. The SLP scheme [20] selects a trustworthy next-hop candidate to assist in constructing the cloaking area without a trusted third party. The comparison among our TCA scheme, the ELPP scheme, and the SLP scheme is shown in Fig. 10. With the trust mechanism, our TCA scheme achieves the lowest probability of location privacy leakage. As the number of simulations increases, the leakage probability of the ELPP scheme remains stable at around 0.75, that of the SLP scheme at around 0.62, and that of the TCA scheme at around 0.24.


Fig. 10. Comparison of location privacy leakage among the TCA scheme, ELPP scheme and SLP scheme.

5.2.4 Overload Analysis

As we know, the anonymizer is an important component of the trusted third-party structure. Once the anonymizer is unable to handle a large amount of data, the system will be blocked. Therefore, we analyze the overload of the anonymizer when executing the TCA scheme in terms of communication and processing time costs. The anonymity degree k is varied from 10 to 80 in the comparison among the TCA, SLP, and ELPP schemes.

As shown in Fig. 11, the processing time and communication cost of the ELPP scheme and SLP scheme increase sharply with k, while both of them increase more smoothly in the TCA scheme. When k = 70, the processing time of ELPP and SLP scheme is about 5 times and 4 times that of TCA scheme, respectively, and the communication cost of ELPP and SLP scheme is about 4 times and 3 times that of TCA scheme, respectively. The reason for this result is that the anonymizer in the ELPP scheme is not only used to build the cloaking area but also to filter query results. In the TCA scheme, the anonymizer only needs to perform several matching and comparison operations during the cloaking area construction.


Fig. 11. Comparison of overload among the TCA scheme, SLP scheme and ELPP scheme.

In addition, the analysis of how the variable radius r and the anonymity degree k affect the computation time in the TCA scheme is shown in Fig. 12. The computation time increases with r: the larger the variable radius r is, the larger the search range and the more processing time is required. However, it can also be seen from Fig. 12 that when the anonymity degree k is fixed, the influence of the variable radius r on the computation time is very limited, while the influence of changes in k is much larger. When k is set to a larger value, such as 80, more CVs will be searched within the same r, and the processing time will be higher.


Fig. 12. Influence of r and k on computation cost. (a) shows the effect of r on the computation time for three values of k, and (b) shows the effect of k for three values of r.

6. Discussion

Noting that how to efficiently defend against location cheating behaviors launched by dishonest CVs has become a challenging issue in achieving better spatial cloaking, we propose an edge-assisted Trusted Collaborative Anonymity construction scheme called TCA to protect the location privacy of vehicles by introducing a trust mechanism. Following the design idea of trusted observations within a variable radius r, the trust value is not only used to verify false location information from dishonest CVs but also utilized to select honest CVs to construct the trusted cloaking area by restricting r's search range. To make our major contribution clear, we take online car-hailing as an example to discuss a possible application of our TCA scheme.

In recent years, online car-hailing has become a new trend in the transportation field, combining new concepts with a traditional industry. It merges information technology with taxi services and has changed the traditional way people take taxis. Users can book vehicles online anytime and anywhere, which brings great convenience to people's travel.

To guarantee the safe and normal operation of an online car-hailing system, it is assumed that the system meets the following two conditions:

a) The service provider is reliable and will not disclose the existing information about users and drivers.

b) Drivers are always honest and provide accurate location information for users.

Obviously, the operation of an online car-hailing service relies on reliable location information, which requires users to provide their real locations. However, as shown in Fig. 13, a user may, for some purpose, submit a location that differs from the actual one. When a user forges a location without verification, it can disturb the normal operation of the car-hailing service and waste resources. At present, it is very simple for users to forge their current location with smartphone applications such as Locaholic or FakeLocation. Since the car-hailing system cannot tell whether a user is running such a "fake location" application, false locations are difficult to detect.


Fig. 13. A user fakes a location, leading the car to drive to the false location.

Fortunately, the TCA scheme can be used to identify false locations in an online car-hailing system. Each user can play the role of a requester, who sends location information to request a specific service, or an observer, who returns observations related to verification results. To distinguish malicious from honest users, the trust mechanism can be used to evaluate users' trust values. If malicious users repeatedly report false location information or wrong observations, they will receive lower trust values than honest users. On this basis, the requester trust value (CT) and the observer trust value (OT) can be computed respectively, and the comprehensive trust value (ST) is obtained from the CT and OT values. Then, the real location can be verified according to the three trust judgments J1, J2, and J3.

7. Conclusion

In this paper, we propose an edge-assisted trusted collaborative anonymity scheme called TCA to protect location privacy in VANETs by introducing a trust mechanism. The TCA scheme is designed in three successive modules: edge-assisted trust value evaluation, trusted observation within variable radius r, and trusted cloaking area construction. In the TCA scheme, the trust value is not only used to select honest CVs to construct a cloaking area by restricting r's search range but also utilized to verify false location information from dishonest CVs. Assisted by edge computing, RSUs playing the role of edge nodes can rapidly evaluate trust values by means of aggregated trust-related data. The performance analysis indicates that our TCA scheme effectively restrains dishonest CVs and requires only limited processing time and communication cost. Meanwhile, the simulation results show that our TCA scheme reduces location privacy leakage more effectively than the traditional ELPP and SLP schemes. For future work, we plan to investigate artificial intelligence in driverless vehicles and study privacy protection from the perspective of driverless vehicles. In addition, edge intelligence, the combination of edge computing and artificial intelligence, is another topic that deserves attention in the future.

References

  1. B. Ghosh, M. T. Asif, J. Dauwels, U. Fastenrath, and H. Guo, "Dynamic prediction of the incident duration using adaptive feature set," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 11, pp. 4019-4031, 2019. https://doi.org/10.1109/tits.2018.2878637
  2. C. Tan, S. Bei, Z. Jing, and N. Xiong, "An Atomic Cross-Chain Swap-Based Management System in Vehicular Ad Hoc Networks," Wireless Communications and Mobile Computing, vol. 2021, pp. 1-14, 2021.
  3. S. Chen, A. Fu, J. Shen, S. Yu, H. Wang, and H. Sun, "RNN-DP: A new differential privacy scheme base on Recurrent Neural Network for Dynamic trajectory privacy protection," Journal of Network and Computer Applications, vol. 168, pp. 102736, 2020.
  4. P. Yang, N. Xiong, and J. Ren, "Data security and privacy protection for cloud storage: a survey," IEEE Access, vol. 8, pp. 131723-131740, 2020. https://doi.org/10.1109/access.2020.3009876
  5. J. Cui, J. Wen, S. Han, and H. Zhong, "Efficient privacy-preserving scheme for real-time location data in vehicular ad-hoc network," IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3491-3498, 2018. https://doi.org/10.1109/jiot.2018.2797206
  6. X. Zhang, X. Gui, and Z. Wu, "Privacy preservation for location-based services: a Survey," Journal of Software, vol. 9, no. 26, pp. 2373-2395, 2015.
  7. S. Dietzel, J. Petit, G. Heijenk, and F. Kargl, "Graph-based metrics for insider attack detection in VANET multihop data dissemination protocols," IEEE Trans. Veh. Technol, vol. 62, no. 4, pp. 1505-1518, 2013. https://doi.org/10.1109/TVT.2012.2236117
  8. X. Li, M. Mao, H. Liu, J. Ma, and K. Li, "An incentive mechanism for K-anonymity in LBS privacy protection based on credit mechanism," Soft Computing, vol. 21, no. 1, pp. 3907-3917, 2017. https://doi.org/10.1007/s00500-016-2040-2
  9. S. Zhang, G. Wang, M. Z. A. Bhuiyan, and Q. Liu, "A dual privacy preserving scheme in continuous location-based services," IEEE Internet of Things Journal, vol. 5, no. 5, pp. 4191-4200, Oct. 2018. https://doi.org/10.1109/jiot.2018.2842470
  10. Y. Long, Y. Chen, W. Ren, H. Dou, and N. Xiong, "DePET: A decentralized privacy-preserving energy trading scheme for vehicular energy network via blockchain and K-anonymity," IEEE Access, vol. 8, pp. 192587-192596, 2020. https://doi.org/10.1109/access.2020.3030241
  11. J. Feng, N. Liu, J. Cao, Y. Zhang, and G. Lu, "Securing traffic-related messages exchange against inside-and-outside collusive attack in vehicular networks," IEEE Internet of Things Journal, vol. 6, no. 6, pp. 9979-9992, 2019. https://doi.org/10.1109/jiot.2019.2933632
  12. M. Gruteser and D. Grunwald, "Anonymous usage of location-based services through spatial and temporal cloaking," in Proc. of ACM Mobisys, San Francisco, CA, USA, pp. 31-42, May. 5-8, 2003.
  13. H. Yu, G. Li, and J. Wu, "A location-based path privacy protection scheme in Internet of Vehicles," in Proc. of INFOCOM 2020, Toronto, ON, Canada, pp. 665-670, 2020.
  14. H. Lim and T. Chung, "A Survey on Privacy Problems and Solutions for VANET Based on Network Model," in Proc. of International Conference on Algorithms and Architectures for Parallel Processing, pp. 74-88, 2011.
  15. B. Abdelwahab, S. Sidi-Mohammed, and M. Samira, "VLPZ: The Vehicular Location Privacy Zone," Procedia Computer Science, vol. 83, pp. 369-376, 2016. https://doi.org/10.1016/j.procs.2016.04.198
  16. A. Pingley, N. Zhang, X. Fu, H. Choi, S. Subramaniam, and W. Zhao, "Protection of query privacy for continuous location-based services," in Proc. of IEEE INFOCOM 2011, Shanghai, China, pp. 1710-1718, Apr. 10-15, 2011.
  17. T. Dargahi, M. Ambrosin, M. Conti, and N. Asokan, "ABAKA: A novel attribute-based k-anonymous collaborative solution for LBSs," Computer Communications, vol. 85, pp. 1-13, 2016. https://doi.org/10.1016/j.comcom.2016.03.002
  18. Y. Zheng, J. Luo, and T. Zhong, "Service recommendation middleware based on location privacy protection in VANET," IEEE Access, vol. 8, pp. 12768-12783, 2020. https://doi.org/10.1109/access.2020.2964422
  19. T. Peng, Q. Liu, and G. Wang, "Enhanced location privacy preserving scheme in location-based services," IEEE Systems Journal, vol. 11, no. 1, pp. 219-230, 2017. https://doi.org/10.1109/JSYST.2014.2354235
  20. B. Ying and A. Nayak, "A distributed social-aware location protection method in untrusted vehicular social networks," IEEE Transactions on Vehicular Technology, vol. 68, pp. 6114-6124, 2019. https://doi.org/10.1109/tvt.2019.2906819
  21. Y. Zhang, W. Tong, and S. Zhong, "On designing satisfaction-ratio-aware truthful incentive mechanisms for k-anonymity location privacy," IEEE Transactions on Information Forensics and Security, vol. 11, no. 11, pp. 2528-2541, 2016. https://doi.org/10.1109/TIFS.2016.2587241
  22. Y. Wang, M. Li, S. Luo, Y. Xin, and Y. Yang, "LRM: A Location Recombination Mechanism for Achieving Trajectory k-Anonymity Privacy Protection," IEEE Access, vol.7, pp. 182886-182905, 2019. https://doi.org/10.1109/access.2019.2960008
  23. R. Muhlbauer and J. H. Kleinschmidt, "Bring your own reputation: a feasible trust system for vehicular ad hoc networks," Journal of Sensor and Actuator Networks, vol. 7, no. 3, pp. 1-23, Sept. 2018. https://doi.org/10.3390/jsan7010001
  24. C. Lai, K. Zhang, N. Cheng, H. Li, and X. Shen, "SIRC: A secure incentive scheme for reliable cooperative downloading in highway VANETs," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 6, pp. 1559-1574, 2017. https://doi.org/10.1109/TITS.2016.2612233
  25. J. Oluoch, "A distributed reputation scheme for situation awareness in vehicular ad hoc networks (VANETs)," in Proc. of CogSIMA 2016, San Diego, CA, USA, Mar.21-25, pp. 1-5, 2016.
  26. Z. Yang, K. Yang, L. Lei, K. Zheng, and V. Leung, "Blockchain-based decentralized trust management in vehicular networks," IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1495-1505, 2019. https://doi.org/10.1109/jiot.2018.2836144
  27. A. Josang and R. Ismail, "The beta reputation system," in Proc. of the 15th Bled Electronic Commence Conference, pp.1-14, Jun. 17-19 2002.