Distributed Target Localization with Inaccurate Collaborative Sensors in Multipath Environments

  • Feng, Yuan (China Research and Development Academy of Machinery Equipment) ;
  • Yan, Qinsiwei (School of Information and Electronics, Beijing Institute of Technology) ;
  • Tseng, Po-Hsuan (Department of Electronic Engineering, National Taipei University of Technology) ;
  • Hao, Ganlin (School of Information and Electronics, Beijing Institute of Technology) ;
  • Wu, Nan (School of Information and Electronics, Beijing Institute of Technology)
  • Received : 2018.10.11
  • Accepted : 2018.11.21
  • Published : 2019.05.31

Abstract

Location-aware networks are of great importance for both civilian life and military applications. Methods based on line-of-sight (LOS) measurements suffer severe performance loss in harsh environments such as indoor scenarios, where sensors can receive both LOS and non-line-of-sight (NLOS) measurements. In this paper, we propose a data association (DA) process based on the expectation maximization (EM) algorithm, which enables us to exploit multipath components (MPCs). By treating the mapping relationship between the measurements and scatterers as a latent variable, the coefficients of the Gaussian mixture model are estimated. Moreover, considering the misalignment of sensor positions, we propose a space-alternating generalized expectation maximization (SAGE)-based algorithm to jointly update the target location and the sensor position information. A two-dimensional (2-D) circularly symmetric Gaussian distribution is employed to approximate the probability density function of the sensor's position uncertainty via minimization of the Kullback-Leibler divergence (KLD), which enables us to compute the expectation step with low computational complexity. Furthermore, a distributed implementation is derived based on the average consensus method to improve the scalability of the proposed algorithm. Simulation results demonstrate that the proposed centralized and distributed algorithms can perform close to the Monte Carlo-based method with much lower communication overhead and computational complexity.

Keywords

1. Introduction

Wireless sensor networks (WSNs) consist of a large number of power-limited sensors that communicate with their neighbors via wireless transmission [1]. They can collaboratively estimate the position and state of a target despite limited power and computational capabilities. WSN localization has been employed in many applications such as habitat monitoring, customer tracking, and control of appliances [2]-[5].

For line-of-sight (LOS) propagation scenarios, existing algorithms for cooperative localization [6], object tracking [7], [8] and simultaneous localization and mapping (SLAM) [9] have been well studied. However, for applications in commercial shopping malls, urban canyons and jungles with various obstacles, these algorithms designed for LOS signals suffer severe performance loss due to the multipath components (MPCs) in the received signal, which are a mixture of LOS and non-line-of-sight (NLOS) components [10]-[12]. Therefore, how to deal with MPCs in WSN localization is an interesting and challenging problem. An iterative method is proposed in [10] to estimate the MPCs' ranging probability density function (pdf) based on a Gaussian mixture model. However, the assumption that all received measurements are a mixture of both LOS and NLOS paths may lead to performance loss, especially when sensors receive only one kind of measurement.

A method based on detection and elimination of NLOS components in MPCs is proposed in [13]. Moreover, [14] and [15] use plentiful LOS components to replace NLOS measurements. However, the performance of these methods becomes very poor when sensors cannot receive enough LOS components. In [16], it is proved that single reflections of NLOS components contain additional position information which can effectively improve the localization accuracy. For convenience, we use NLOS to refer to single reflections in the remainder of this paper.

Cooperative localization algorithms based on TOA measurements have been studied in [17], [18], in which NLOS measurements are also taken into consideration. Given the layout map, a TOA-based localization algorithm that utilizes NLOS measurements is proposed in [19]. Although superior performance can be observed, the requirement of detailed map information is impractical. To solve this problem, methods based on the number of main scatterers and their heading angles are proposed in [20]-[23], which do not need layout information. Meanwhile, considering that the sensors and the target may be non-cooperative, an RF localization method is proposed in [22], using both time-difference-of-arrival (TDOA) and angle-of-arrival (AOA) measurements, which greatly improves the availability of the localization system. Nevertheless, this method relaxes the layout constraint through a strong assumption that each sensor only receives one NLOS path with known data association (DA) information, which may perform poorly in complex multipath environments. Moreover, the above studies do not consider the misalignment or movement of sensors in WSNs, which may also impact the localization performance [24]-[26]. In this paper, we first propose a method to solve the DA problem given the number of main scatterers and their heading angles. Then we extend our algorithm by considering the position uncertainty of the sensors.

Practically, sensors are often randomly distributed with limited power. In a centralized implementation, the raw measurements obtained by the sensors have to be transmitted to a data fusion center (FC), which is probably far away. The stability of such centralized algorithms cannot be guaranteed. Alternatively, a consensus-based [27]-[29] distributed implementation without an FC is proposed in this paper.

In this paper, we focus on a 2-D target localization problem based on TDOA measurements with inaccurate collaborative sensors in a multipath environment. An expectation maximization (EM)-based DA process is proposed based on the Gaussian mixture model. We then propose a centralized space-alternating generalized expectation maximization (SAGE)-based [31] algorithm to locate the target and remove the effects of inaccurate sensors [32], in which the minimization of the Kullback-Leibler divergence and Taylor expansions are employed to reduce the complexity and derive a closed-form expression in the E-step. Finally, a consensus-based distributed implementation of the proposed algorithm is given. The main contributions of this paper can be summarized as follows:

• We propose a novel EM-based DA process using a Gaussian mixture model in multipath environments based on the TDOA and AOA measurements. Therefore, both LOS and NLOS components can be utilized in target localization. 

• A centralized SAGE-based algorithm is proposed to jointly estimate the target position and update the position information of inaccurate sensors. The distribution of the sensor uncertainty is approximated by a 2-D circularly symmetric Gaussian distribution via KLD minimization. Taylor expansion is employed to linearize the nonlinear confluent hypergeometric function. Accordingly, a closed-form expression can be derived in the E-step, and the complexity of the proposed algorithm can be significantly reduced.

• To further improve the scalability of target localization in complex scenarios, a distributed implementation of the proposed algorithm is given based on average consensus.

The remainder of this paper is organized as follows. Section II introduces the system model and the problem at hand. Section III presents the proposed EM-based DA method on a Gaussian mixture model. Section IV elaborates the centralized algorithm that jointly handles localization and sensor position uncertainty, and derives the distributed implementation of the proposed algorithm based on average consensus. Comprehensive simulations are shown in Section V. Finally, concluding remarks are made in Section VI.

2. System Model

We consider 2-D target localization in a multipath environment, as shown in Fig. 1. Assume that the number of scatterers L and their orientations are known. There are a total of N sensors, each of which may receive signals from the target via LOS and/or NLOS paths. The number of signal paths (i.e., measurements) received by sensor i (i.e., \(S_i\)) is denoted as \(M_i\). Using ultra-wideband technologies, these signals can be distinguished. We define \(i \in \mathcal{N}_{\mathrm{LOS}}\) if \(S_i\) receives a signal from the LOS path; otherwise, \(i \in \mathcal{N}_{\backslash \mathrm{LOS}}\). We denote \(\mathcal{N}=\{1,2, \ldots, N\}\) as the set of sensor indexes, where \(\mathcal{N}=\mathcal{N}_{\mathrm{LOS}} \cup \mathcal{N}_{\backslash \mathrm{LOS}}\).

Since the target is non-cooperative, all the sensors have to resort to TDOA. Without loss of generality, the first measurement of sensor 1, i.e., \(S_1\), is used as the reference. Then, the j-th TDOA measurement received by the i-th sensor, \(\Delta \hat{d}_{i, j}\), is given as follows

\(\Delta \hat{d}_{i, j}=\mathrm{g}\left(\theta_{i, j}, \gamma_{k}\right)^{T}\left(\mathbf{q}-\overline{\mathbf{p}}_{i}\right)-\mathrm{g}\left(\theta_{1,1}, \gamma_{1,1}^{\prime}\right)^{T}\left(\mathbf{q}-\mathbf{p}_{1}\right)+\tilde{n}_{i, j},\)      (1)

where \(\mathbf{q}=\left[x_{q}\ y_{q}\right]^{T}\) is the ground-truth position of the target, \(\overline{\mathbf{p}}_{i}=\left[\bar{x}_{i}\ \bar{y}_{i}\right]^{T}\) is the recorded position of \(S_i\) for \(i \in \mathcal{N}\), \(\theta_{i, j}\) is the AOA of the j-th signal path at \(S_i\), and \(\gamma_{k}\) is the orientation of the k-th scatterer associated with the measurement \(\Delta \hat{d}_{i, j}\), for \(k \in\{1, \ldots, L\}\). \(\gamma_{1,1}^{\prime}\) is the orientation of the scatterer associated with sensor 1, which is assumed to be known, as explained and proved in [22]. \(\tilde{n}_{i, j}\) is the TDOA measurement noise, which is a zero-mean Gaussian random variable with variance \(\sigma_{i, j}^{2}\). The function \(\mathrm{g}\left(\theta_{i, j}, \gamma_{k}\right)\) is given by

\(\mathrm{g}\left(\theta_{i, j}, \gamma_{k}\right)=\frac{1}{\cos \left(\theta_{i, j}-\gamma_{k}\right)}\left[\begin{array}{c} \cos \gamma_{k} \\ \sin \gamma_{k} \end{array}\right],\)      (2)

which can be derived by projecting the distance along the direction between the target and \(S_i\). Besides, the AOA measurement \(\hat{\theta}_{i, j}\) is contaminated by noise and can be modeled as \(\hat{\theta}_{i, j}=\theta_{i, j}+\eta_{i, j}\), where \(\eta_{i, j}\) is uniformly distributed noise.
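To make the geometry of (1)-(2) concrete, a minimal Python sketch is given below; the target, sensor, and angle values are hypothetical placeholders, and only the noiseless TDOA term of (1) is evaluated:

```python
import numpy as np

def g(theta, gamma):
    """Projection vector g(theta, gamma) of Eq. (2); angles in radians."""
    return np.array([np.cos(gamma), np.sin(gamma)]) / np.cos(theta - gamma)

# Hypothetical geometry (placeholders, not the paper's simulation values)
q   = np.array([50.0, 40.0])     # target position
p_i = np.array([20.0, 20.0])     # position of sensor S_i
p_1 = np.array([80.0, 30.0])     # reference sensor S_1

theta_ij, gamma_k   = np.deg2rad(-35.0), np.deg2rad(0.0)    # AOA at S_i, orientation of scatterer k
theta_11, gamma_11p = np.deg2rad(168.0), np.deg2rad(90.0)   # reference-path AOA and orientation

# Noiseless range-difference term of Eq. (1); the Gaussian noise term would be added on top
delta_d = g(theta_ij, gamma_k) @ (q - p_i) - g(theta_11, gamma_11p) @ (q - p_1)
print(delta_d)
```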

 

Fig. 1. An example of the NLOS and LOS paths from the target to different sensors.

In (1), besides the target position q, the scatterer index k associated with the measurement \(\Delta \hat{d}_{i, j}\) cannot be directly observed. Moreover, the ground-truth position of sensor i, \(\mathbf{p}_{i}\), might not be available due to the misalignment of sensors. We model these two types of uncertainty in the following subsections.

2.1 The Uncertainty of Model Coefficients

Generally, the uncertainty of the model coefficients of the range measurement involves two questions: (i) Does the ranging measurement come from an LOS or an NLOS signal path? (ii) If an NLOS path exists, from which scatterer is the signal reflected? In the data association problem, we treat the scatterer index k in (1) as an unknown variable. Since the first measurement (j = 1) of each sensor may come from either the LOS path or an NLOS path via scatterer k, it has L + 1 submodels in the Gaussian mixture model; the remaining measurements have L submodels. We denote the number of submodels \(K_{i,j}\) for the j-th measurement of the i-th sensor as

\(K_{i, j}=\left\{\begin{array}{ll} L+1, & j=1 \\ L, & j \neq 1. \end{array}\right. \)     (3)

We define a latent variable \(\rho_{i, j, k}\) for the mapping information. When the j-th measurement comes from the k-th model, \(\rho_{i, j, k}=1\); otherwise, \(\rho_{i, j, k}=0\). Since the measurement \(\Delta \hat{d}_{i, j}\) in (1) follows a Gaussian distribution, without other information, the likelihood function of each sensor becomes a Gaussian mixture model [33]. Here, we define \(\boldsymbol{\rho}=\left\{\left\{\left\{\rho_{i, j, k}\right\}_{k=1}^{K_{i, j}}\right\}_{j=1}^{M_{i}}\right\}_{i=1}^{N}\) and \(\boldsymbol{\alpha}=\left\{\left\{\left\{\alpha_{i, j, k}\right\}_{k=1}^{K_{i, j}}\right\}_{j=1}^{M_{i}}\right\}_{i=1}^{N}\), where \(M_{i}\) is the number of measurements at the i-th sensor. Then we have

  \(p\left(\Delta \hat{d}_{i, j} | \mathbf{q}, \theta_{i, j}, \theta_{1,1}, \mathbf{\rho}, \boldsymbol{\alpha}\right)=\sum_{k=1}^{K_{i, j}}\left[\alpha_{i, j, k} \rho_{i, j, k} \Phi\left(\Delta \hat{d}_{i, j} | \mathbf{q}, \theta_{i, j}, \theta_{1,1}\right)\right]\)     (4)

where \(\alpha_{i, j, k}\) is the weight, with \(\alpha_{i, j, k} \geq 0\) and \(\sum_{k=1}^{K_{i, j}} \alpha_{i, j, k}=1\). In (4), the first path includes L + 1 possible modes, and the other paths include L possible modes at each sensor.

Moreover, we have

\(\Phi\left(\Delta \hat{d}_{i, j} | \mathbf{q}, \theta_{i, j}, \theta_{1,1}\right)=\frac{1}{\sqrt{2 \pi} \sigma} \exp \left(-\frac{\left(\Delta \hat{d}_{i, j}-\mu_{i, j}\right)^{2}}{2 \sigma^{2}}\right),\)        (5)

with

\(\mu_{i, j}=\mathrm{g}\left(\hat{\theta}_{i, j}, \gamma_{k}\right)^{T}\left(\mathbf{q}-\overline{\mathbf{p}}_{i}\right)-\mathrm{g}\left(\hat{\theta}_{1,1}, \gamma_{1,1}^{\prime}\right)^{T}\left(\mathbf{q}-\mathbf{p}_{1}\right).\)       (6)

Here, for j = 1, the (L + 1)-th submodel corresponds to the measurement coming from the LOS path. Therefore, in this case, we set \(\gamma_{L+1}=\theta_{i, 1}\). Based on the model in (4), the problems of detecting whether a measurement comes from the LOS or an NLOS path, and of determining from which scatterer the signal is reflected, are transformed into the estimation of the weights \(\alpha_{i, j, k}\) by introducing the latent variable \(\rho_{i, j, k}\).
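The mixture likelihood of (4)-(6) can be evaluated as in the following sketch (illustrative only; the weights, orientations and positions are made-up values, and a common noise standard deviation σ is assumed):

```python
import numpy as np

def g(theta, gamma):
    return np.array([np.cos(gamma), np.sin(gamma)]) / np.cos(theta - gamma)

def mixture_likelihood(dhat, q, p_i, p_1, theta_ij, theta_11, gamma_11p,
                       gammas, alphas, sigma):
    """Eq. (4): sum over the K_{i,j} submodels of alpha_k * Phi(dhat; mu_k, sigma)."""
    lik = 0.0
    for gamma_k, alpha_k in zip(gammas, alphas):
        # Eq. (6): submodel mean under the k-th scatterer hypothesis
        mu = g(theta_ij, gamma_k) @ (q - p_i) - g(theta_11, gamma_11p) @ (q - p_1)
        lik += alpha_k * np.exp(-(dhat - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return lik

q, p_i, p_1 = np.array([50., 40.]), np.array([20., 20.]), np.array([80., 30.])
gammas = np.deg2rad([0., 86., 150.])      # hypothetical scatterer orientations
alphas = np.array([0.5, 0.3, 0.2])        # mixture weights (sum to 1)
print(mixture_likelihood(5.0, q, p_i, p_1, np.deg2rad(-35.), np.deg2rad(168.),
                         np.deg2rad(90.), gammas, alphas, sigma=1.0))
```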

2.2 The Uncertainty of Sensor Positions

In practice, the positions of sensors could be inaccurate due to the misalignment of sensors. We denote the position uncertainty of the i-th sensor as \(\Delta \mathbf{p}_{i}=\left[\Delta x_{i}\ \Delta y_{i}\right]^{T}\). The ground-truth position of sensor i is \(\mathbf{p}_{i}=\left[x_{i}\ y_{i}\right]^{T}\), which can be written as \(\mathbf{p}_{i}=\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}\). Then, we have

\(\Delta \hat{d}_{i, j}=\mathrm{g}\left(\hat{\theta}_{i, j}, \gamma_{k}\right)^{T}\left(\mathbf{q}-\overline{\mathbf{p}}_{i}-\Delta \mathbf{p}_{i}\right)-\mathrm{g}\left(\hat{\theta}_{1,1}, \gamma_{1,1}^{\prime}\right)^{T}\left(\mathbf{q}-\mathbf{p}_{1}\right)+\tilde{n}_{i, j}.\)        (7)

The sensor's position uncertainty \(\Delta \mathbf{p}_{i}\) is modeled as a 2-D Gaussian distribution with zero mean and covariance matrix \(\boldsymbol{\Sigma}_{\Delta \mathbf{p}_{i}}\), i.e.,

\(p\left(\Delta \mathbf{p}_{i}\right)=\frac{1}{\sqrt{(2 \pi)^{2}}\left|\Sigma_{\Delta p_{i}}\right|^{\frac{1}{2}}} \exp \left\{-\frac{1}{2} \Delta \mathbf{p}_{i}^{\mathrm{T}} \Sigma_{\Delta \mathrm{p}_{i}}^{-1} \Delta \mathbf{p}_{i}\right\},\)         (8)

with a covariance matrix

\(\boldsymbol{\Sigma}_{\Delta p_{i}}=\left[\begin{array}{cc} \sigma_{\Delta x_{i}}^{2} & \omega \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}} \\ \omega \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}} & \sigma_{\Delta y_{i}}^{2} \end{array}\right],\)        (9)

where \(\sigma_{\Delta x_{i}}\) and \(\sigma_{\Delta y_{i}}\) are the standard deviations on the x-axis and y-axis, respectively, and ω is the correlation coefficient between the x-axis and y-axis.
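As an illustration of the uncertainty model in (8)-(9), the following sketch draws correlated 2-D position errors and forms the perturbed sensor positions \(\mathbf{p}_i=\overline{\mathbf{p}}_i+\Delta\mathbf{p}_i\); the standard deviations, correlation and recorded position are hypothetical:

```python
import numpy as np

sigma_x, sigma_y, omega = 1.0, 1.0, 0.3              # hypothetical deviations and correlation
cov = np.array([[sigma_x ** 2,               omega * sigma_x * sigma_y],
                [omega * sigma_x * sigma_y,  sigma_y ** 2]])            # Eq. (9)

rng = np.random.default_rng(0)
delta_p = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=1000)  # Eq. (8)

p_bar = np.array([20.0, 20.0])            # recorded position of one sensor
p_true_samples = p_bar + delta_p          # p_i = \bar{p}_i + \Delta p_i
```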

3. Expectation Maximization-based Data Association

In this section, we first assume that the sensor positions are accurate, i.e., \(\mathbf{p}_{i}=\overline{\mathbf{p}}_{i}\). In the proposed DA process, we are interested in the parameters \(\mathbf{x}=[\mathbf{q}, \boldsymbol{\theta}, \boldsymbol{\alpha}]^{T}\). The log-likelihood function, incorporating the mapping information \(\rho_{i, j, k}\), is given as follows

\(\begin{aligned} \ln p(\Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \boldsymbol{\rho} | \mathbf{x}) &=\ln p\left(\left\{\left\{\Delta \hat{d}_{i, j}, \hat{\theta}_{i, j}\right\}_{j=1}^{M_{i}}\right\}_{i=1}^{N},\left\{\left\{\left\{\rho_{i, j, k}\right\}_{k=1}^{K_{i, j}}\right\}_{j=1}^{M_{i}}\right\}_{i=1}^{N} \Big| \mathbf{x}\right) \\ &=\sum_{i=1}^{N} \sum_{j=1}^{M_{i}} \ln p\left(\hat{\theta}_{i, j} | \theta_{i, j}\right)+\sum_{i=1}^{N} \sum_{j=1}^{M_{i}} \sum_{k=1}^{K_{i, j}} \ln p\left(\rho_{i, j, k} | \mathbf{q}, \theta_{i, j}, \alpha_{i, j, k}\right) \\ &\quad+\sum_{i=1}^{N} \sum_{j=1}^{M_{i}} \sum_{k=1}^{K_{i, j}} \ln p\left(\Delta \hat{d}_{i, j} | \mathbf{q}, \theta_{i, j}, \alpha_{i, j, k}, \rho_{i, j, k}\right), \end{aligned}\)       (10)

where

\(p\left(\rho_{i, j, k} | \mathbf{q}, \theta_{i, j}, \alpha_{i, j, k}\right) \propto p\left(\mathbf{q} | \rho_{i, j, k}, \theta_{i, j}, \alpha_{i, j, k}\right) p\left(\rho_{i, j, k}\right).\)        (11)

Since the space of possible target locations is restricted to the area spanned by the direction of the AOA at the sensor and the direction opposite to the angle of departure (AOD) at the target [22], we have \(p\left(\mathbf{q} | \rho_{i, j, k}, \theta_{i, j}, \alpha_{i, j, k}\right)=1\) in (11). As each model has a certain weight \(\alpha_{i, j, k}\), the prior of \(\rho_{i, j, k}\) can be written as

\(p\left(\rho_{i, j, k}\right)=\left(\alpha_{i, j, k}\right)^{\rho_{i, j, k}}\left(1-\alpha_{i, j, k}\right)^{\left(1-\rho_{i, j, k}\right)}.\)        (12)

1) E-step: The E-step is to account for the mapping information. Here, we have

\(\mathcal{Q}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)=\sum p\left(\mathbf{\rho} | \Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \mathbf{x}^{l-1}\right) \cdot \ln p(\Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \boldsymbol{\rho} | \mathbf{x}),\)       (13)

where l is the iteration index and

\(p\left(\boldsymbol{\rho} | \Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \mathbf{x}^{l-1}\right) \propto p\left(\Delta \hat{\mathbf{d}} | \mathbf{x}^{l-1}, \boldsymbol{\rho}\right) \cdot p\left(\boldsymbol{\rho} | \mathbf{x}^{l-1}\right).\)       (14)

2) M-step: The M-step is to maximize \(\mathcal{Q}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)\) with respect to x , i.e.,

\(\mathbf{x}^{l}=\arg \max _{\mathbf{x}} \mathcal{Q}\left(\mathbf{x}, \mathbf{x}^{l-1}\right).\)       (15)

The E-step and M-step are repeated, for at most \(N_{\text{iter}}\) iterations, until the estimate of the target location q converges. Then, based on \(\alpha_{i, j, k}^{l}\) for \(k=1, \cdots, K_{i, j}\), we can perform DA to determine the model coefficients. We set \(\rho_{i, j, k^{*}}=1\) with \(k^{*}=\arg \max _{k} \alpha_{i, j, k}^{l}\); otherwise, \(\rho_{i, j, k}\) is zero in the Gaussian mixture model.
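A simplified sketch of the E/M iterations for the DA weights is shown below; it handles a single measurement with fixed candidate means \(\mu_{i,j}\) (computed at a fixed target position), so it only illustrates how the responsibilities sharpen toward the hard decision \(\rho_{i,j,k^*}=1\):

```python
import numpy as np

def em_weights(dhat, mus, sigma, n_iter=50):
    """EM update of the submodel weights for one measurement dhat.

    mus holds the K_{i,j} candidate means of Eq. (6), evaluated at a fixed
    target position; sigma is the common noise standard deviation."""
    K = len(mus)
    alpha = np.full(K, 1.0 / K)                   # uniform initialization
    for _ in range(n_iter):
        # E-step: responsibilities p(rho_k = 1 | dhat), cf. Eqs. (13)-(14)
        phi = np.exp(-(dhat - mus) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
        resp = alpha * phi
        resp /= resp.sum()
        # M-step for the weights (single measurement), cf. Eq. (15);
        # repeated iterations sharpen alpha toward the dominant submodel
        alpha = resp
    return alpha

alpha = em_weights(dhat=5.0, mus=np.array([4.8, 9.0, 1.5]), sigma=1.0)
k_star = int(np.argmax(alpha))   # hard DA decision: rho_{i,j,k*} = 1
print(alpha, k_star)
```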

4. Target Localization with Inaccurate Collaborative Sensors

The positions of sensors may be inaccurate in practice due to the misalignment of the sensors. In this case, the position errors \(\left\{\Delta \mathbf{p}_{i}\right\}_{i=1}^{N}\) should be considered to improve the performance of target localization. However, when N is large, applying the above EM algorithm directly leads to huge computational complexity and a low convergence rate. To solve this problem, we first approximate the posterior distribution of \(\Delta \mathbf{p}\) by a 2-D circularly symmetric Gaussian distribution via Kullback-Leibler divergence minimization [34]. Then, we propose a SAGE framework to update the target position estimate q. Since the sensors' processing ability is limited, we consider both centralized and distributed network implementations of the proposed algorithms in the following subsections. We will show that the distributed algorithm can attain the performance of the centralized algorithm by using only local information at each sensor.

4.1 Centralized SAGE with Sensor Position Uncertainty

Based on the DA process, the uncertainty of the model coefficients related to α is resolved. Here, we define the parameters of interest as \(\mathbf{x}=[\mathbf{q}, \boldsymbol{\theta}]\). Since the measurements are independent of each other, the likelihood function can be expressed as

\(p(\Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \overline{\mathbf{p}} | \mathbf{x}, \Delta \mathbf{p})=\prod_{i=1}^{N} \prod_{j=1}^{M_{i}} p\left(\Delta \hat{d}_{i, j}, \hat{\theta}_{i, j}, \overline{\mathbf{p}}_{i} | \mathbf{q}, \Delta \mathbf{p}_{i}, \theta_{i, j}\right).\)        (16)

Then, we have

\(\tilde{\mathcal{Q}}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)=\int p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \times \ln p\left(\Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \Delta \mathbf{p}_{i}, \overline{\mathbf{p}}_{i} | \mathbf{x}\right) d \Delta \mathbf{p}_{i},\)        (17)

where

\(\ln p\left(\Delta \hat{\mathbf{d}}, \hat{\boldsymbol{\theta}}, \Delta \mathbf{p}_{i}, \overline{\mathbf{p}}_{i} | \mathbf{x}\right)=\sum_{i=1}^{N} \sum_{j=1}^{M_{i}} \ln p\left(\hat{\theta}_{i, j} | \theta_{i, j}\right)+\sum_{i=1}^{N} \sum_{j=1}^{M_{i}} \ln p\left(\Delta \hat{d}_{i, j}, \Delta \mathbf{p}_{i}, \overline{\mathbf{p}}_{i} | \mathbf{x}\right).\)       (18)

Since the first term in (18) does not depend on x , it can be dropped during the
maximization of  \(\tilde{\mathcal{Q}}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)\). Therefore, \(\mathcal{Q}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)\) can be reformulated as

\(\mathcal{Q}\left(\mathbf{x}, \mathbf{x}^{l-1}\right)=\int p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \times \ln \mathrm{p}\left(\Delta \hat{\mathbf{d}}, \Delta \mathbf{p}_{i}, \overline{\mathbf{p}}_{i} | \mathbf{x}\right) d \Delta \mathbf{p}_{i}.\)         (19)

Using Bayes' rule, \(p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)\) in (19) can be expressed as

\(p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \propto p\left(\Delta \mathbf{p}_{i}\right) p\left(\Delta \hat{\mathbf{d}} | \overline{\mathbf{p}}_{i}, \mathbf{x}, \Delta \mathbf{p}_{i}\right),\)       (20)

where \(p\left(\Delta \hat{\mathbf{d}} | \overline{\mathbf{p}}_{i}, \mathbf{x}, \Delta \mathbf{p}_{i}\right)\) is the likelihood function of the measurements of \(S_i\). Since \(p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)\) is in general not Gaussian, no closed-form expression of (19) can be obtained. To solve this problem, we approximate \(p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)\) by a 2-D circularly symmetric Gaussian distribution with zero mean and variance \(\sigma_{\Delta \mathbf{p}_{i}}^{2}\), i.e.,

\(f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)=\frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right).\)        (21)

The parameters in (21) can be optimized by minimizing the KLD, i.e.,

\(D_{K L}(f \| p)=\int f\left(\Delta \mathbf{p}_{i}\right) \ln \frac{f\left(\Delta \mathbf{p}_{i}\right)}{p\left(\Delta \mathbf{p}_{i}\right)} d \Delta \mathbf{p}_{i}.\)       (22)

By substituting (20) and (21) into (22), we have (23). More details are shown in Appendix A.

\(\begin{aligned} D_{K L}(f \| p)=& \int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \ln f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) d \Delta \mathbf{p}_{i} \\ &-\int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \ln p\left(\Delta \mathbf{p}_{i}\right) d \Delta \mathbf{p}_{i} \\ &-\int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \times \ln \prod_{j \in M_{i}} p\left(\Delta \hat{d}_{i, j}, \hat{\theta}_{i, j} | \mathbf{q}, \overline{\mathbf{p}}_{i}, \Delta \mathbf{p}_{i}, \theta_{i, j}\right) d \Delta \mathbf{p}_{i} \end{aligned}\)       (23)
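The Gaussian-versus-Gaussian part of the divergence above admits a well-known closed form, which the following sketch evaluates numerically; it only covers the first two integrals of (23) (the likelihood term is omitted), and the variances and correlation are hypothetical:

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """D_KL( N(mu0, cov0) || N(mu1, cov1) ) for k-dimensional Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

sigma_dp = 1.2                                     # hypothetical isotropic std of f in Eq. (21)
cov_f = sigma_dp ** 2 * np.eye(2)
cov_p = np.array([[1.0, 0.3], [0.3, 1.0]])         # hypothetical correlated prior, Eq. (9)
print(kl_gauss(np.zeros(2), cov_f, np.zeros(2), cov_p))
```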

1) NLOS Case: For sensors that do not have a LOS component to the target, by combining (46)-(49) in Appendix A, we obtain (24).

\(\begin{aligned} D_{K L}(f \| p)=\ & \mathcal{C}-\ln \sigma_{\Delta \mathbf{p}_{i}}^{2}+\frac{1}{2\left(1-\rho^{2}\right)}\left[\frac{\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{x}_{i}^{2}}{\sigma_{\Delta x_{i}}^{2}}+\frac{\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{y}_{i}^{2}}{\sigma_{\Delta y_{i}}^{2}}-\frac{2 \bar{x}_{i} \bar{y}_{i}}{\sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}\right] \\ &-\sum_{j=1}^{M_{i}} \frac{1}{2 \sigma_{i}^{2}}\left[2 E_{i, j} \bar{x}_{i}+2 F_{i, j} \bar{y}_{i}-2 a_{i, j} b_{i, j} \bar{x}_{i} \bar{y}_{i}-a_{i, j}^{2}\left(\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{x}_{i}^{2}\right)-b_{i, j}^{2}\left(\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{y}_{i}^{2}\right)\right] \end{aligned}\)       (24)

where \(\mathcal{C}\) is a constant.

Then the global \(\Delta \mathbf{p}^{l}\) can be expressed as

\(\Delta \mathbf{p}^{l}=\arg \min _{\Delta \mathbf{p}}\left\{D_{K L}\left(f\left(\Delta \mathbf{p} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \| p\left(\Delta \mathbf{p} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)\right)\right\}.\)       (25)

The solution of (25) can be obtained by setting the partial derivatives of \(D_{K L}\left(f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right) \| p\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{x}^{l-1}\right)\right)\) with respect to \(\bar{x}_{i}\), \(\bar{y}_{i}\) and \(\sigma_{\Delta \mathbf{p}_{i}}^{2}\) to zero.

After this, we have \(\Delta \mathbf{p}^{l}=\left(\Delta x_{i}^{l}, \Delta y_{i}^{l}\right)\); more details are given in Appendix B. For the i-th sensor containing only NLOS components, we obtain (26).

\(\begin{aligned} \tilde{\mathcal{Q}}_{i, \backslash \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \propto &-\sum_{j=1}^{M_{i}} \frac{1}{2 \sigma_{i}^{2}}\left[\left(a_{i, j}-a_{0}\right)^{2} q_{x}^{2}+\left(b_{i, j}-b_{0}\right)^{2} q_{y}^{2}-2\left(a_{i, j}-a_{0}\right) \mathrm{H}_{i, j} q_{x}\right.\\ &\left.-2\left(b_{i, j}-b_{0}\right) \mathrm{H}_{i, j} q_{y}+2 \mathrm{K}_{i, j} q_{x} q_{y}\right] \end{aligned}\)       (26)

2) LOS Case: Since the LOS component is much more reliable than the NLOS ones, we drop the other MPCs in this case. Then, for sensor i we have \(D_{K L}(f \| p)\) as

\(\begin{aligned} D_{K L}(f \| p)=&-\ln \sigma_{\Delta \mathbf{p}_{i}}^{2}+\frac{1}{2\left(1-\rho^{2}\right)}\left[\frac{\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{x}_{i}^{2}}{\sigma_{\Delta x_{i}}^{2}}+\frac{\sigma_{\Delta \mathbf{p}_{i}}^{2}+\bar{y}_{i}^{2}}{\sigma_{\Delta y_{i}}^{2}}-\frac{2 \bar{x}_{i} \bar{y}_{i}}{\sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}\right] \\ &-\frac{D_{i, j}}{\sigma_{i}^{2}}\left(\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|+\frac{\bar{x}_{i}-q_{x}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|} \bar{x}_{i}+\frac{\bar{y}_{i}-q_{y}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|} \bar{y}_{i}\right) \\ &+\frac{1}{2 \sigma_{i}^{2}}\left(2 \sigma_{\Delta \mathbf{p}_{i}}^{2}+\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|^{2}\right)+\mathcal{C} \end{aligned}\)       (27)

Similar to the NLOS case, we have \(\Delta \mathbf{p}^{l}=\left(\Delta x_{i}^{l}, \Delta y_{i}^{l}\right)\), which is given in Appendix B. Then, we can obtain the closed form of \(\tilde{\mathcal{Q}}_{i, \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) as (28),

\(\begin{aligned} \tilde{\mathcal{Q}}_{i, \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \propto & \ln \frac{p\left(\hat{\theta}_{i, j} | \theta_{i, j}\right) \alpha_{i, j, k}}{\sqrt{2 \pi \sigma_{i}^{2}}}-\frac{1}{2 \sigma_{i}^{2}}\left[D_{i, j}^{2}(\mathbf{q})-2 D_{i, j}(\mathbf{q}) \sigma_{\Delta \mathbf{p}_{i}} \sqrt{\frac{\pi}{2}}\right.\\ &\left.\times{ }_{1} F_{1}\left(-\frac{1}{2} ; 1 ;-\frac{\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l-1}-\mathbf{q}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)+2 \sigma_{\Delta \mathbf{p}_{i}}^{2}+\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l-1}-\mathbf{q}\right\|^{2}\right] \end{aligned}\)      (28)

where \({ }_{1} F_{1}(a ; b ; c)\) is the confluent hypergeometric function of the first kind [35]. Since this hypergeometric function is cumbersome to analyze and compute, we rely on a first-order Taylor expansion around the previous estimate \(\mathbf{q}^{l-1}\) to obtain an approximation. We adopt operations similar to [32], but we use the first-order Taylor series to estimate \(\mathbf{q}^{l}=\left[q_{x}^{l}, q_{y}^{l}\right]\), i.e.,

\(f(\mathbf{q}) \approx f\left(\mathbf{q}^{l-1}\right)+f_{q_{x}}^{\prime}\left(\mathbf{q}^{l-1}\right)\left(q_{x}-q_{x}^{l-1}\right)+f_{q_{y}}^{\prime}\left(\mathbf{q}^{l-1}\right)\left(q_{y}-q_{y}^{l-1}\right)\)      (29)

where \(f(\mathbf{q})={ }_{1} F_{1}\left(-\frac{1}{2} ; 1 ;-\frac{\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l}-\mathbf{q}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\); then we have the following derivatives [36].

\(f_{q_{x}}^{\prime}\left(\mathbf{q}^{l-1}\right)=\frac{q_{x}-\left(\Delta x_{i}^{l}+\bar{x}_{i}\right)}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}{ }_{1} F_{1}\left(\frac{1}{2} ; 2 ;-\frac{\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l}-\mathbf{q}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\)       (30)

\(f_{q_{y}}^{\prime}\left(\mathbf{q}^{l-1}\right)=\frac{q_{y}-\left(\Delta y_{i}^{l}+\bar{y}_{i}\right)}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}{ }_{1} F_{1}\left(\frac{1}{2} ; 2 ;-\frac{\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l}-\mathbf{q}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\)       (31)

Then (28) can be formulated as (32) by dropping the irrelevant terms.

\(\begin{aligned} \tilde{\mathcal{Q}}_{i, \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \propto &-\frac{1}{2 \sigma_{i}^{2}}\left[D_{i, j}^{2}(\mathbf{q})-2 D_{i, j}(\mathbf{q}) \sigma_{\Delta \mathbf{p}_{i}} \sqrt{\frac{\pi}{2}} \times{ }_{1} F_{1}\left(-\frac{1}{2} ; 1 ;-\frac{\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l}-\mathbf{q}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\right.\\ &\left.+\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}^{l}-\mathbf{q}\right\|^{2}\right] \end{aligned}\)       (32)
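As a numerical sanity check of the linearization in (29)-(31), the following sketch compares the exact hypergeometric term with its first-order approximation around a hypothetical \(\mathbf{q}^{l-1}\); the positions and \(\sigma_{\Delta \mathbf{p}_{i}}\) are assumed values, and SciPy's hyp1f1 is used for \({ }_{1}F_{1}\):

```python
import numpy as np
from scipy.special import hyp1f1

p_corr   = np.array([21.0, 19.5])   # assumed \bar{p}_i + \Delta p_i^l
q_prev   = np.array([50.0, 40.0])   # previous target estimate q^{l-1}
sigma_dp = 1.0                      # assumed sigma_{\Delta p_i}

def f(q):
    """Exact hypergeometric term of Eqs. (28)/(32)."""
    return hyp1f1(-0.5, 1.0, -np.sum((p_corr - q) ** 2) / (2 * sigma_dp ** 2))

def grad_f(q):
    """Gradient of f per Eqs. (30)-(31)."""
    z = -np.sum((p_corr - q) ** 2) / (2 * sigma_dp ** 2)
    return (q - p_corr) / (2 * sigma_dp ** 2) * hyp1f1(0.5, 2.0, z)

q_new  = np.array([50.5, 39.0])
approx = f(q_prev) + grad_f(q_prev) @ (q_new - q_prev)   # Eq. (29)
print(f(q_new), approx)
```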

By substituting (26) and (32) into (17) and dropping the irrelevant terms, we have

\(\tilde{\mathcal{Q}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)=\sum_{i \in \mathcal{N}_{\backslash \mathrm{LOS}}} \tilde{\mathcal{Q}}_{i, \backslash \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)+\sum_{i \in \mathcal{N}_{\mathrm{LOS}}} \tilde{\mathcal{Q}}_{i, \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right).\)       (33)

4.2 Distributed SAGE with Sensor Position Uncertainty

The centralized method needs a fusion center, which may involve high operating costs and increases the risk of failure if the fusion center breaks down [37]. To this end, we propose a distributed implementation of the proposed method. Specifically, an average consensus scheme is used to acquire the conditional expectation of \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) locally at each sensor. As the \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) are independent of each other, each sensor can calculate \(\tilde{\mathcal{Q}}_{i}\) locally, i.e., \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)=\tilde{\mathcal{Q}}_{i, \backslash \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) or \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)=\tilde{\mathcal{Q}}_{i, \mathrm{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\). Then the global \(\tilde{\mathcal{Q}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) can be obtained as in (33) to calculate the parameters of interest in the vector \(\left[q_{x}, q_{y}\right]^{T}\). In essence, average consensus exchanges the conditional expectations of the local log-likelihood functions \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) between neighboring sensors. For the proposed distributed algorithm, we use p to denote the iteration index of the consensus algorithm among sensors, and the update process can be written as

\(\begin{aligned} \tilde{\mathcal{Q}}_{i}^{(p)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) &=\tilde{\mathcal{Q}}_{i}^{(p-1)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)+\sum_{k \in \mathcal{P}_{i}} \xi_{k i}\left(\tilde{\mathcal{Q}}_{k}^{(p-1)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)-\tilde{\mathcal{Q}}_{i}^{(p-1)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\right) \\ &=\xi_{i i} \tilde{\mathcal{Q}}_{i}^{(p-1)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)+\sum_{k \in \mathcal{P}_{i}} \xi_{k i} \tilde{\mathcal{Q}}_{k}^{(p-1)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right), \end{aligned}\)       (34)

where the Metropolis update weights are given by

\(\xi_{k i}=\xi_{i k}=\left\{\begin{array}{ll} 1 / \max \left(\left|\mathcal{P}_{i}\right|,\left|\mathcal{P}_{k}\right|\right), & \text { for } i \neq k \\ 1-\sum_{k^{\prime} \in \mathcal{P}_{i}} \xi_{k^{\prime} i}, & \text { for } i=k \end{array}\right.\)

where \(\mathcal{P}_{i}\) and \(\mathcal{P}_{k}\) denote the index sets of neighboring sensors of sensors i and k, and \(\left|\mathcal{P}_{i}\right|\) and \(\left|\mathcal{P}_{k}\right|\) stand for the numbers of neighboring sensors. After \(N_{\text{iter}}^{\prime}\) iterations, all sensors achieve consensus locally, i.e.,

\(\tilde{\mathcal{Q}}_{i}^{\left(N_{\text{iter}}^{\prime}\right)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)=\tilde{\mathcal{Q}}_{j}^{\left(N_{\text{iter}}^{\prime}\right)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\).

Finally, all sensors are able to update the position information simultaneously, i.e.,

\(\tilde{\mathcal{Q}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)=N \cdot \tilde{\mathcal{Q}}_{i}^{\left(N_{\text{iter}}^{\prime}\right)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right).\)       (35)

The convergence of average consensus has been proved for connected sensor networks in [38].
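The consensus update in (34), with the Metropolis weights above, can be illustrated by the following sketch, which runs scalar average consensus over an assumed four-sensor chain topology (all values are hypothetical); each sensor's local quantity converges to the network average used in (35):

```python
import numpy as np

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # hypothetical chain topology
x = np.array([4.0, 1.0, 7.0, 2.0])                   # local scalars (e.g., one coefficient of Q_i)

def metropolis_weight(i, k):
    """xi_{ki} = 1 / max(|P_i|, |P_k|) for neighboring sensors i and k."""
    return 1.0 / max(len(neighbors[i]), len(neighbors[k]))

for _ in range(200):                                  # consensus iterations p
    x_new = x.copy()
    for i in neighbors:
        for k in neighbors[i]:
            x_new[i] += metropolis_weight(i, k) * (x[k] - x[i])   # first line of Eq. (34)
    x = x_new

print(x, x.mean())   # every entry converges to the initial network average
```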

As \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) is continuous and sensors communicate with each other over wireless channels, particle-based methods are typically used to broadcast \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) among sensors. To obtain precise localization, a large number of particles is required, which increases the complexity significantly. As a quadratic polynomial form exists in both (26) and (32), we can approximate \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) as follows. For sensor i, which obtains an LOS/NLOS signal, we can unify the expression as

\(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \approx-\frac{\left(q_{x}-\bar{m}_{x_{i}}\right)^{2}}{\sigma_{x_{i}}^{2}}-\frac{\left(q_{y}-\bar{n}_{y_{i}}\right)^{2}}{\sigma_{y_{i}}^{2}}-\bar{o}_{x_{i} y_{i}} q_{x} q_{y}.\)       (36)

According to the quadratic polynomial approximation, in the average consensus operation only the mean and variance of q and the coefficient \(\bar{o}_{x_{i} y_{i}}\) of \(q_{x} q_{y}\) need to be exchanged among sensors, which reduces the communication overhead significantly. Using the Metropolis update weights, the local likelihood function at the p-th iteration of the consensus algorithm can be given as

\(\tilde{\mathcal{Q}}_{i}^{(p)}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \approx-\frac{\left(q_{x}-m_{i}(p)\right)^{2}}{\sigma_{x_{i}}^{2}(p)}-\frac{\left(q_{y}-n_{i}(p)\right)^{2}}{\sigma_{y_{i}}^{2}(p)}-o_{i}(p) q_{x} q_{y},\)       (37)

where \(m_{i}^{l}(p), \sigma_{m_{i}^{l}}^{2}(p), n_{i}^{l}(p), \sigma_{n_{i}^{l}}^{2}(p)\) and \(o_{i}^{l}(p)\) are updated as

\(m_{i}^{l}(p)=\sigma_{m_{i}^{l}}^{2}(p)\left(\frac{\xi_{i i} m_{i}^{l}(p-1)}{\sigma_{m_{i}^{l}}^{2}(p-1)}+\sum_{k \in \mathcal{N}_{i}} \frac{\xi_{k i} m_{k}^{l}(p-1)}{\sigma_{x_{k}}^{2}(p-1)}\right),\)       (38)

\(n_{i}^{l}(p)=\sigma_{n_{i}^{l}}^{2}(p)\left(\frac{\xi_{i i} n_{i}^{l}(p-1)}{\sigma_{n_{i}^{l}}^{2}(p-1)}+\sum_{k \in \mathcal{N}_{i}} \frac{\xi_{k i} n_{k}^{l}(p-1)}{\sigma_{y_{k}}^{2}(p-1)}\right),\)        (39)

\(\sigma_{m_{i}^{l}}^{2}(p)=\left(\frac{\xi_{i i}}{\sigma_{m_{i}^{l}}^{2}(p-1)}+\sum_{k \in \mathcal{N}_{i}} \frac{\xi_{k i}}{\sigma_{x_{k}}^{2}(p-1)}\right)^{-1},\)        (40)

\(\sigma_{n_{i}^{l}}^{2}(p)=\left(\frac{\xi_{i i}}{\sigma_{n_{i}^{l}}^{2}(p-1)}+\sum_{k \in \mathcal{N}_{i}} \frac{\xi_{k i}}{\sigma_{y_{k}}^{2}(p-1)}\right)^{-1},\)        (41)

\(o_{i}^{l}(p)=\xi_{i i} o_{i}^{l}(p-1)+\sum_{k \in \mathcal{N}_{i}} \xi_{k i} o_{k}^{l}(p-1).\)        (42)
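A minimal sketch of the parameter-exchange form of (38)-(41) is given below: each sensor fuses its local mean and variance with those received from its neighbors in a precision-weighted manner. The two-sensor values and weights are hypothetical, and the neighbor means and variances are assumed to be exchanged as described above:

```python
def fuse(m_i, var_i, xi_ii, neigh):
    """One precision-weighted consensus update of a local (mean, variance) pair.

    neigh is a list of (xi_ki, m_k, var_k) tuples received from neighbors,
    following the form of Eqs. (38) and (40)."""
    inv_var = xi_ii / var_i + sum(xi / v for xi, _, v in neigh)              # Eq. (40)
    var_new = 1.0 / inv_var
    m_new = var_new * (xi_ii * m_i / var_i + sum(xi * m / v for xi, m, v in neigh))  # Eq. (38)
    return m_new, var_new

# Hypothetical local estimates of q_x at two neighboring sensors
m, v = fuse(m_i=49.0, var_i=4.0, xi_ii=0.5, neigh=[(0.5, 51.0, 1.0)])
print(m, v)
```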

1) NLOS case: For the sensors that obtain NLOS components only, we rearrange the expression of \(\tilde{\mathcal{Q}}_{i, \backslash \operatorname{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) and approximate it as follows

\(\tilde{\mathcal{Q}}_{i, \mathrm{\backslash LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \approx-\frac{\left(q_{x}-\hat{m}_{i}^{l}\right)^{2}}{2 \sigma_{\hat{m}_{i}^{l}}^{2}} -\frac{\left(q_{y}-\hat{n}_{i}^{l}\right)^{2}}{2 \sigma_{\hat{n}_{i}^{l}}^{2}}-\hat{o}_{i}^{l} q_{x} q_{y},\)       (43)

where \(\hat{m}_{i}^{l}\), \(\hat{n}_{i}^{l}\) and \(\hat{o}_{i}^{l}\) are expressed as

\(\hat{m}_{i}^{l}=\mathrm{I}_{i} /\left(a_{i, j}-a_{0}\right), \hat{n}_{i}^{l}=\mathrm{H}_{i} /\left(b_{i, j}-b_{0}\right),\)\(\sigma_{\hat{m}_{i}^{l}}^{2}=\sigma_{i}^{2} / M_{i}\left(a_{i, j}-a_{0}\right), \sigma_{\hat{n}_{i}^{l}}^{2}=\sigma_{i}^{2} / M_{i}\left(b_{i, j}-b_{0}\right) \text { and } \hat{o}_{i}^{l}=\frac{\mathrm{R}_{i}}{\sigma_{i}^{2} / M_{i}}\).

2) LOS case: For sensors that obtain LOS components, as a nonlinear term exists, we adopt the first-order Taylor expansion in (32), which becomes

\(\tilde{\mathcal{Q}}_{i, \operatorname{LOS}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right) \approx-\frac{\left(q_{x}-\bar{m}_{i}^{l}\right)^{2}}{2 \sigma_{\bar{m}_{i}^{l}}^{2}}-\frac{\left(q_{y}-\bar{n}_{i}^{l}\right)^{2}}{2 \sigma_{\bar{n}_{i}^{l}}^{2}}-\bar{o}_{i}^{l} q_{x} q_{y}.\)       (44)

For the l-th iteration, the local function \(\tilde{\mathcal{Q}}_{i}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) is initialized based on (43) and (44).

After several iterations, each sensor obtains the global function \(\tilde{\mathcal{Q}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\). We maximize \(\tilde{\mathcal{Q}}\left(\mathbf{q}, \mathbf{q}^{l-1}\right)\) to obtain the final estimate of q, which can be written as

\(\mathbf{q}^{l}=\left[m_{i}\left(\mathrm{N}_{\text {iter}}^{\prime}\right), n_{i}\left(\mathrm{N}_{\text {iter}}^{\prime}\right)\right]^{T}\)       (45)

5. Performance Evaluation

5.1 Simulation Setup

We evaluate the performance of our proposed DA algorithm and localization algorithms in both centralized and distributed implementations. Without loss of generality, we adopt a \(100 \times 100\ \mathrm{m}^{2}\) area with one target q and four sensors \(S_{1}, \ldots, S_{4}\). The important parameters related to the simulations are summarized in Table 1. The sensors' positions are \(\mathbf{p}_{1}=[20\ 20]^{T}, \mathbf{p}_{2}=[80\ 30]^{T}, \mathbf{p}_{3}=[60\ 90]^{T}, \mathbf{p}_{4}=[70\ 70]^{T}\), and the standard deviation \(\sigma_{\Delta \mathbf{p}_{i}}\) in Table 1 ranges from 1 m to 3 m. The number of scatterers is set to L = 8 in Fig. 1. The corresponding orientations are \(\gamma=\left[0^{\circ}, 86^{\circ}, 150^{\circ}, 90^{\circ}, 111^{\circ}, 55^{\circ}, 135^{\circ}, 11^{\circ}\right]\); these scatterers are assumed to have smooth surfaces that are long enough to reflect signals, compared with other scatterers. Furthermore, the ground-truth AOAs are \(\theta=\left[-35^{\circ}\ -135^{\circ} ; 168^{\circ}\ -60^{\circ} ; 30^{\circ} ;-155^{\circ}\ 152^{\circ}\right]\), corresponding to \(S_1\) to \(S_4\), as illustrated in Fig. 1. In this simulation scenario, the numbers of measurements (\(\left|\mathcal{M}_{1}\right| \sim\left|\mathcal{M}_{4}\right|\)) received by the sensor nodes (\(S_{1} \sim S_{4}\)) are set to \([2,2,1,2]^{T}\), respectively.
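For reference, the scenario of this subsection can be set up as in the following sketch; the sensor positions, scatterer orientations, AOAs and measurement counts are taken from the text above, while the random seed and the noise level used here are illustrative:

```python
import numpy as np

sensors = np.array([[20., 20.], [80., 30.], [60., 90.], [70., 70.]])      # S_1 .. S_4
gammas  = np.deg2rad([0., 86., 150., 90., 111., 55., 135., 11.])          # scatterer orientations (L = 8)
aoas    = {1: [-35., -135.], 2: [168., -60.], 3: [30.], 4: [-155., 152.]} # ground-truth AOAs in degrees
num_meas = [2, 2, 1, 2]                                                    # |M_1| .. |M_4|

rng = np.random.default_rng(1)
sigma_dp = 1.0                                   # sensor position uncertainty std (1 m case)
# recorded positions deviate from the ground truth by zero-mean Gaussian errors
recorded = sensors + rng.normal(0.0, sigma_dp, size=sensors.shape)
```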

Table 1. Important Parameters

 

5.2 Localization Performance

We perform Monte Carlo simulations with 1,000 independent trials. The coarse localization performance of the proposed algorithm is evaluated according to the proposed DA method in Section III, without considering sensor position uncertainty, as in (10). The performance of the method in [10] is also plotted for comparison. It is seen in Fig. 2 that the localization error of the method in [10] is significantly greater than that of the proposed algorithm with DA. This is because [10] does not perform the DA process and assumes that all measurements are a mixture of LOS and NLOS components. On the other hand, without considering the uncertainty of the sensor positions, the values of each submodel's weights become inaccurate, which increases the risk of mismatch in the DA process. Accordingly, we can see in Fig. 2 that the localization performance of the proposed DA method becomes worse as the sensor position uncertainty increases. Comparing the results in Fig. 2, the proposed algorithm with small sensor position uncertainty outperforms the method in [10] and performs close to the ideal one, which demonstrates the effectiveness of the DA process.

 

Fig. 2. Coarse localization performance with different uncertainty of sensor position.

We evaluate the localization performance by taking into account the sensors' uncertainties with (33) and (45) in Fig. 3, which correspond to the proposed centralized and distributed SAGE methods, respectively. Comparing the results in Fig. 2 and Fig. 3, we observe that the proposed SAGE algorithm shows superior performance by jointly updating the target and the sensors' positions. Moreover, it is seen in Fig. 3 that the distributed implementation performs very close to the centralized method, which demonstrates the effectiveness of the quadratic polynomial approximation and the average consensus employed in the distributed implementation.

 

Fig. 3. Localization performance of the proposed centralized and distributed implementations.



Fig. 4. The MSE performance of different estimators

In Fig. 4, the mean square error (MSE) performance of the proposed centralized algorithm is compared with state-of-the-art methods, namely the coarse localization estimator, the approximate maximum-likelihood (AML) estimator [39], and a Monte Carlo (MC)-based estimator with 2000 particles, where we assume that the standard deviation of the sensor position uncertainty is \(\sigma_{\Delta \mathbf{p}_{i}}=1\ \mathrm{m}\). Without considering sensor uncertainties or the DA process, the coarse localization estimator gives the worst performance. Since the AML estimator treats the sensor position uncertainties as delta functions after the DA process, it shows a larger MSE than the proposed algorithm. The MC-based estimator draws samples from the distribution of sensor i's position uncertainty, which transforms the intractable integration into a cumulative sum of finite terms. However, the superior performance of the MC-based estimator comes at the cost of huge computational complexity. The proposed SAGE algorithm can perform close to the MC-based estimator with much lower complexity.

 

Fig. 5. Localization performance with different numbers of measurements combination.

Since the number of LOS and NLOS components may vary for different locations, in Fig. 5 we evaluate the impact of different numbers of LOS and NLOS components on the localization performance. We assume the standard deviation of the sensor position uncertainty is \(\sigma_{\Delta \mathbf{p}_{i}}=1\ \mathrm{m}\). The four positions (90 40), (40 50), (40 10), (80 20) are denoted as

• Case 1, a relatively ideal scenario: 4 LOS signals and 5 NLOS signals at (90 40).

• Case 2, a general situation of the complex scenario: 1 LOS signal and 5 NLOS signals at (40 50).

• Case 3, a severe NLOS situation: with 5 NLOS signals at (40 10).

• Case 4, an extreme scenario: 1 LOS signal and 2 NLOS signals at (80 20).

It is observed in Fig. 5 that both the centralized and distributed implementations of the proposed algorithm can work with different combinations of measurements. The localization performance of Case 4 is worse than that of Case 3, although the former has 1 LOS measurement. This indicates that a sufficient number of measurements is important for localization and that exploiting the NLOS MPCs is beneficial. Comparing Case 3 with Case 2, it is seen that LOS signals can further improve the performance, since the LOS signal has no AOA measurement error. When increasing the number of LOS measurements, e.g., from Case 2 to Case 1, the localization performance can be improved. However, the improvement becomes marginal given a sufficient number of NLOS signals.

6. Conclusion

In this paper, a novel expectation maximization (EM)-based data association process was proposed to solve the target localization problem in multipath environments. Considering the position uncertainty of sensors, we proposed a SAGE-based algorithm using TDOA and AOA measurements to jointly locate the target and update the positions of the sensors. KLD minimization was employed to approximate the distribution of the sensor uncertainty by a 2-D circularly symmetric Gaussian distribution. Then, Taylor expansion was used to linearize the nonlinear term induced by the confluent hypergeometric function of the first kind in the E-step. Accordingly, a closed-form expression was derived with low computational requirements. To improve the scalability of the proposed algorithms in complex scenarios, we further derived a distributed implementation based on average consensus. Simulation results showed that the proposed algorithms performed close to the Monte Carlo-based algorithm with much lower communication overhead and complexity. The distributed implementation achieved an accuracy close to that of the proposed centralized SAGE-based algorithm.

Appendix A

In this appendix, we give a further derivation of the KLD in (23). Here, we have

\(\begin{aligned} &\int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}_{i}, \overline{\mathbf{p}}_{i}, \mathbf{q}^{l-1}\right) \ln f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}_{i}, \overline{\mathbf{p}}_{i}, \mathbf{q}^{l-1}\right) d \Delta \mathbf{p}_{i}\\ &=\int \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}-\Delta \overline{\mathbf{p}}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right) \times\left(\ln \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}}-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right) d \Delta \mathbf{p}_{i}=\ln \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}}-1 \end{aligned}\)

       (46)

The latter part can be derived as

\(\begin{aligned} &\int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}_{i}, \overline{\mathbf{p}}_{i}, \mathbf{q}^{l-1}\right) \ln p\left(\Delta \mathbf{p}_{i}\right) d \Delta \mathbf{p}_{i}\\ &=\int \frac{1}{2 \pi \sigma_{\Delta \mathrm{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathrm{p}_{i}}^{2}}\right) \times\left\{\ln \frac{1}{2 \pi \sigma_{\Delta_{x_i}} \sigma_{\Delta_{y_i}} \sqrt{1-\rho^{2}}}-\frac{1}{2\left(1-\rho^{2}\right)}\right.\\ &\left.\times\left[\frac{\Delta x_{i}^{2}}{\sigma_{\Delta_{x_{i}}}^{2}}+\frac{\Delta y_{i}^{2}}{\sigma_{\Delta_{y_{i}}}^{2}}-\frac{2 \rho \Delta x_{i} \Delta y_{i}}{\sigma_{\Delta_{x_{i}}} \sigma_{\Delta_{y_{i}}}}\right]\right\} d \Delta \mathbf{p}_{i}. \end{aligned}\)       (47)

The last part can be formulated as

\(\begin{aligned} &\int f\left(\Delta \mathbf{p}_{i} | \Delta \hat{\mathbf{d}}, \overline{\mathbf{p}}_{i}, \mathbf{q}^{l-1}\right) \ln \prod_{j \in \mathcal{M}_{i}} p\left(\Delta \hat{d}_{i, j}, \hat{\theta}_{i, j} | \mathbf{q}^{l-1}, \overline{\mathbf{p}}_{i}, \Delta \mathbf{p}_{i}, \theta_{i, j}\right) d \Delta \mathbf{p}_{i} \\ &=\int \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right) \times \sum_{j=1}^{M_{i}}\left[\ln \frac{p\left(\hat{\theta}_{i, j} | \theta_{i, j}\right) \alpha_{i, j, k}^{l}}{\sqrt{2 \pi \sigma_{i}^{2}}}-\frac{1}{2 \sigma_{i}^{2}}\left(D_{i, j}-\Delta \hat{d}_{i, j}\right)^{2}\right] d \Delta \mathbf{p}_{i}, \end{aligned}\)       (48)

where 

\(D_{i, j}=\Delta \hat{d}_{i, j}+\mathrm{g}\left(\theta_{1,1}, \gamma_{1,1}^{\prime}\right)^{T}\left(\mathbf{q}^{l-1}-\mathbf{p}_{1}\right), \quad \Delta \mathbf{p}_{i}=\left(\Delta x_{i}, \Delta y_{i}\right), \quad \overline{\mathbf{p}}_{i}=\left(\bar{x}_{i}, \bar{y}_{i}\right), \quad \mathbf{q}^{l-1}=\left(q_{x}^{l-1}, q_{y}^{l-1}\right).\)

Depending on whether LOS or NLOS components are present, we further express (48) in the following subsections.
1) NLOS case: For the NLOS components at sensor i, \(\Delta \hat{d}_{i, j}\) equals the term in (43). Then the integration in (48) can be calculated, after some mathematical operations, as

\(\begin{aligned} &\sum_{j=1}^{M_{i}} \frac{1}{2 \sigma_{i}^{2}} \int \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\left(2 D_{i, j} \Delta \hat{d}_{i, j}-\Delta \hat{d}_{i, j}^{2}\right) d \Delta \mathbf{p}_{i} \\ &=\sum_{j=1}^{M_{i}} \frac{1}{2 \sigma_{i}^{2}}\left[2 E_{i, j} \Delta x_{i}+2 F_{i, j} \Delta y_{i}-2 a_{i, j} b_{i, j} \Delta x_{i} \Delta y_{i}\right. \\ &\left.\quad-a_{i, j}^{2}\left(\sigma_{\Delta \mathbf{p}_{i}}^{2}+\Delta x_{i}^{2}\right)-b_{i, j}^{2}\left(\sigma_{\Delta \mathbf{p}_{i}}^{2}+\Delta y_{i}^{2}\right)\right]+\mathcal{C}, \end{aligned}\)       (49)


where \(E_{i, j}=a_{i, j}^{2}\left(x_{\mathbf{q}}^{l-1}-x_{\overline{\mathbf{p}}_{i}}^{l-1}\right)+a_{i, j} b_{i, j}\left(y_{\mathbf{q}}^{l-1}-y_{\overline{\mathbf{p}}_{i}}^{l-1}\right)-D_{i, j}, F_{i, j}=b_{i, j}^{2}\left(y_{\mathbf{q}}^{l-1}-y_{\overline{\mathbf{p}}_{i}}^{l-1}\right)+a_{i, j} b_{i, j}\left(x_{\mathbf{q}}^{l-1}\right. \left.-x_{\overline{\mathbf{p}}_{i}}^{l-1}\right)-D_{i, j},\)

2) LOS case: For sensor i, which obtains an LOS signal, we proceed as in (28). As the integral in (48) includes nonlinear terms, we use a first-order Taylor expansion. The integration in (48) can then be calculated, after some mathematical operations, as

\(\begin{aligned} &\sum_{j=1}^{M_{i}} \frac{1}{2 \sigma_{i}^{2}} \int \frac{1}{2 \pi \sigma_{\Delta \mathbf{p}_{i}}^{2}} \exp \left(-\frac{\left\|\Delta \mathbf{p}_{i}\right\|^{2}}{2 \sigma_{\Delta \mathbf{p}_{i}}^{2}}\right)\left(2 D_{i, j} \Delta \hat{d}_{i, j}-\Delta \hat{d}_{i, j}^{2}\right) d \Delta \mathbf{p}_{i}\\ &=\sum_{j=1}^{M_{i}}\left[\frac{D_{i, j}}{\sigma_{i}^{2}}\left(\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|+\frac{\bar{x}_{i}-q_{x}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|} \Delta x_{i}+\frac{\bar{y}_{i}-q_{y}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|} \Delta y_{i}\right)\right.\\ &\left.\quad-\frac{1}{2 \sigma_{i}^{2}}\left(2 \sigma_{\Delta \mathbf{p}_{i}}^{2}+\left\|\overline{\mathbf{p}}_{i}+\Delta \mathbf{p}_{i}-\mathbf{q}^{l-1}\right\|^{2}\right)\right] \end{aligned}\)       (50)

Appendix B

In this appendix, we give the details of \(\Delta \mathbf{p}^{l}=\left(\Delta x_{i}^{l}, \Delta y_{i}^{l}\right)\). For the NLOS case, we have

\(\begin{aligned} \Delta x_{i}^{l}&=\frac{\frac{\Delta y_{i}^{l-1}}{\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}+\sum_{j=1}^{M_{i}} \frac{1}{\sigma_{i}^{2}}\left(E_{i, j}-a_{i, j} b_{i, j} \Delta y_{i}^{l-1}\right)}{\frac{1}{\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2}}-\frac{1}{\sigma_{i}^{2}} \sum_{j=1}^{M_{i}} a_{i, j}^{2}}, \\ \Delta y_{i}^{l}&=\frac{\frac{\Delta x_{i}^{l-1}}{\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}+\sum_{j=1}^{M_{i}} \frac{1}{\sigma_{i}^{2}}\left(F_{i, j}-a_{i, j} b_{i, j} \Delta x_{i}^{l-1}\right)}{\frac{1}{\left(1-\rho^{2}\right) \sigma_{\Delta y_{i}}^{2}}-\frac{1}{\sigma_{i}^{2}} \sum_{j=1}^{M_{i}} b_{i, j}^{2}}, \end{aligned}\)

\(\sigma_{\Delta \mathbf{p}_{i}}=\sqrt{\frac{2\left(1-\rho^{2}\right)}{\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2} \sum_{j=1}^{M_{i}}\left(a_{i, j}^{2}+b_{i, j}^{2}\right)+\sigma_{i}^{2}\left(\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)}}\ \sigma_{i} \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}},\)       (51)

where

\(\mathrm{H}_{i, j}=\Delta \hat{d}_{i, j}+a_{i, j} \bar{x}_{i}+b_{i, j} \bar{y}_{i}-a_{0} x_{0}-b_{0} y_{0},\)       (52)


\(\mathrm{K}_{i, j}=a_{i, j} b_{i, j}+a_{0} b_{i, j}+a_{i, j} b_{0}+a_{0} b_{0}.\)       (53)

As for the LOS case in the centralized algorithm, we have \(\Delta \mathbf{p}^{l}=\left(\Delta x_{i}^{l}, \Delta y_{i}^{l}\right)\) as

\(\begin{aligned} \Delta x_{i}^{l}=&\left(D_{i, j}-\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|\right) \times\left[\left(\frac{\sigma_{i}^{2} \sigma_{\Delta x_{i}}^{2}+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}{\sigma_{i}^{2}\left(\sigma_{i}^{2}+\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}\right) \frac{\bar{x}_{i}-q_{x}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|}\right.\\ &\left.+\left(\frac{\rho \sigma_{i}^{2} \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}{\sigma_{i}^{2}\left(\sigma_{i}^{2}+\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}\right) \frac{\bar{y}_{i}-q_{y}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|}\right], \\ \Delta y_{i}^{l}=&\left(D_{i, j}-\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|\right) \times\left[\left(\frac{\rho \sigma_{i}^{2} \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}}}{\sigma_{i}^{2}\left(\sigma_{i}^{2}+\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}\right) \frac{\bar{x}_{i}-q_{x}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|}\right.\\ &\left.+\left(\frac{\sigma_{i}^{2} \sigma_{\Delta y_{i}}^{2}+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}{\sigma_{i}^{2}\left(\sigma_{i}^{2}+\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)+\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}}\right) \frac{\bar{y}_{i}-q_{y}^{l-1}}{\left\|\overline{\mathbf{p}}_{i}-\mathbf{q}^{l-1}\right\|}\right], \\ \sigma_{\Delta \mathbf{p}_{i}}=&\sqrt{\frac{2\left(1-\rho^{2}\right)}{2\left(1-\rho^{2}\right) \sigma_{\Delta x_{i}}^{2} \sigma_{\Delta y_{i}}^{2}+\sigma_{i}^{2}\left(\sigma_{\Delta x_{i}}^{2}+\sigma_{\Delta y_{i}}^{2}\right)}}\ \sigma_{i} \sigma_{\Delta x_{i}} \sigma_{\Delta y_{i}} \end{aligned}\)       (54)

 

References

  1. F. Zhao and L. J. Guibas, "Wireless Sensor Networks: An Information Processing Approach," The Morgan Kaufmann Series in Networking, pp. iii, 2004.
  2. I. F. Akyildiz, W. Su, and Y. Sankarasubramaniam, "Wireless Sensor Networks: A survey," Comput. Netw. J., vol. 38, pp. 393-422, Mar. 2002. https://doi.org/10.1016/S1389-1286(01)00302-4
  3. K. Witrisal, P. Meissner, and E. Leitinger, "High-Accuracy Localization for Assisted Living: 5G systems will turn multipath channels from foe to friend," IEEE Signal Processing Mag., vol. 33, no. 2, pp. 59-70, 2016. https://doi.org/10.1109/MSP.2015.2504328
  4. S. Gezici, G. Giannakis, H. Kobayashi, A. Molisch, H. Poor, and Z. Sahinoglu, "Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks," IEEE Signal Processing Mag., vol. 22, no. 4, pp. 70-84, 2005.
  5. M. Shon, M. Jo, H. Choo, "An interactive cluster-based MDS localization scheme for multimedia information in wireless sensor networks," Comput. Commun., vol. 35, no. 15, pp. 1921-1929, Sept. 2012. https://doi.org/10.1016/j.comcom.2012.05.002
  6. H. Wymeersch, J. Lien, and M. Z. Win, "Cooperative localization in wireless networks," Proc. IEEE, vol. 97, no. 2, pp. 427-450, 2009. https://doi.org/10.1109/JPROC.2008.2008853
  7. Y. Yu, "Consensus-Based Distributed Mixture Kalman Filter for Maneuvering Object Tracking in Wireless Sensor Networks," IEEE Trans. Veh. Technol., vol. 65, no. 10, pp. 8669-8681, 2016. https://doi.org/10.1109/TVT.2015.2508456
  8. V. Tran-Quang, T. Ngo-Quynh, and M. Jo, "A Lateration-localizing Algorithm for Energy-efficient Target Tracking in Wireless Sensor Networks," Ad Hoc Sens. Wirel. Netw., vol. 34, no. 1-4, pp. 191-220, Dec. 2016.
  9. G. Grisetti, R. Kümmerle, and C. Stachniss, "A tutorial on graph-based SLAM," IEEE Intell. Transp. Syst. Mag., vol. 2, no. 4, pp. 31-43, 2010. https://doi.org/10.1109/MITS.2010.939925
  10. F. Yin, C. Fritsche, F. Gustafsson, and A. M. Zoubir, "TOA-based robust wireless geolocation and Cramér-Rao lower bound analysis in harsh LOS/NLOS environments," IEEE Trans. Signal Processing, vol. 61, no. 9, pp. 2243-2255, 2013. https://doi.org/10.1109/TSP.2013.2251341
  11. M. Z. Win, W. Dai, Y. Shen, G. Chrisikos and H. Vincent Poor, "Network Operation Strategies for Efficient Localization and Navigation," Proc. IEEE, vol. 106, no. 7, pp. 1224-1254, July 2018. https://doi.org/10.1109/JPROC.2018.2835314
  12. M. Z. Win, Y. Shen and W. Dai, "A Theoretical Foundation of Network Localization and Navigation," Proc. IEEE, vol. 106, no. 7, pp. 1136-1165, July 2018. https://doi.org/10.1109/JPROC.2018.2844553
  13. C. K. Seow and S. Y. Tan, "Non-Line-of-Sight Localization in Multipath Environments," IEEE Trans. Mob. Comput., vol. 7, no. 5, pp. 647-660, 2008. https://doi.org/10.1109/TMC.2007.70780
  14. L. Cong and W. Zhuang, "Non-line-of-sight error mitigation in TDOA mobile location," in Proc. of Global Telecommunications Conference (GLOBECOM), vol. 1, pp. 680-684, Nov. 2001.
  15. L. Yi, S. G. Razul, Z. Lin, and C.-M. See, "Individual AOA measurement detection algorithm for object tracking in mixed LOS/NLOS environments," in Proc. of IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 3924-3928, 2013.
  16. Y. Shen and M. Z. Win, "On the use of multipath geometry for wideband cooperative localization," in Proc. of Global Telecommunications Conference (GLOBECOM), pp. 1-6, 2009.
  17. W. Yuan, N. Wu, B. Etzlinger, H. Wang and J. Kuang, "Cooperative Joint Localization and Clock Synchronization Based on Gaussian Message Passing in Asynchronous Wireless Networks," IEEE Trans. Veh. Technol., vol. 65, no. 9, pp. 7258-7273, Sept. 2016. https://doi.org/10.1109/TVT.2016.2518185
  18. W. Dai, Y. Shen and M. Z. Win, "Energy-Efficient Network Navigation Algorithms," IEEE J. Sel. Areas Commun., vol. 33, no. 7, pp. 1418-1430, July 2015. https://doi.org/10.1109/JSAC.2015.2430271
  19. E. Leitinger, P. Meissner, and C. Rudisser, "Evaluation of Position-related Information in Multipath Components for Indoor Positioning," IEEE J. Sel. Areas Commun., vol. 33, no. 11, pp. 2313-2328, 2015. https://doi.org/10.1109/JSAC.2015.2430520
  20. J. Li, J. Conan, and S. Pierre, "Mobile terminal location for MIMO communication systems," IEEE Trans. Antennas Propagat., vol. 55, no. 8, pp. 2417-2420, 2007. https://doi.org/10.1109/TAP.2007.901862
  21. H. Miao, K. Yu, and M. Juntti, "Positioning for NLOS Propagation: Algorithm Derivations and Cramér-Rao Bounds," IEEE Trans. Veh. Technol., vol. 56, no. 5, pp. 2568-2580, 2007. https://doi.org/10.1109/TVT.2007.899948
  22. W. Xu, F. Quitin, M. Leng, W. P. Tay and S. Razul, "Distributed Localization of a RF Target in NLOS Environments," IEEE J. Sel. Areas Commun., vol. 33, no. 7, pp. 1317-1330, July 2015. https://doi.org/10.1109/JSAC.2015.2430152
  23. A. Athalye, V. Savic, and M. Bolic, "Novel Semi-Passive RFID System for Indoor Localization," IEEE Sensors J., vol. 13, no. 2, pp. 528-537, 2013. https://doi.org/10.1109/JSEN.2012.2220344
  24. Y. Xiong, N. Wu, Y. Shen and M. Z. Win, "Cooperative Network Synchronization: Asymptotic Analysis," IEEE Trans. Signal Processing, vol. 66, no. 3, pp. 757-772, Feb. 2018. https://doi.org/10.1109/TSP.2017.2759098
  25. Y. Xiong, N. Wu, H. Wang and J. Kuang, "Cooperative Detection-Assisted Localization in Wireless Networks in the Presence of Ranging Outliers," IEEE Trans. Commun., vol. 65, no. 12, pp. 5165-5179, Dec. 2017. https://doi.org/10.1109/TCOMM.2017.2744641
  26. T. Wang, Y. Shen, A. Conti and M. Z. Win, "Network Navigation With Scheduling: Error Evolution," IEEE Trans. Inf. Theory, vol. 63, no. 11, pp. 7509-7534, Nov. 2017. https://doi.org/10.1109/TIT.2017.2717582
  27. R. Olfati-Saber, J. A. Fax and R. M. Murray, "Consensus and Cooperation in Networked Multi-Agent Systems," Proc. IEEE, vol. 95, no. 1, pp. 215-233, 2007. https://doi.org/10.1109/JPROC.2006.887293
  28. O. Hlinka, O. Sluciak, F. Hlawatsch, P. M. Djuric, and M. Rupp, "Likelihood consensus and its application to distributed particle filtering," IEEE Trans. Signal Processing, vol. 60, no. 8, pp. 4334-4349, 2012. https://doi.org/10.1109/TSP.2012.2196697
  29. A. G. Dimakis, S. Kar, and J. M. F. Moura, "Gossip Algorithms for Distributed Signal Processing," Proc. IEEE, vol. 98, no. 11, pp. 1847-1864, 2010. https://doi.org/10.1109/JPROC.2010.2052531
  30. F. Yin, C. Fritsche, and D. Jin, "Cooperative Localization in WSNs Using Gaussian Mixture Modeling: Distributed ECM Algorithms," IEEE Trans. Signal Processing, vol. 63, no. 6, pp. 1448-1463, 2015. https://doi.org/10.1109/TSP.2015.2394300
  31. J. A. Fessler and A. O. Hero, "Space-Alternating Generalized Expectation-Maximization Algorithm," IEEE Trans. Signal Processing, vol. 42, no. 10, pp. 2664-2677, 1994. https://doi.org/10.1109/78.324732
  32. B. Li, N. Wu, and H. Wang, "Expectation-maximisation-based localisation using anchors with uncertainties in wireless sensor networks," IET Commun., vol. 8, pp. 1977-1987, 2014. https://doi.org/10.1049/iet-com.2014.0025
  33. C. Constantinopoulos, M. K. Titsias and A. Likas, "Bayesian feature and model selection for Gaussian mixture models," IEEE Trans. Pattern Anal. Machine Intell., vol. 28, no. 6, pp. 1013-1018, June 2006. https://doi.org/10.1109/TPAMI.2006.111
  34. T. M. Cover and J. A. Thomas, "Elements of Information Theory," 1st ed. New York: Wiley, 2001.
  35. S. O. Rice, "Mathematical analysis of random noise," The Bell System Technical Journal, vol. 23, no. 3, pp. 282-332, July 1944. https://doi.org/10.1002/j.1538-7305.1944.tb00874.x
  36. I. S. Gradshteyn and I. M. Ryzhik, "Table of integrals, series, and products," Academic Press, London, 1980.
  37. W. Yuan, N. Wu, B. Etzlinger, Y. Li, C. Yan and L. Hanzo, "Expectation Maximization-Based Passive Localization Relying on Asynchronous Receivers: Centralized versus Distributed Implementations," IEEE Trans. Commun., vol. 67, no. 1, pp. 668-681, 2019. https://doi.org/10.1109/TCOMM.2018.2866478
  38. R. Olfati-Saber, E. Franco, E. Frazzoli, and J. S. Shamma, "Belief consensus and distributed hypothesis testing in Sensor Networks," Lecture Notes in Control and Information Sciences, vol. 331, pp. 169-182, 2006.
  39. A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Royal Stat. Soc., Ser. B, vol. 39, no. 1, pp. 1-38, 1977.