# 1. Introduction

The Linear Variable Displacement Transducer (LVDT), patented by G. B. Hoadley in 1940, is arranged with two sets of coils: one primary, and a secondary consisting of two coils connected differentially to provide the output. The coupling between the primary and secondary coils varies as the core (plunger) moves linearly, so the differential output voltage varies linearly with core position, and the displacement of the plunger is obtained by measuring this differential voltage. For this reason the LVDT is widely used in displacement measurement and control systems. Because of its mechanical structure and other factors, however, an LVDT often exhibits inherent nonlinear input-output characteristics. Sophisticated and precise winding machines are used to mitigate this, but it is difficult to make every LVDT equally linear. Nonlinearity also arises from changes in environmental conditions such as temperature and humidity. Such nonlinearities make direct digital readout impossible and restrict the usable range of the transducer: if a transducer is used over the full range of its nonlinear characteristic, the accuracy and sensitivity of measurement are severely affected. Moreover, the nonlinearity is usually time-varying and unpredictable, as it depends on many uncertain factors.

The literature suggests several approaches. In [1], the authors proposed a Functional Link Artificial Neural Network (FLANN), together with a practical setup, for the development of a linear LVDT. In conventional designs, sophisticated and precise winding machines are used to achieve nonlinearity compensation [2-4]. Digital signal processing techniques have been suggested to achieve better sensitivity and to implement the signal conditioning circuits [5, 6, 13]. It is reported in [7-9] that artificial neural network (ANN)-based inverse models can effectively compensate for the nonlinearity of sensors. An LVDT behaves nonlinearly when the core moves toward either of the secondary coils; in the middle (primary coil) region of the characteristic, the response to core movement is almost linear, which limits the range of operation to the primary coil region. Nonlinearity estimation and compensation for a capacitive pressure sensor (CPS) and an LVDT using different ANNs are proposed in [7-9]. In [14], CPS nonlinearities are compensated using neuro-fuzzy algorithms. In [15], calibration of a CPS using circuits is discussed. In [16], a CPS is calibrated using least-squares support vector regression, with a second CPS used for temperature compensation. In [17], the linear range is extended using a Hermite neural network algorithm, and in [18] a Chebyshev neural network algorithm is used for the same purpose. In [19], CPS nonlinearity is compensated using a hybrid Genetic Algorithm-Radial Basis Function neural network (HGA-RBF). In [20], a CPS is calibrated using DSP algorithms; in [21], a Functional Link ANN (FLANN); in [22], a Laguerre neural network; and in [23], a conventional ANN, where adaptation to the physical properties of the diaphragm and to temperature is also discussed.
In [24], the relation between diaphragm properties and CPS output is discussed; [25] examines the effect of dielectric properties, and [26] the effect of temperature, on CPS output. An intelligent pressure measurement technique is proposed as an improvement on the earlier reported work [23]. The technique is designed to obtain full-scale linearity over the input range and makes the output adaptive to variations in the physical properties of the diaphragm, the dielectric constant, and temperature, all using an optimized ANN model.

This paper is organized as follows: after the introduction in Section 1, a brief description of the LVDT is given in Section 2, along with the specifications and experimental observations of two different LVDTs. Section 3 presents the mathematical analysis of ELM, DE, and GA; the computer simulation study of the proposed models, using the experimental data of the two LVDTs, is also carried out in this section. Section 4 presents the results and discussion, with output performance curves before and after nonlinearity compensation using the specified algorithms. Finally, conclusions and future scope are discussed in Section 5.

# 2. Linear Variable Displacement Transducer (LVDT)

The LVDT consists of a primary coil and two secondary coils. The two secondary coils are connected differentially for providing the output. The secondary coils are located on the two sides of the primary coil on the bobbin or sleeve, and these two output windings (secondary coils) are connected in opposition to produce zero output at the middle position of the armature.

The lengths of the primary coil and of each of the two identical halves of the secondary winding are b and m, respectively. The coils have an inside radius ri and an outside radius ro, and the spacing between them is d. Inside the coils, a ferromagnetic armature of length La and radius ri (neglecting the bobbin thickness) moves in the axial direction. The number of turns in the primary coil is np, and ns is the number of turns in each secondary coil. The cross-sectional view of the LVDT is shown in Fig. 1. With a primary sinusoidal excitation voltage Vp and a current Ip (RMS) of frequency f, the RMS voltage v1 induced in the secondary coil S1 is

**Fig. 1.**Cross-sectional view of LVDT

and that in coil S2 is

where

x1 − distance penetrated by the armature toward the secondary coil S1 ; x2 − distance penetrated by the armature toward the secondary coil S2

The differential voltage v = v1 − v2 is thus given by

where x is the armature displacement and

k2 is a nonlinearity factor in (3), with the nonlinearity term ε being

The nonlinearity factor and nonlinearity term of (6) and (7) are calculated from the core movement of the LVDT; both depend on the geometric parameters of the corresponding LVDT. For a given accuracy and maximum displacement, the overall length of the transducer is minimized for x1 = b, assuming that at maximum displacement the armature does not emerge from the secondary coils. Taking the armature length La = 3b + 2d, neglecting 2d in comparison with b, and using (4), (3) can be simplified as

For a given primary sinusoidal excitation, the secondary output voltage v is nonlinear with respect to displacement x. This is shown in Fig. 2 in which the linear region of the plot is indicated as xm.

**Fig. 2.**Range of linear region of LVDT

This limitation is inherent in all differential systems, and the proposed methods of nonlinearity compensation rely mainly on appropriate design and arrangement of the coils. Some of these are as follows.

1) Balanced linear tapered secondary coils: the improvement in linearity range is not significant.
2) Overwound linear tapered secondary coils: linearity is improved over a certain range.
3) Balanced overwound linear tapered secondary coils: the range specification is similar to 2).
4) Balanced profile secondary coils: extends the linearity range by proper profiling of the secondary coils.
5) Complementary tapered windings: extends the linearity range as well, but the winding is quite complicated because sectionalized winding is required [1].

## 2.1 Linearity

One of the most desirable characteristics of a transducer is linearity, that is, an output linearly proportional to the input. Linearity is computed with reference to a straight line describing the relationship between output and input, drawn from the given calibration data by the method of least squares. This line is sometimes called the idealized straight line expressing the input-output relationship. Linearity is then simply a measure of the maximum deviation of any calibration point from this straight line.

Fig. 3 shows the actual calibration curve, i.e., the relationship between input and output, together with a straight line drawn from the origin by the method of least squares.

**Fig. 3.**Actual calibration curve

Eq. (9) expresses the nonlinearity as a percentage of the full-scale reading. It is desirable to keep the nonlinearity as small as possible, since this results in small measurement errors.
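Since Eq. (9) itself is not reproduced above, the sketch below assumes the usual definition implied by the text: the maximum deviation of the calibration points from the least-squares line, expressed as a percentage of the full-scale output. The data and function name are illustrative, not from the paper.

```python
import numpy as np

def percent_nonlinearity(displacement, voltage):
    """Fit a least-squares straight line to the calibration data and
    return the maximum deviation as a percentage of full-scale output."""
    x = np.asarray(displacement, dtype=float)
    v = np.asarray(voltage, dtype=float)
    slope, intercept = np.polyfit(x, v, 1)      # idealized straight line
    deviation = v - (slope * x + intercept)     # departure from the line
    full_scale = v.max() - v.min()              # full-scale output span
    return 100.0 * np.abs(deviation).max() / full_scale

# Hypothetical calibration data with a mild cubic nonlinearity
x = np.linspace(-10, 10, 11)
v = 0.5 * x + 0.002 * x**3
print(round(percent_nonlinearity(x, v), 2))
```

The same routine applied to the observations of Tables 2 and 3 would reproduce the percentages quoted for Figs. 4 and 5, assuming Eq. (9) matches this definition.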

## 2.2 Geometric parameters and experimental observations of LVDT

The performance of the LVDT is highly influenced by the transducer geometry, the arrangement of the primary and secondary windings, the quality of the core material, variations in excitation current and frequency, and changes in ambient and winding temperatures. The geometric parameters and specifications of a conventional LVDT are listed in Table 1 below.

In this research work, the performance of two different LVDTs is studied. The experimental data are collected from two LVDTs having the specifications listed in Table 1; the variable parameters of the conventional LVDT are chosen at the lowest range for LVDT-1 and at the highest range for LVDT-2. The data obtained from experiments on the two LVDTs are given in Tables 2 and 3, and their output response curves are shown in Figs. 4 and 5. It is clear that the output responses of both LVDTs show the presence of nonlinearity.

**Table 1.**Geometric parameters and specifications of LVDT

**Table 2.**Experimental observations of LVDT-1

**Table 3.**Experimental observations of LVDT-2

**Fig. 4.**Input-output response of LVDT-1 (% nonlinearity for LVDT-1 = 27.70%)

**Fig. 5.**Input-output response of LVDT-2 (% nonlinearity for LVDT-2 = 21.66%)

The percentage of nonlinearity is calculated using Eq. (9). The lowest-range LVDT (LVDT-1) has a higher percentage of nonlinearity than the highest-range LVDT (LVDT-2). It is therefore necessary to compensate the nonlinearity present in both LVDTs.

It is observed from the graphs of Figs. 4 and 5 that the relation between input displacement and output voltage of the LVDTs is nonlinear. The following algorithms are used in this work to compensate the nonlinearity of the two LVDTs:

AL-1: Extreme Learning Machine method (ELM)
AL-2: ANN trained by the Differential Evolution algorithm (ANN-DE)
AL-3: ANN trained by a Genetic Algorithm (GA-ANN)

# 3. Nonlinearity Compensation using Soft Computing Techniques

## 3.1 Extreme learning machine based nonlinearity compensation

Extreme Learning Machine (ELM) is a simple, tuning-free, three-step learning algorithm with an extremely fast learning speed. The hidden node parameters are independent not only of the training data but also of each other. Unlike conventional learning methods, which must see the training data before generating the hidden node parameters, ELM can generate them before seeing the training data. Unlike traditional gradient-based learning algorithms, which work only for differentiable activation functions, ELM works for all bounded, nonconstant, piecewise-continuous activation functions. Unlike gradient-based algorithms, which face issues such as local minima, improper learning rates, and overfitting, ELM reaches its solution directly without such difficulties. The ELM learning algorithm is also much simpler than many learning algorithms for neural networks and support vector machines. It is efficient for batch-mode, sequential, and incremental learning; it provides a unified learning model for regression and binary/multiclass classification; and it works with different hidden nodes, including random hidden nodes (random features) and kernels.

## 3.2 Single hidden layer feed-forward neural network

Recently, Huang et al. [31, 32] proposed a new learning algorithm for the single hidden layer feed-forward neural network architecture, called the Extreme Learning Machine (ELM), which overcomes the problems caused by gradient-descent-based algorithms such as backpropagation and significantly reduces the time needed to train a neural network. It randomly chooses the input weights and analytically determines the output weights of the SLFN. It achieves better generalization performance with much faster learning, requires little human intervention, and can run thousands of times faster than conventional methods. Because all network parameters are determined analytically, it avoids trivial human intervention and is efficient for online and real-time applications. In summary, ELM offers ease of use, faster learning speed, higher generalization performance, and suitability for many nonlinear activation and kernel functions.

The output of a Single Hidden Layer Feed-forward Neural Network (SLFN) with L hidden nodes [34, 35], incorporating both additive and RBF hidden nodes in a unified way, can be represented as follows.

where ai and bi are the learning parameters of the hidden nodes and βi is the weight connecting the ith hidden node to the output node. G(ai, bi, x) is the output of the ith hidden node with respect to the input x. For an additive hidden node with activation function g(x): R → R (e.g., sigmoid or threshold), G(ai, bi, x) is given by

where ai is the weight vector connecting the input layer to the ith hidden node and bi is the bias of the ith hidden node; ai · x denotes the inner product of the vectors ai and x in Rn.

Consider N arbitrary distinct samples (xi, ti) ∈ Rn × Rm, where xi is an n×1 input vector and ti is an m×1 target vector. If an SLFN with L hidden nodes can approximate these N samples with zero error, then there exist βi, ai, and bi such that

The above equation can be written as

Hβ = T

where

with

H is called the hidden layer output matrix of the SLFN; the ith column of H is the ith hidden node’s output with respect to the inputs x1, x2, …, xN.
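The training procedure described above (random hidden node parameters, then a least-squares solve of Hβ = T via the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal illustration with made-up data, not the authors' implementation; the function names and dataset are assumptions.

```python
import numpy as np

def elm_train(X, T, L, seed=0):
    """Train an SLFN with L additive sigmoid hidden nodes by ELM:
    random (a_i, b_i), then beta = pinv(H) @ T."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.uniform(-1, 1, (L, n))                 # random input weights a_i
    b = rng.uniform(-1, 1, L)                      # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))       # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                   # Moore-Penrose solution of H beta = T
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))
    return H @ beta

# Hypothetical LVDT-like data: nonlinear voltage vs. displacement
x = np.linspace(-1, 1, 50).reshape(-1, 1)
t = x + 0.3 * x**3
A, b, beta = elm_train(x, t, L=20)
rmse = np.sqrt(np.mean((elm_predict(x, A, b, beta) - t) ** 2))
print(rmse < 1e-2)
```

Because only the linear system Hβ = T is solved, training reduces to a single pseudoinverse computation, which is why the training times reported later are near zero.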

From the observed readings of LVDT-1 and LVDT-2 shown in Tables 2 and 3, the simulation study has been carried out and the following results have been obtained.

The results of ELM-based nonlinearity compensation of the two LVDTs are listed in Tables 4 and 5. Two activation functions, sine and sigmoid, are used, and the training time, testing time, and root mean square error (RMSE) values are tabulated. The training and testing times are zero when the sine function is used for both LVDTs. Twenty hidden nodes are assigned to the ELM, 50 trials are conducted, and the average results are shown in Tables 4 and 5. Table 4 shows that the ELM spent 0 seconds of CPU time to obtain a testing RMSE of 0.0087 with the sine activation function, and 0.0156 seconds to obtain an RMSE of 0.0088 with the sigmoid activation function. Similarly, Table 5 shows that the ELM spent 0 seconds to obtain an RMSE of 0.0265 with the sine activation function and 0.0156 seconds to obtain an RMSE of 0.5513 with the sigmoid function. The ELM runs about 170 times faster than conventional BP algorithms.

**Table 4.**ELM based nonlinearity compensation of LVDT-1

**Table 5.**ELM based nonlinearity compensation of LVDT-2

## 3.3 Differential evolution algorithm based nonlinearity compensation

The Differential Evolution (DE) algorithm is a stochastic, population-based optimization algorithm introduced by Storn and Price in 1996, developed to optimize real-parameter, real-valued functions. Like genetic algorithms, it is population-based and uses similar operators: crossover, mutation, and selection. The main difference in how better solutions are constructed is that genetic algorithms rely on crossover, while DE relies on the mutation operation, which is based on the differences between randomly sampled pairs of solutions in the population. The algorithm uses mutation as a search mechanism and selection to direct the search toward promising regions of the search space. DE also uses a non-uniform crossover that can take child vector parameters from one parent more often than from the others. By using components of existing population members to construct trial vectors, the recombination (crossover) operator efficiently shuffles information about successful combinations, enabling the search for a better solution. An optimization task with D parameters is represented by a D-dimensional vector. DE starts by randomly creating a population of NP solution vectors, which is successively improved by applying the mutation, crossover, and selection operators. The main steps of the DE algorithm are as follows:

**Fig. 6.**General evolutionary algorithm procedure

The general problem formulation is:

For an objective function f : X → R defined on the feasible region X, the minimization problem is to find x* ∈ X such that f(x*) ≤ f(x) ∀x ∈ X, where f(x*) ≠ −∞.

Suppose we want to optimize a function with D real parameters. We must select the size of the population N, which must be at least 4. The parameter vectors have the form:

Where, G is the generation number.

Initialization:

Define upper and lower bounds for each parameter:

Randomly select the initial parameter values uniformly on the intervals:

Mutation:

Each of the N parameter vectors undergoes mutation, recombination and selection. Mutation expands the search space.

For a given parameter vector xi,G, randomly select three other vectors xr1,G, xr2,G and xr3,G such that the indices i, r1, r2 and r3 are distinct.

Add the weighted difference of two of the vectors to the third

The mutation factor F is a constant in [0, 2], and vi,G+1 is called the donor vector.

Recombination:

Recombination incorporates successful solutions from the previous generation. The trial vector ui,G+1 is developed from the elements of the target vector, xi,G and the elements of the donor vector, vi,G+1

Elements of the donor vector enter the trial vector with probability CR

randj,i ~ U[0,1]; Irand is a random integer from {1, 2, …, D}. Irand ensures that ui,G+1 ≠ xi,G

Selection:

The target vector xi,G is compared with the trial vector ui,G+1, and the one with the lower function value is admitted to the next generation.

Mutation, recombination and selection continue until some stopping criterion is reached.
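The initialization, mutation, recombination, and selection steps above can be sketched as a DE/rand/1/bin loop. The objective function, bounds, and parameter values below are illustrative, not those used in the paper.

```python
import numpy as np

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, Gmax=200, seed=0):
    """DE/rand/1/bin: mutation v = x_r1 + F*(x_r2 - x_r3), binomial
    crossover with rate CR, and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = rng.uniform(lo, hi, (NP, D))              # random initial population
    cost = np.array([f(x) for x in pop])
    for _ in range(Gmax):
        for i in range(NP):
            # three distinct vectors, all different from x_i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)  # donor vector
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True           # I_rand: at least one donor gene
            u = np.where(cross, v, pop[i])          # trial vector
            fu = f(u)
            if fu <= cost[i]:                       # greedy selection
                pop[i], cost[i] = u, fu
    best = cost.argmin()
    return pop[best], cost[best]

# Minimize the sphere function as a toy objective
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5)] * 3)
print(f_best < 1e-6)
```

In the ANN-DE setting described in this section, the vector being evolved would hold the network weights and f would return the training MSE; that mapping is an assumption about the setup, as the paper does not spell it out.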

It has been observed from the graphs of Figs. 4 and 5 that the relation between input displacement and output voltage of the LVDTs is nonlinear before compensation; after compensation by the DE algorithm, the nonlinearity is successfully removed. The DE algorithm has a few control parameters: the population size NP, the scaling factor F, the combination coefficient K, and the crossover rate CR. The problem-specific parameters are the maximum number of generations Gmax and the problem dimension D; the values of these two parameters depend on the problem to be optimized. From the observed readings of LVDT-1 and LVDT-2 shown in Tables 2 and 3, the simulation study has been carried out and the following results have been obtained.

In the simulations, the value of the scaling factor significantly affected the performance of DE, as can be seen in Tables 6 and 7. To get the best performance from DE, the scaling factor F and the crossover rate CR must be optimally tuned for each function, which is a time-consuming task. For simplicity and flexibility, the value of F was chosen randomly in [0, 2] and the value of CR in [0, 1] for each generation, instead of using constant values. The DE algorithm was run 1000 times for each function to obtain average results; for each run, the initial population was created randomly using a different seed. The corresponding MSE values and average training times are listed in Tables 6 and 7.

**Table 6.**DE based nonlinearity compensation for LVDT-1

**Table 7.**DE based nonlinearity compensation for LVDT-2

## 3.4 ANN trained by Genetic algorithm based nonlinearity compensation

To guide ANN learning, a GA is employed to determine the best number of hidden layers and nodes, the learning rate, the momentum rate, and the weight optimization. With the GA, learning becomes faster and more effective. The flowchart of GANN weight optimization is shown in Fig. 7. In the first step, the weights are encoded into chromosome format; the second step defines a fitness function for evaluating each chromosome’s performance. This function must estimate the performance of a given neural network, and the measure usually used is the mean squared error (MSE). The error can be transformed into a fitness value using one of the two equations below.

**Fig. 7.**Flow chart of GANN weight optimization

In GANN for optimum topology, the neural network is defined by a “genetic encoding” in which the genotype is the encoding of the different characteristics of the MLP and the phenotype is the MLP itself. The genotype therefore contains the parameters related to the network architecture, i.e., the number of hidden layers (H), the number of neurons in each hidden layer (NH), and other genes representing the BP parameters. The most common parameters to be optimized are the learning rate (η) and the momentum (α), encoded as binary numbers. The parameter that seems to best describe the goodness of a network configuration is the number of epochs (ep) needed for learning; the goal is to minimize ep. The fitness function is:

The parameters of the GANN training algorithm are listed in Tables 8 and 9. After several runs, the genetic search returns approximately the same best solution each time, despite the use of different randomly generated populations and different population sizes, reaching the lowest MSE within very few generations. The maximum number of training cycles may be set relative to the size of the network.
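A minimal sketch of GA-based weight optimization for a small SLFN follows. It assumes common choices not spelled out in the paper: a flattened-weight chromosome, the fitness transform 1/(1+MSE), tournament selection, single-point crossover, Gaussian mutation, and elitism; the network size and data are illustrative.

```python
import numpy as np

def net_mse(chrom, X, T, H):
    """Decode a flat chromosome into SLFN weights (1 input, H hidden
    tanh nodes, 1 output) and return the mean squared error."""
    w1 = chrom[:H].reshape(1, H)            # input-to-hidden weights
    b1 = chrom[H:2 * H]                     # hidden biases
    w2 = chrom[2 * H:3 * H].reshape(H, 1)   # hidden-to-output weights
    b2 = chrom[3 * H]                       # output bias
    out = np.tanh(X @ w1 + b1) @ w2 + b2
    return float(np.mean((out - T) ** 2))

def ga_train(X, T, H=5, NP=50, gens=500, pc=0.8, pm=0.2, seed=0):
    rng = np.random.default_rng(seed)
    D = 3 * H + 1
    pop = rng.uniform(-1, 1, (NP, D))           # initial chromosomes
    for _ in range(gens):
        mse = np.array([net_mse(c, X, T, H) for c in pop])
        fit = 1.0 / (1.0 + mse)                 # MSE transformed to fitness
        # tournament selection (size 3)
        idx = rng.integers(0, NP, (NP, 3))
        parents = pop[idx[np.arange(NP), fit[idx].argmax(axis=1)]]
        # single-point crossover on consecutive parent pairs
        children = parents.copy()
        for i in range(0, NP - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, D)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # Gaussian mutation
        mask = rng.random((NP, D)) < pm
        children[mask] += rng.normal(0.0, 0.1, mask.sum())
        children[0] = pop[mse.argmin()]         # elitism: keep the best
        pop = children
    best = min(pop, key=lambda c: net_mse(c, X, T, H))
    return best, net_mse(best, X, T, H)

# Hypothetical LVDT-like calibration data (nonlinear voltage curve)
x = np.linspace(-1, 1, 40).reshape(-1, 1)
t = x + 0.3 * x ** 3
best, err = ga_train(x, t)
print(err < 0.05)
```

Here the GA searches weight space directly instead of computing gradients, which matches the motivation given above for using it in place of backpropagation.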

**Table 8.**ANN Trained by GA based nonlinearity compensation for LVDT-1

**Table 9.**ANN Trained by GA based nonlinearity compensation for LVDT-2

The first step in developing a neural network is to create a database for its training, testing, and validation. The output voltage of the LVDT forms the rows of the input data matrix, and the output matrix is the target matrix, consisting of data having a linear relation with the displacement. The process of finding the weights that achieve the desired output is called training. The optimized ANN is found by considering different algorithms with varying numbers of hidden layers, iterations, and epochs. The mean square error (MSE) is the average squared difference between outputs and targets; lower values are better, and zero means no error. For the ANN trained by GA, the number of iterations is initially set to 10 and the corresponding MSE and training time are noted; the iterations are then increased to 20 and training is repeated, and so on up to 100 iterations, noting the MSE and training time at each step.

From the observed readings of LVDT-1 and LVDT-2, the simulation study has been carried out and the following results have been obtained.

# 4. Results

A computer simulation is carried out in the MATLAB 12 environment using the experimental dataset. The experimental data are collected from two LVDTs having the specifications shown in Table 1, and the data obtained from experiments on them are given in Tables 2 and 3. The observed simulation results are shown in the figures listed below. It is observed that the ELM model yields the lowest training time (zero seconds) while obtaining better linearity in the overall response than the other methods, while the DE algorithm produces the lowest MSE value of 0.000311 for F=0.4, CR=0.9, and NP=100. The average training times and MSE values are compared in Table 10.

**Table 10.**Comparison of different methodologies for nonlinearity compensation of two different LVDTs (average best values)

# 5. Conclusion and Future Scope

This paper has proposed an Extreme Learning Machine (ELM) method and two optimized ANN models to adaptively compensate for the nonlinearity of two different LVDTs. In comparison, ELM-based nonlinearity compensation requires the least training time, while Differential Evolution (DE)-based compensation yields the best mean square error. The results reveal that the ELM method gives the best linearization approximation, compensating the nonlinearity with very little training time. The proposed algorithms offer low-complexity structures and simple testing and validation procedures. This adaptive approach can be applied to any transducer with a nonlinear characteristic, to make its output as linear as possible, and is also suitable for real-time implementation.

**Fig. 8.**ELM based nonlinearity compensation of LVDT-1 (sine function)

**Fig. 9.**ELM based nonlinearity compensation of LVDT-1 (sigmoid function)

**Fig. 10.**ELM based nonlinearity compensation of LVDT-2 (sine function)

**Fig. 11.**ELM based nonlinearity compensation of LVDT-2 (sigmoid function)

**Fig. 12.**DE algorithm based nonlinearity compensation of LVDT-1 (F=0.8, CR=0.5 & NP=100)

**Fig. 13.**DE algorithm based nonlinearity compensation of LVDT-1 (F=0.7, CR=0.4 & NP=100)

**Fig. 14.**DE algorithm based nonlinearity compensation of LVDT-2 (F=0.8, CR=0.5 & NP=100)

**Fig. 15.**GA-ANN based nonlinearity compensation of LVDT-1 (NP=10)

**Fig. 16.**GA-ANN based nonlinearity compensation of LVDT-1 (NP=20)

**Fig. 17.**GA-ANN based nonlinearity compensation of LVDT-1 (NP=100)

**Fig. 18.**GA-ANN based nonlinearity compensation of LVDT-2 (NP=10)

**Fig. 19.**GA-ANN based nonlinearity compensation of LVDT-2 (NP=30)

**Fig. 20.**GA-ANN based nonlinearity compensation of LVDT-2 (NP=50)