State estimator for multisensor systems with irregular sampling and time-varying delays

This article addresses state estimation in linear time-varying systems with several sensors of different availability, randomly sampled in time and whose measurements are affected by a time-varying delay. The approach is based on a modification of the Kalman filter with a negative-time measurement update strategy, avoiding re-running the full standard Kalman filter, the use of full augmented-order models and the use of measurement reorganisation techniques, which leads to an algorithm with a lower implementation cost. The update equations are run every time a new measurement is available, independently of the time when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements, and out-of-sequence measurements.


Introduction
In many industrial applications the control signal is updated at a fixed rate T, but the output cannot be measured at that control period due to the use of slow measuring procedures, such as image processing or chemical or biochemical analysis, that limit the sampling rate and introduce a (possibly varying) delay. Sometimes the output variable is measured by several sensors, each one having a different sampling rate, time delay and reliability. The use of communication networks between sensors and controllers may also introduce time-varying delays in the received measurements and random data dropouts. As a practical example, consider an industrial ceramic atomiser whose purpose is to produce ceramic powder with a predefined humidity content. The humidity can be measured by infrared online sensors, but with high noise and error. As a consequence, the humidity is also measured precisely with an analytical procedure that is performed infrequently by a human operator. The result is that there are two sets of available measurements: a noisy one obtained at regular intervals with no delay, and another one, more precise, obtained scarcely and irregularly in time with a varying delay (the time the operator spends in the analytical procedure). A similar problem occurs in biological or chemical reactors, where some concentrations can only be measured by analytical procedures that imply scarce and delayed measurement availability.
It is well known that the Kalman filter (Kalman 1960) is the tool that minimises the state prediction error variance when dealing with systems affected by Gaussian disturbances and noises. Some modifications of the original Kalman filter have been developed in order to take into account the existence of several measurable signals (states or outputs) that could be measured with redundant sensors of different precision, sampling rates and delays, leading to a multisensor system with possibly time-disordered measurements. The goal of this work is to define a state estimator that takes into account the random availability of samples (non-uniform sampling and out-of-sequence measurements) and time-varying delays.
Several works have been published in order to adapt the original Kalman filter to any sampling scenario when there are no delays in the system or measurement device. For example, in Sanchis and Albertos (2002), the Kalman filter is applied to sensorial fusion for output prediction under any sampling scenario without delay (synchronous, asynchronous, periodic and random). In Zhang, Basin, and Skliar (2004) an optimal estimator is developed for systems that receive measurements of different nature, continuous, multirate and random, but also without delay. In Li, Shah, and Xiao (2005) the Kalman filter equations are developed for the case of multirate control, where the measurements and input updates follow a given pattern. In Gupta, Chung, Hassibi, and Murray (2005) sensor scheduling is addressed to minimise the state error covariance, assuming that only one sensor is active at every sampling instant. This scheme is not applicable if random availability and time delays are present in the sensors or in the communication channel.
Also, different works have been developed trying to make use of delayed measurements in the Kalman filter. In that sense, Hsiao and Pan (1996) propose an extended-order model when a reduced constant delay is present. In Larsen, Andersen, Ravn, and Poulsen (1998) the introduction of delayed measurements is dealt with by extrapolating the delayed measurement to the actual instant using the previous estimations of the Kalman filter, and calculating the optimum gain for that extrapolated measurement. For that purpose, the covariance matrix is updated at the delayed instant, i.e. at the instant when the sample was taken, and before the measurement is available. For this reason the Kalman filter is suboptimal until the measurement arrives, and the authors propose the computation of two parallel Kalman filters to avoid this. For this strategy to be applied, the device that computes the Kalman filter needs to know when the samples of the output occur, before the measurement is available, which in many cases is not a practical situation. The methodology presented there does not address the existence of several sensors with different delays and availability.
A reorganisation technique of delayed measurements has been used to avoid the use of an extended-order model, leading in Lu, Zhang, Wang, and Teo (2005) to d̄ Kalman filters (d̄ being the maximum discrete delay) computed in parallel, and leading in Wang, Zhang, and Duan (2005) to an algorithm that is only updated when a set of d̄ measurements of every sensor has been gathered, which delays the use of the available information in the Kalman filter and results in a complex, computationally expensive algorithm.
In Basin and Martinez-Zuñiga (2004) a continuous-time optimum filter is derived for continuous-time systems with multiple, but constant, measurement delays. In Basin, Rodriguez-Gonzalez, and Martinez-Zuñiga (2005) a Kalman filter is derived also for continuous-time systems with constant state delay, while in Zhang, Basin, and Skliar (2007) an optimal state estimator for continuous systems with both continuous and sampled measurements with delays is proposed.
In Julier and Uhlmann (2005) the fusion of delayed measurements is studied, under the assumption that the time delays are uncertain.
More recently, in Gopalakrishnan, Kaisare, and Narasimhan (2011), a comparative review of different strategies to incorporate delayed and infrequent measurements in Extended Kalman Filter estimators is developed. The first alternative described is the full filter recalculation every time a delayed measurement arrives. This implies storing all past states, measurement updates, gains and covariance matrices up to the maximum possible delay. This strategy is optimal, but impractical for long delays due to the high computational cost. The second approach is called Alexander's method, and consists of updating the gain and covariance matrices from the instant when the measurement was taken until the measurement is received, and then adding a correction term to the state estimate. The drawback of this strategy is that it requires the algorithm to know the instant when the measurement is taken before the measurement is available. The third alternative is the same as the previous one, but adding a second Kalman filter that is computed from the instant when the measurement is taken until it arrives; it has the same drawback as the previous one: the measuring instant must be known before the measurement is received. The other algorithms studied in that work are based on state augmentation. The first one is fixed-lag smoothing, which consists of augmenting the state with the past d̄ states (where d̄ is the maximum delay). A conventional Kalman filter is applied to the resulting d̄n-order model. This filter is optimal, but it implies a high computational cost for long delays. If the measurement dimension is lower than the state dimension, a lower cost alternative is to augment the state with the estimates of the measurements. The resulting model order is lower, but it may still be large for long delays.
The lowest cost state augmentation approach is the sample-state augmentation, where the state is augmented only with the value of the state at the measuring instant, but the time when the measurement was taken is needed before the measurement is received.
The most recent published work is Moayedi, Foo, and Soh (2011), where a simplified state augmentation approach is proposed. The state is augmented with the past d̄ states. The simplification consists of updating only the current state when a measurement is available, leaving the previous state estimates unchanged. The updating gain is then of order n, and not d̄n. The size of the variance-covariance matrix is d̄n × d̄n, but the computational cost of the updating equation of this matrix is not so high due to the special structure of the updating gain, which has only n non-zero rows. Of course, the approach leads to a suboptimal estimation compared to the full state augmentation schemes. This approach can be applied if the delay is only known when the measurement is received, and admits out-of-sequence measurements.
In this work, the authors propose a different approach for state estimation in systems with scarce measurements and time-varying delays that can be applied when the delay is only known at the instant when the measurement is received, and also admits out-of-sequence measurements (as can occur in networked control systems). As in Moayedi et al. (2011), the idea is to reduce the computational cost of the state augmentation schemes. In this case, a small (order n) variance-covariance matrix is calculated and stored, but an augmented state of size d̄n is used. The updating gain is also of order n, but the full augmented state vector is updated every time a measurement is received. The approach is based on the use of the negative-time measurement update, and is first developed for general time-variant systems, and then particularised for time-invariant systems. The approach is suboptimal, but the loss in performance is low compared to the optimal filter, and the computational cost is much lower. The proposed algorithm can be applied in any sampling situation: multisensor sets with different sampling rates, time-varying sampling, random availability of measurements, different time-varying delays in each sensor and out-of-sequence measurements. Compared with the approach in Moayedi et al. (2011), the computational cost is similar, but the performance is better, especially if the system is unstable, as shown in the comparative example developed.
The layout of this article is as follows. In Section 2 the plant and the sampling scenario are described. In Section 3 the proposed estimation algorithm is described. The particularisation of the algorithm for linear time invariant systems is detailed in Section 4. Some numerical examples are analysed in Section 5, showing the validity of the proposal. Finally, in Section 6 the main conclusions are summarised.

Problem statement
The process is assumed to be a discrete-time linear time-variant system modelled by the equations

x[t + 1] = A[t] x[t] + B[t] u[t] + w[t],
z[t] = C_z[t] x[t],    (1)

where x ∈ R^n is the state, u ∈ R^{n_u} is the known input vector, w ∈ R^n is the unmeasurable state disturbance vector and z ∈ R^{n_z} is the targeted variables output vector.
As previously mentioned, the sensor measurements are assumed to be available only at discrete instants of time t = t_k, being a noisy and delayed function of the measurable outputs:

m_{i,k} = c_i x[t_k − d_{i,k}] + v_{i,k},    i = 1, …, n_m,    (2)

where d_{i,k} is the discrete delay (measured in number of control periods) assigned to sensor i in the k-th sampling (not all sensor measurements are available at each sampling time). The elapsed time between the last two consecutive sampling instants is denoted by T_k = t_k − t_{k−1}, and d̄ is an upper bound of the sensor delay. The availability factor α_{i,k}, defined as α_{i,k} = 1 if measurement m_{i,k} is available at t_k and α_{i,k} = 0 otherwise, indicates which are the available measurements at the k-th sampling instant, and the number of available measurements is denoted by γ_k.
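As an illustration of the sampling scenario described above, the following sketch simulates a two-sensor system with 50% availability and random delays up to d̄ = 5. All numerical values (A, the sensor rows c_i, the noise levels) are hypothetical and not taken from this article:

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_m, d_bar, N = 2, 2, 5, 50          # state size, sensors, max delay d_bar, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])  # hypothetical state matrix (constant here)
C = np.array([[1.0, 0.0], [0.0, 1.0]])  # row c_i of each sensor
sigma = [0.1, 0.02]                     # sensor noise standard deviations

# Simulate the true state and keep its history (needed for delayed readings)
X = np.zeros((N + 1, n))
X[0] = [1.0, 0.0]
for t in range(N):
    X[t + 1] = A @ X[t] + 0.05 * rng.standard_normal(n)

# At each sampling instant each sensor is available with probability 0.5
# (availability factor alpha_{i,k}); an available sensor reports a noisy
# measurement of the state taken d_{i,k} control periods earlier.
measurements = []  # tuples (t_k, sensor index i, delay d_ik, value m_ik)
for t_k in range(d_bar, N + 1):
    for i in range(n_m):
        if rng.random() < 0.5:                    # alpha_{i,k} = 1
            d_ik = int(rng.integers(0, d_bar + 1))
            m_ik = C[i] @ X[t_k - d_ik] + sigma[i] * rng.standard_normal()
            measurements.append((t_k, i, d_ik, m_ik))
```

Because measurements taken at different instants may arrive at different sampling times, this scenario naturally produces out-of-sequence information.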

State estimation algorithm
The problem to be solved is how to use the available measurements of the plant, which are assumed to be scarce, with different delays and availability, and with the possibility of time-disordered information. Time-tagged measurements are assumed in order to solve this problem, but the time when the measurement was taken is only known after the measurement is received. This assumption is realistic in networked control systems and with slow sensors that need some time to produce a measurement value from a taken sample. The novelty of the proposed approach lies in the direct use of the delayed available measurements in the correction of the state estimation, without the need to run the Kalman filter backwards in order to introduce the measurements in time order, and without the need to wait until a complete set of sensor measurements is available. This allows dealing with all the available information, and even with measurements that arrive disordered in time. This problem has been solved by other authors with a state augmentation strategy, but the resulting computational cost is very high for long delays. In the presented approach, a standard-size variance-covariance matrix is used, leading to a much lower computational cost at the expense of a slightly suboptimal estimation.
The Kalman filter is based on the computation of both an estimate of the system state, x̂[t], and the state estimation error covariance matrix, defined as

P[t] = E{(x[t] − x̂[t]) (x[t] − x̂[t])^T}.    (4)

The computation is done by the propagation equation, where an initial estimate of the state (x̂[t|t − 1]) and covariance (P[t|t − 1]) is calculated using the process model, and by the updating equation, where the available measurement information is used to update the previous estimates (leading to x̂[t] = x̂[t|t] and P[t] = P[t|t]).

Propagation
The propagation equations must be calculated at every sampling instant T. The state estimation propagation equation is

x̂[t|t − 1] = A[t − 1] x̂[t − 1] + B[t − 1] u[t − 1],

and the state estimation error covariance matrix propagation equation is

P[t|t − 1] = A[t − 1] P[t − 1] A[t − 1]^T + W,

where P[t|t − 1] is defined as P[t|t − 1] = E{(x[t] − x̂[t|t − 1]) (x[t] − x̂[t|t − 1])^T}. When there is no measurement available, there is no possible update, and the best state and covariance estimates are the previous (open-loop) ones, leading to x̂[t] = x̂[t|t − 1] and P[t] = P[t|t − 1].
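The propagation step is the standard Kalman prediction. A minimal sketch (generic matrix names, not tied to the article's notation beyond the equations above):

```python
import numpy as np

def propagate(x_hat, P, A_t, B_t, u_t, W):
    """One propagation (prediction) step: returns x_hat[t|t-1] and P[t|t-1].

    When no measurement is available, these predictions are kept as the
    best (open-loop) estimates."""
    x_pred = A_t @ x_hat + B_t @ u_t     # state propagation
    P_pred = A_t @ P @ A_t.T + W         # covariance propagation
    return x_pred, P_pred
```

For instance, with A = [[1, 0.1], [0, 1]], B = [[0], [1]], u = [1], P = I and W = 0.01 I, the predicted covariance is A A^T + 0.01 I.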

Update
The updating equations are only applicable when there is some measurement available at t = t_k. The proposed state estimation updating equation is

x̂[t_k] = x̂[t_k|t_k − 1] + Σ_{i=1}^{n_m} α_{i,k} ℓ_{i,k} (m_{i,k} − c_i x̂[t_k − d_{i,k}|t_k − 1]),    (5b)

where vector x̂[t_k − d_{i,k}|t_k − 1] is the best estimate of the state at instant t_k − d_{i,k} with all the information gathered until instant t_k − 1 and assuming null disturbances from t_k − d_{i,k} to t_k; it therefore fulfils the model propagation from t_k − d_{i,k} to t_k with the known inputs, and can be computed by inverting that propagation, i.e. by running the model backwards from x̂[t_k|t_k − 1] over d_{i,k} periods. Vector ℓ_{i,k} is the updating gain for the i-th sensor (ℓ_{i,k} only exists at t_k if α_{i,k} = 1). As a result of the analysis developed in Theorem 3.1, the state estimation error covariance matrix is proposed to be updated as in (5d), where L_k is the matrix formed with the gains corresponding to the available sensors, and M_k is the γ_k × n_m matrix formed with the rows of an identity matrix that correspond to the positions of the elements m_{i,k} available at instant t_k. Therefore, vector ℓ_{i,k} corresponds to the i-th column of matrix L_k M_k. Note that, if there is no information available from sensor j, then ℓ_{j,k} = 0. C_{d,k} is the n_m × n matrix whose i-th row relates measurement m_{i,k} to the current state x[t_k], and C̄_{d,k} is the n_m × n d̄ matrix whose i-th row relates m_{i,k} to the disturbance history. If there is no measurement of sensor i, or if there is no delay associated with the i-th sensor in the k-th measurement, then the i-th row of both C_{d,k} and C̄_{d,k} is null. Matrix Ā_k contains the products of delayed state matrices, and matrix W̄ is the covariance matrix of the state disturbance for all possible delays. Finally, the process output estimate ẑ is computed from the state estimate at every sampling time. The next theorem defines the matrix gain that leads to the optimal estimation of the state with the state updating equation (5b).
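The delayed estimate x̂[t_k − d_{i,k}|t_k − 1] used in the update can be obtained by running the model backwards under the null-disturbance assumption. A minimal sketch of this back-propagation (the function name and the stored histories A_hist, B_hist, u_hist are illustrative assumptions):

```python
import numpy as np

def delayed_state_estimate(x_pred, A_hist, B_hist, u_hist, d):
    """Back-propagate x_hat[t_k|t_k-1] to x_hat[t_k-d|t_k-1] assuming null
    disturbances over the last d periods (the negative-time update idea).

    A_hist[j], B_hist[j], u_hist[j] hold A, B and u at instant t_k-1-j;
    np.linalg.solve applies the required inverse of each A without
    forming it explicitly."""
    x = x_pred.copy()
    for j in range(d):
        x = np.linalg.solve(A_hist[j], x - B_hist[j] @ u_hist[j])
    return x
```

Propagating two steps forward with known inputs and then back-propagating recovers the original state exactly, since no disturbances were injected.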
Theorem 3.1: Assume that the disturbance w[t] in system (1) is a white noise with covariance matrix W, and that the sensor noises in (2) are white noises with variances σ_i (i = 1, …, n_m). Define V = diag{σ_1, …, σ_{n_m}}. Also assume that the measurements are received time ordered. Under these assumptions, the gain matrix L_k that minimises the variance of the state estimation error in (5b) is the one given in (6).

Proof: The state at instant t_k can be related to the state at instant t_k − d_{i,k} through the model equations, and then measurement m_{i,k} can be written in terms of x[t_k], the disturbance history and the sensor noise. Introducing this expression in update equation (5b) and subtracting x[t_k] on both sides of the resulting expression leads to the error equation (7), where x̃[t] = x[t] − x̂[t] is the state estimation error, M_k v_k is the vector containing the noises of the available sensors (of size γ_k) and w̄_{d,k} is the extended disturbance vector containing the history of the signal w[t] from t_k − d̄ to t_k − 1. Using expression (7) in the definition of the covariance matrix (4) at the sampling instants t = t_k, P[t_k] can be expanded, where it has been considered that E{w[t_k − a] w[t_k − b]^T} = 0 for a ≠ b. As the estimation error x̃[t_k|t_k − 1] is affected by the disturbances w[t_k − j] for j = 1, …, d̄, the expectation E{x̃[t_k|t_k − 1] w̄_{d,k}^T} and its transpose must be calculated. In order to do this, the estimation error x̃[t_k|t_k − 1] is written as a function of x̃[t_k − d̄|t_k − 1] and all the disturbances from t_k − d̄ to t_k − 1, taking into account the assumption of time-ordered measurements. As x̃[t_k − d̄|t_k − 1] is not affected by w̄_{d,k}, the expectation follows directly, and the covariance matrix reduces to expression (11). To minimise the state estimation error variance, the trace of matrix P[t_k] must be minimised. Calculating the matrix derivative of the trace of (11) with respect to L_k and setting it to zero yields the updating gain (6) defined in the theorem.
If this gain is substituted in Equation (11), it leads to the covariance updating Equation (5d) given in the proposed algorithm. □

Remark 1: The previous theorem is valid (and the estimate is then optimal) if the measurements are assumed to be received time ordered. Nevertheless, the resulting algorithm can also be applied if this is not the case (i.e. if an older measurement is received after a newer one). In that case, the gain defined by the theorem is suboptimal. However, the low performance loss due to this suboptimality, together with the large reduction in computational load with respect to full filter recalculation or full state augmentation approaches, makes the proposed algorithm very useful in practice.
Remark 2: The proposed algorithm requires the inversion of matrices A[t]. This is only possible if the matrix is non-singular, i.e. if A[t] has no eigenvalues at the origin for any t. The system will have as many delays from input to state as eigenvalues at the origin, so, in order to avoid singularity, the delay must be transferred to the inputs, which are assumed to be known and can therefore be stored in a new delayed variable u′[t] = u[t − τ], τ being the input-to-state delay. With this, matrix A[t] will be non-singular and the previous algorithm will be applicable.
Remark 3: If the process is state delayed, the model equations include past states, and an extended-order notation must be used in order to apply the approaches in this work. The model equations must be rewritten using an augmented state vector that stacks the delayed states; the matrices and vectors of the resulting extended representation are the ones that apply in the algorithm explained in this work.

Remark 4: If there is a non-zero-mean disturbance modelled by a random walk (a constant disturbance would be a particular case), an extended-order model including the disturbance as a new state can be written. Let w_1 be the Gaussian disturbance with zero mean, and w_2 a random walk. As w_2 is a random walk, it can be written that w_2[t] = w_2[t − 1] + Δw_2[t − 1], where Δw_2[t − 1] is a Gaussian zero-mean variable. Using the new state vector x′[t] = [x[t]^T w_2[t]^T]^T and the new state disturbance vector w′[t] = [w_1[t]^T Δw_2[t]^T]^T, the model is reduced to a system affected by Gaussian disturbances, where the proposed approach can be directly applied.
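The augmentation of Remark 4 can be sketched as follows. The matrix G2, describing how the random walk w_2 enters the state, is an illustrative assumption (G2 = I would be the purely additive case):

```python
import numpy as np

def augment_random_walk(A, B, G2):
    """Extended model for a random-walk disturbance w2 (Remark 4):
    x'[t] = [x[t]; w2[t]] evolves with A_ext below and is driven by the
    zero-mean disturbance w'[t] = [w1[t]; dw2[t]].

    G2 (how w2 enters the state) is an assumption for illustration."""
    n, q = A.shape[0], G2.shape[1]
    A_ext = np.block([[A, G2],
                      [np.zeros((q, n)), np.eye(q)]])  # w2[t] = w2[t-1] + dw2[t-1]
    B_ext = np.vstack([B, np.zeros((q, B.shape[1]))])
    return A_ext, B_ext
```

The identity block in the lower-right corner encodes the random-walk dynamics, so the extended system is again driven only by zero-mean Gaussian disturbances.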
Remark 5: The previous representation is also valid when some of the sensors have non-negligible dynamics. To show this, take the process difference equations, where y denotes the signals connected to the different sensors, together with the difference equations associated with the sensors. Grouping both sets of equations, and taking the state of the whole plant (process plus sensors) as the one that joins the process state and the sensor states, x[t] = [x_p[t]^T x_s[t]^T]^T, the representation (1) is reached.

Implementation of the estimator
In this section an alternative way of implementing the proposed estimator is addressed. The idea is to use an extended state in order to avoid the multiple matrix inversions that are needed in the algorithm presented in the previous section.
Let us define the extended state vector X̂[t|t − 1] = [x̂[t|t − 1]^T x̂[t − 1|t − 1]^T … x̂[t − d̄|t − 1]^T]^T and the corresponding extended system matrices. Making use of these matrices, the resulting algorithm can be implemented with the propagation equations plus the updating equations, S_k being an auxiliary matrix computed at each sampling instant. Matrix Γ_k stacks the inverses of the products of delayed state matrices, (Π_j A[t_k − j])^{−1}, and can be computed recursively in order to avoid the matrix inversions. The delayed state estimation needed in the update equation can then be directly extracted from the extended state as x̂[t_k − d_{i,k}|t_k − 1]. Note that with this implementation of the algorithm, the computational cost due to matrix inversions is drastically reduced, but at the expense of an increase in the memory needed in the computing system.
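A sketch of the extended-state bookkeeping (propagation by shifting the stacked blocks, delayed-state extraction without any inversion); function names are illustrative:

```python
import numpy as np

def propagate_extended(X_ext, A_t, B_t, u_t, n, d_bar):
    """Propagate the stacked vector [x_hat[t|t-1]; x_hat[t-1]; ...; x_hat[t-d_bar]]:
    the newest block is predicted with the model and the remaining blocks
    are shifted down, so old estimates stay available for delayed updates."""
    x_new = A_t @ X_ext[:n] + B_t @ u_t
    return np.concatenate([x_new, X_ext[:n * d_bar]])

def delayed_estimate(X_ext, n, d):
    """Read x_hat[t-d] directly from the stacked vector: no inversion of A."""
    return X_ext[n * d: n * (d + 1)]
```

After one propagation, the block that was the head of the stack becomes the delay-1 block, which is exactly the delayed estimate needed by the update.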

State estimator for LTI systems
In the case of LTI systems, the matrices that define the behaviour of system (1) reduce to the constant ones A, B and c_i (i = 1, …, n_m). With this, the previous algorithm reduces to the propagation equations plus the update equations, which are applied when any sensor is available at t = t_k, ℓ_{i,k} being the i-th column of matrix L_k M_k, and the matrices C_{d,k} and C̄_{d,k} (of sizes n_m × n and n_m × n d̄) being defined by their rows; these rows are null if there is no delay in the sensor measurement. The delayed state estimation is computed by running the constant-matrix model backwards d_{i,k} steps from x̂[t_k|t_k − 1] with the known inputs, which involves the inverse powers A^{−d_{i,k}} (Equation (26)), and S_k is the corresponding auxiliary matrix.

Extended state implementation
In order to avoid the matrix inversions in the implementation of the state estimator, an extended-state-based implementation is now addressed. The idea is to store the delayed state estimations and to update them when a new set of measurements arrives, i.e. to avoid calculating x̂[t_k − d_{i,k}|t_k − 1] with (26); otherwise, there should be enough memory to allocate A^{−j} (j = 1, …, d̄). With this modification, the resulting algorithm is implemented with the propagation equations plus the updating equations, where X̂[t − 1] = X̂[t − 1|t − 1] and X̂[t|t − 1] = [x̂[t|t − 1]^T x̂[t − 1]^T … x̂[t − d̄]^T]^T, with A, B and Γ being constant matrices of sizes n(d̄ + 1) × n(d̄ + 1), n(d̄ + 1) × n_u and n(d̄ + 1) × n, respectively. With this notation, the delayed state estimation is obtained directly from the extended state, so no matrix inversion is needed to compute (26). The propagation equation of the state is equivalent to the standard propagation equation of order n plus the storage of the past state estimates (as in Moayedi et al. (2011)). As a difference with the approach in Moayedi et al. (2011), the size of the variance-covariance matrix is n × n instead of d̄n × d̄n; therefore, the computational cost associated with the covariance matrix update is much lower. On the other hand, the augmented state update equation has a higher computational cost than the one in Moayedi et al. (2011), because here all the past states are updated every time a measurement is received, while in Moayedi et al. (2011) the past states are simply stored and only the current state is updated. Taking into account the sizes of all the matrices in the algorithm, it is easy to calculate the approximate number of products that must be computed at every step:

d̄ (n^2 + n^2 n_m + n_m^2 n) + 3n^3 + 2n^2 n_m + 4n_m^2 n + n^2 + n_m^2 + 2n_m n,

which represents the same computational cost order, O(d̄), as the algorithm in Moayedi et al. (2011).
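The product count above can be evaluated directly, confirming the linear growth in d̄ (formula transcribed from the text):

```python
def products_per_step(n, n_m, d_bar):
    """Approximate number of products per step of the proposed algorithm,
    as given in the text: linear in the maximum delay d_bar."""
    return (d_bar * (n**2 + n**2 * n_m + n_m**2 * n)
            + 3 * n**3 + 2 * n**2 * n_m + 4 * n_m**2 * n
            + n**2 + n_m**2 + 2 * n_m * n)
```

For example, with n = 2 and n_m = 2, each extra delay step adds n^2 + n^2 n_m + n_m^2 n = 20 products, regardless of d̄.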
The behaviour of both algorithms is very similar if the system is stable. However, for unstable systems, the performance of the proposed algorithm is much better than the algorithm described in Moayedi et al. (2011).
The next examples show the comparative behaviour of both approaches.

Example 1
To evaluate the performance of the proposed algorithm and to compare it with the proposal of Moayedi et al. (2011), the state estimation of three different SISO systems is addressed. Systems G_1 and G_2 are both stable, but with very different damping ratios: 0.7 for G_1 and 0.05 for G_2. On the other hand, system G_3 has an integrator, and is thus unstable.
For the simulation study, the ZOH discrete-time models are obtained with a sampling time of 0.1 s, resulting in state-space models with C = [0 1] in all cases. A measurement noise with variance V = 0.08 and a process noise with covariance matrix W = [0.08 0.01; 0.01 0.08] are assumed. With respect to the sampling scenario, an average of 50% packet dropout is considered, while a random delay between 0 and 5 samples is assumed for the available measurements. The measurement pattern used in the simulations is depicted in Figure 1. It must be pointed out that this sampling scenario includes the reception of time-disordered measurements, and therefore the proposed approach is suboptimal, as explained in Remark 1. The state estimation error variances are presented in Table 1. These values are first compared with the full augmented state algorithm described in Gopalakrishnan et al. (2011), and second with the algorithm proposed in Moayedi et al. (2011). As can be noted, for system G_1 the results are very similar. For system G_2 the behaviour of the proposed algorithm is also similar to that of the full augmented state one; however, the performance of the algorithm described in Moayedi et al. (2011) is clearly worse. For system G_3 the performance of the proposed algorithm is worse than the augmented state optimal one, but it is still reasonable, while the algorithm in Moayedi et al. (2011) results in a very poor performance, as shown in Figure 2. This poor behaviour for unstable systems is due to the fact that the older states of the extended state vector are not updated when a measurement is received. If the system is unstable, the estimation error of those older states increases rapidly, and hence the corrections made with those wrong state estimates do not improve the estimation.
Only when a measurement without delay is received, the estimation is improved significantly, as seen in Figure 2, where the estimation error (especially in state 2) only decreases in the instants with zero measurement delay.
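The ZOH discretisation used to obtain the discrete-time models in this example can be reproduced as follows; the continuous-time matrices below correspond to a hypothetical double-integrator plant, not to the article's exact G_1 to G_3:

```python
import numpy as np

T = 0.1                                   # sampling time of 0.1 s
Ac = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical double-integrator plant
Bc = np.array([[0.0], [1.0]])

# ZOH discretisation via the augmented-matrix exponential:
# expm([[Ac, Bc], [0, 0]] * T) = [[Ad, Bd], [0, I]].
M = np.zeros((3, 3))
M[:2, :2] = Ac * T
M[:2, 2:] = Bc * T
# M is nilpotent here (M^3 = 0), so the exponential series truncates exactly.
expM = np.eye(3) + M + M @ M / 2
Ad, Bd = expM[:2, :2], expM[:2, 2:]
# Ad = [[1, 0.1], [0, 1]], Bd = [[0.005], [0.1]]
```

For non-nilpotent plants the same construction applies with a full matrix exponential (e.g. scipy.linalg.expm).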

Example 2
The aim of this example is to illustrate the application of the proposed algorithm to systems with the features addressed in Remarks 2-5. Consider a system defined by known matrices A_{p1}, A_{p2}, B_p and C_p, where w_2[t] is a white noise signal with known covariance matrix W_2. It is assumed that w_1[t] is a slow disturbance affecting the input that fulfils w_1[t] = w_1[t − 1] + Δw_1[t − 1], Δw_1[t − 1] being a zero-mean white noise signal with known covariance matrix W_1. Furthermore, the input signal u is stored so that a new delayed variable can be defined, as in Remark 2, in order to avoid extending the state to include input delays.
The output y is measured through a sensor modelled by a dynamic equation with known matrices A_s, B_s, c_{s,1,k} and d_{y,1,k}, where w_s[t] and v_{1,k} are white noise signals with known covariance matrices W_s and V.
Taking into account Remarks 2-4, an extended-order model can be defined. Furthermore, according to Remark 5, Equations (32) and (30) can be gathered into a single representation (34). Figure 3 shows the results of an implementation of the proposed algorithm under the previous assumptions, compared with the full augmented state algorithm described in Gopalakrishnan et al. (2011), which assumes white noise disturbances. The sampling scenario is the same as in Example 1, depicted in Figure 1. It can be noted that the proposed approach leads to an unbiased estimation, while a considerable estimation error is observed for the augmented state algorithm due to the presence of the slow disturbance.

Conclusions
This article presents a state estimation algorithm for systems with scarce, irregular measurements with time-varying delays, which can be applied when the delay is only known at the instant when the measurement is received, and also admits out-of-sequence measurements (as can occur in networked control systems). The algorithm is based on a modification of the discrete Kalman filter using the negative-time measurement update, and is first developed for general time-variant systems and then particularised for time-invariant systems. The computational cost of the proposed estimator is lower than that of the full state augmentation Kalman filter schemes. This is mainly because a small (order n) covariance matrix is calculated and stored, although an augmented state of size d̄n is used. The updating gain is also of order n, but the full augmented state vector is updated every time a measurement is received. The approach is suboptimal, but the loss of performance is low compared to the optimal filter, and the computational cost is much lower. The proposed algorithm can be applied in any sampling situation: multisensor sets with different sampling rates, time-varying sampling, random availability of measurements, different time-varying delays in each sensor and out-of-sequence measurements. Compared with the recently proposed approach in Moayedi et al. (2011), the computational cost is similar, but the performance is better, especially if the system is unstable, as has been shown in the comparative examples developed.
An interesting future research line could be to solve the inverse problem, i.e. to decide when to measure the different outputs to reach a given performance while minimising the resources (e.g. number of measurements or network bandwidth). This could be formulated as an offline scheduling problem, or as an online algorithm that decides which measurements should be taken at each instant. The extension of the algorithm to non-linear systems is also a possible future research line.