Co-design of H∞ jump observers for event-based measurements over networks

This work presents a strategy to minimise the network usage and the energy consumption of wireless battery-powered sensors in the observer problem over networks. The sensor nodes implement a periodic send-on-delta approach, sending a new measurement only when it deviates considerably from the previously sent one. The estimator node implements a jump observer whose gains are computed offline and depend on the combination of available new measurements. We bound the estimator performance as a function of the sending policies and then state the design procedure of the observer under fixed sending thresholds as a semidefinite programming problem. We address this problem first in a deterministic way and then, to reduce conservativeness, in a stochastic one, after obtaining bounds on the probabilities of having new measurements and solving a robust optimisation problem over the possible probabilities using a sum of squares decomposition. We relate the network usage to the sending thresholds and propose an iterative procedure for the design of those thresholds, minimising the network usage while guaranteeing a prescribed estimation performance. Simulation results and experimental analysis show the validity of the proposal and the reduction of network resources that can be achieved with the stochastic approach.


Introduction
With the increasing use of network technologies for process control, researchers focus on reducing the network data flow so as to increase flexibility when adding new devices (see Chen, Johansson, Olariu, Paschalidis, & Stojmenovic, 2011; Nagahara, Quevedo, & Østergaard, 2013). The sensor nodes can help by reducing their data transmissions with an event-based sending strategy (see Lunze & Lehmann, 2010), which furthermore helps to decrease maintenance costs if they are wireless and self-powered, as stated in Stark, Worthen, Lafortune, and Teneketzis (2002), Ploennigs, Vasyutynskyy, and Kabitzsch (2010). Some examples of using an energy-efficient sampling strategy in real-world applications are Beschi, Dormido, Sánchez, Visioli, and Yebra (2014), Beschi, Dormido, Sánchez, and Visioli (2014), Ruiz, Jiménez, Sánchez, and Dormido (2014).

State estimation plays a key role in networked control systems, as the state of the plant is rarely directly measured for control purposes and the output measurements are irregularly available due to communication constraints or packet dropouts (see Chen et al., 2011; Qiu, Feng, & Gao, 2012). The approaches found in the literature to address the state estimation problem with event-based sampling can be classified depending on the sending policy and on the communication or computational resources required on the sensor nodes. The authors in Nguyen and Suh (2007), Suh, Nguyen, and Ro (2007) use a send-on-delta (SOD) strategy, where the sensor node sends a new measurement if the currently acquired one differs by more than a given threshold from the last sent one. With those measurements, the estimator node implements a modified Kalman filter that uses the last acquired data and modifies the update equation to account for the lack of data by including a virtual noise.

* Corresponding author. Email: ipenarro@uji.es
In the work by Nguyen and Suh (2008), each node uses the integral of the difference between the last acquired measurement and the last sent one to decide whether it should send a new sample (send-on-area), while the authors in Sijs and Lazar (2012) combine SOD and time-triggered strategies in the sensor nodes. In other works, like Battistelli, Benavoli, and Chisci (2012), Millán, Orihuela, Jurado, Vivas, and Rubio (2013), the authors include a state estimator in each sensor node to decide the sending of new data (output or state estimation), while in Wu, Jia, Johansson, and Shi (2013), the authors require the sensor node to receive and process additional information to decide whether it should send the measurement.
Motivated by reducing the computational effort of the estimator and the sensor nodes, we use a jump linear estimator that at each instant applies a precomputed gain depending on the availability of new measurements, while the nodes implement a SOD strategy with fixed thresholds. With the aim of extending the approaches found in the literature to a wider class of disturbances, we obtain the gains that guarantee an H∞ attenuation level by solving a linear matrix inequality (LMI) problem. To obtain less conservative results, we also derive the range of probabilities of having new measurements under the SOD mechanism. In this case, we bound the H∞ attenuation level for all the possible probabilities in the range with sum of squares (SOS) techniques (Chesi, 2010).
The use of a jump linear estimator instead of a time-varying one, and the LMI formulation of the problem, would allow the proposal of this work to be easily extended to face, for instance, model uncertainties, models depending on time-varying parameters or with sector-bounded nonlinearities, time delays, packet dropouts or quantisation (see, for instance, Qiu, Feng, and Gao (2010), Qiu, Wei, and Karimi (2015), Qiu, Feng, and Yang (2009)). The LMI formulation also yields a bound on the state estimation error and the possibility of extending the observer design presented in this work to the design of inferential controllers or fault diagnosis systems.
Some works have shown that there is a trade-off between communication rate and estimation quality (Wu et al., 2013). The authors in Wang and Lemmon (2009), Dai, Lin, and Ge (2010), Gaid, Cela, and Hamam (2006), Irwin, Chen, McKernan, and Scanlon (2010) named the problem of optimising the network usage while assuring some performance measure the co-design problem. The works by Sijs and Lazar (2012), Nguyen and Suh (2009) addressed this problem through the time-triggering condition, and Suh et al. (2007) addressed it by deciding the threshold levels of sensors implementing a SOD strategy. In the latter work, the authors modelled the network usage with a Gaussian probability distribution of the system outputs.
Motivated by extending the applicability of the co-design procedure to more general cases, we use the bounds on the probability of having new transmissions to measure the network usage and to guarantee tight bounds on the achievable performance of the estimator.
We consider the value of the threshold in each sensor node as a trade-off parameter between the network usage and the estimation performance. When the thresholds are fixed, we obtain a set of constant estimator gains that maximises the estimator performance following different strategies. First, we assume that no information about the outputs is known and develop a deterministic approach that guarantees poly-quadratic stability and a bound on the root mean square (RMS) norm of the state estimation error. For the second strategy, we assume some information about the output distribution and develop different stochastic approaches, formulated in terms of the probabilities of output transmissions, that guarantee mean square stability and a tighter bound on the RMS norm. Then, we address a co-design strategy with an iterative optimisation problem that returns both the estimator gains and the values of the thresholds that lead to the lowest data transmission for a given bound on the RMS norm.
The main contributions of this work are that we propose three alternatives for observer design over SOD measurements that can tighten the bound on the estimation error depending on the knowledge of the output distribution, and that we use those design formulations to address the co-design problem. The use of the network can also be alleviated depending on the assumptions that can be made for the measurable outputs. The results in this paper have the advantage of explicitly showing several tuning parameters that can help tighten the bounds on the estimation error and the network requirements.
Notation: Let A and B be matrices. A ≺ B means that the matrix A − B is negative definite; similarly, A ≻ B means that A − B is positive definite. diag{A, B} is a block diagonal matrix with A and B on its diagonal. Let x[t] ∈ R^n be a stochastic process. Expected value and probability are denoted by E{·} and P{·}. We write ‖x[t]‖²_RMS = lim_{T→∞} (1/T) Σ_{t=0}^{T} E{x[t]^T x[t]} for the squared RMS norm.

Problem statement
Consider a networked control system that updates the control action synchronously with the output measurement, and the plant model (1), where x ∈ R^n is the state, u ∈ R^{n_u} is the known input vector, w ∈ R^{n_w} is the unmeasurable state disturbance vector, y ∈ R^{n_y} is the measured output, v ∈ R^{n_y} is the measurement noise, and z[t] ∈ R^{n_z} is the signal of interest. Throughout this work, we assume that the control input is causally available at all times. This can be achieved when the controller and estimator are collocated and the control action is transmitted through a reliable network (without dropouts), see Figure 1. We assume that each measurable output uses a sensor node that acquires the measurement and decides whether to send it to the estimator node. Let us assume that sensor node i has sent a measured plant output to the estimation node through the communication network at period t = t_{k_i}, and we call it ȳ_i[t] = y_i[t_{k_i}] (where k_i enumerates the data sent from sensor i). Then, a new measurement will be sent if condition (2) holds, i.e., if the newly acquired measurement deviates from the last sent one by more than the threshold Δ_i.

Figure 1. Send-on-delta-based networked state estimator.
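As an illustration, the SOD triggering rule just described can be sketched as follows. This is a minimal scalar-output sketch with function names of our own choosing, not code from the paper:

```python
def sod_should_send(y_new, y_last_sent, delta):
    """Send-on-delta test: transmit when the new sample deviates from
    the last transmitted one by more than the threshold delta."""
    return abs(y_new - y_last_sent) > delta

def sod_filter(samples, delta):
    """Simulate one sensor channel; return the indices of the samples
    that the SOD mechanism would actually transmit."""
    sent = [0]                     # first sample is always transmitted
    y_last = samples[0]
    for t in range(1, len(samples)):
        if sod_should_send(samples[t], y_last, delta):
            y_last = samples[t]
            sent.append(t)
    return sent

signal = [0.0, 0.05, 0.2, 0.25, 0.8, 0.82]
print(sod_filter(signal, delta=0.1))   # -> [0, 2, 4]
```

Note how only two of the five later samples cross the threshold with respect to the previously sent value, which is precisely the network saving the strategy exploits.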
In that case, the sensor sends the (k_i + 1)th measurement, and ȳ_i[t] becomes y_i[t_{k_i+1}] for future reference. We assume that there is a central state estimator node that uses the messages received from the sensor nodes to perform the estimation using equation (3), and we model the relation of the received values with the actual state accordingly, being C_i the ith row of matrix C.

Remark 1:
While t ∈ N refers to each time instant, t_{k_i} (with k_i ∈ N) enumerates only the instants when the k_i th measurement from the ith sensor is received. For instance, if we receive the k_2 th and (k_2 + 1)th measurements from sensor 2 at instants t_{k_2} = 8 and t_{k_2+1} = 11, then instants t_{k_2} + 1 = 9 and t_{k_2} + 2 = 10 refer to instants when no measurements from sensor 2 are received.
Let us define α_i[t] as the availability factor of each sensor i, a binary variable that takes the value 1 if a new measurement is received from sensor node i and 0 otherwise. We define the availability matrix α[t] as a diagonal matrix including the α_i[t] factor of each sensor. We then model the available measurements of the outputs as in (5). Matrix α[t] can take different values depending on which measurements are successfully transmitted, and it belongs to a known set {η_0, η_1, …, η_q}, where each η_j denotes a possible combination of available measurements at each control period. We refer to those combinations as sampling scenarios. Matrix η_0 denotes the scenario with no available measurements, and q is the number of different scenarios with available measurements. In the general case, any combination of obtainable sensor measurements is possible, leading to q = 2^{n_y} − 1.

The first of our goals is to define a centralised observer that uses the scarcely received distributed data and the uncertainty knowledge. We propose the observer equation (3) and define the observer gain law L[t] as in (7), which leads to a jump observer. The gains take, in general, q + 1 different values within a predefined set {L_0, …, L_q}. The gains are computed offline once, and the centralised observer chooses the applicable gain depending on the availability of new measurements (see Smith and Seiler (2003), Dolz, Peñarrocha, and Sanchis (2014) for other jump observers applicable to networked control systems). With the estimator defined by (3) and (7), we obtain the state estimation error dynamics given by (9). As we restrict L(α[t]) to take q + 1 different values depending on the value of matrix α[t], we get a jump linear system with discrete state α[t] and a finite number of modes.
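The scenario enumeration and the gain-selection law can be sketched as follows. This is an illustrative sketch with hypothetical helper names, assuming the general case of q + 1 = 2^{n_y} scenarios (η_0 plus q = 2^{n_y} − 1 with data):

```python
import itertools
import numpy as np

def sampling_scenarios(n_y):
    """Enumerate the 2**n_y availability matrices eta_j as diagonal
    0/1 matrices; the all-zero matrix eta_0 (no data) comes first."""
    scenarios = [np.diag(bits)
                 for bits in itertools.product([0, 1], repeat=n_y)]
    scenarios.sort(key=lambda m: int(m.trace()))   # eta_0 first
    return scenarios

def select_gain(alpha, scenarios, gains):
    """Jump-observer gain law: pick the precomputed gain L_j that
    matches the current availability matrix alpha."""
    for eta, L in zip(scenarios, gains):
        if np.array_equal(alpha, eta):
            return L
    raise ValueError("unknown sampling scenario")

scen = sampling_scenarios(2)
print(len(scen))   # 4 scenarios: eta_0 plus q = 2**2 - 1 = 3 with data
```

At run time the observer only performs this table lookup; all the expensive gain computation happens offline, which is the point of the jump-observer structure.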

Remark 2:
The only condition for finding a stabilising observer is that the pair (A, CA) is detectable. Note that if we restrict the gains to be constant (i.e., L(α[t]) = L), the dynamics of the observer are given by the constant matrix (I − LC)A, which leads to the aforementioned condition. The idea of using virtual measurements when the real ones are not available is to assure the detectability of the system at each sampling instant and thus the stability of the observer, while the idea of adapting the gain to the sampling scenario α[t] is to avoid the propagation of the virtual noise.
The second of our goals is to jointly design the observer gains and the thresholds Δ_i that minimise the network usage while guaranteeing a predefined estimation performance. The network usage is proportional to the rate at which (2) occurs, so we achieve this goal by minimising a cost function related to the sending thresholds Δ_i. In this work, we present alternatives to bound the estimator performance and the network usage depending on Δ_i. For each of them, we calculate the minimum probability of receiving a measurement and the maximum variance of the resulting virtual noise δ[t].
We reformulate the main objective of this paper as the simultaneous design of the q + 1 gains L_j and the n_y thresholds Δ_i that minimise the network usage while guaranteeing a given bound on the estimation error.

Observer design
We present two jump observer design approaches for the SOD policy with fixed Δ_i that assure stability and an H∞ attenuation level. We propose first a deterministic strategy that does not require any assumption on the output statistics. Then, we propose different assumptions about the statistical information of the outputs and develop a stochastic strategy that allows us to tighten the bound on the achievable performance.

Deterministic approach
Theorem 1: Consider that observer (3) with gain (7) estimates the state of system (1), whose outputs are sent with the SOD policy. If there exist matrices P_j, Q_j, X_j (j = 0, 1, …, q) and positive values γ_w, γ_{v_i} and γ_{δ_i} such that the LMIs (11) hold, then, defining the observer gains as L_j = Q_j^{−1} X_j (j = 0, …, q), the following conditions are fulfilled: under null disturbances, the system is asymptotically stable, and, under null initial conditions, the state estimation error is bounded by (12).

Proof 1: If (11) holds, then Q_j + Q_j^T − P_j ≻ 0 and Q_j is a non-singular matrix. As P_j is a positive definite matrix, performing a congruence transformation of (11) by the matrix diag{Q_j, I, I, I, I} and applying Schur complements, we obtain (13). Consider a Lyapunov function depending on the sampling scenario, with P(α[t]) taking values in the set {P_0, …, P_q} depending on the value of α[t].

Multiplying expression (13) by [x̃[t]^T, w[t]^T, v[t]^T, δ[t]^T] on the left and by its transpose on the right, we obtain (14). Under null disturbances, (14) implies V[t + 1] < V[t], demonstrating the asymptotic stability of the observer. If we assume a null initial state estimation error (x̃[0] = 0, V[0] = 0) and add expression (14) from t = 0 to T, the Lyapunov terms telescope. As V[T + 1] > 0, dividing by T and taking the limit when T tends to infinity, we obtain (12).
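Since the displayed equations are not reproduced here, the summation step can be sketched in the standard H∞ accounting form; this is our reconstruction, assuming (14) has the usual dissipation structure with the weights γ_w, γ_{v_i}, γ_{δ_i} of Theorem 1:

```latex
% Assumed dissipation form of (14) (a reconstruction):
V[t+1]-V[t]+\tilde z[t]^{T}\tilde z[t]
  -\gamma_w w[t]^{T}w[t]
  -\sum_{i=1}^{n_y}\gamma_{v_i} v_i[t]^{2}
  -\sum_{i=1}^{n_y}\gamma_{\delta_i}\delta_i[t]^{2} < 0 .
% Summing from t=0 to T with V[0]=0 telescopes the Lyapunov terms:
V[T+1]+\sum_{t=0}^{T}\tilde z[t]^{T}\tilde z[t]
  < \gamma_w\sum_{t=0}^{T}w[t]^{T}w[t]
  + \sum_{i=1}^{n_y}\gamma_{v_i}\sum_{t=0}^{T}v_i[t]^{2}
  + \sum_{i=1}^{n_y}\gamma_{\delta_i}\sum_{t=0}^{T}\delta_i[t]^{2} .
% Dropping V[T+1]>0, dividing by T and letting T -> infinity yields (12):
\|\tilde z\|_{\mathrm{RMS}}^{2}
  \le \gamma_w\|w\|_{\mathrm{RMS}}^{2}
  + \sum_{i=1}^{n_y}\gamma_{v_i}\|v_i\|_{\mathrm{RMS}}^{2}
  + \sum_{i=1}^{n_y}\gamma_{\delta_i}\|\delta_i\|_{\mathrm{RMS}}^{2} .
```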

Remark 3:
As A and C are constant matrices, the only requirement for finding a stabilising observer is that the pair (A, CA) is detectable.

Stochastic approach
The previous theorem leads to conservative results because all possible sequences of new data reception are considered equally likely. For instance, it must respond satisfactorily even to the situation of acquiring just one first measurement at the start-up of the observer and then working indefinitely with that unique measurement. If the disturbances and noises are not negligible, we can assume that there is at least a small probability of acquiring new data, and that is the key used in the stochastic approach to reduce the conservativeness. The probability of having new data available at a given sampling instant depends on the last sent measurement, the inputs, the disturbances and the number of periods elapsed since t_{k_i} (let us call it N), as expressed in (16). The dependency of that difference on the inputs leads to a non-stationary probability that can change at every sampling instant.

As the difference includes the stochastic values w[t] and v[t], we assume that the probability belongs to the set S_i in (17). The value β_i[t] = 1 applies when the control action or the state evolution is sufficiently high to assure a new measurement transmission, while β_i[t] = β_i applies during the less excited periods (with x[t_{k_i}] = 0 and u[t] = 0 for t ≥ t_{k_i}), which lead to the least favourable scenario for acquiring new data, when only the disturbance and noise excite the SOD mechanism. If we choose β_i = 0, we recover the deterministic approach, but choosing β_i > 0 implies assuming that there is at least a small probability of acquiring new data, thus reducing conservatism.
The probability of obtaining a sampling scenario η_j (j = 1, …, q) is also non-stationary and is given by (18), where η_{j,i} refers to the ith diagonal entry of η_j. The probability of having no measurement available at a control period is given by (19), and the probability of sending some measurement is 1 − p_0[t].

Remark 4: In the stochastic approach, the probabilities β_i[t] (i = 1, …, n_y) are assumed to vary within two bounds. We will study in Section 3.3.2 how to obtain the lower bound on β_i[t] (see (29) and (33)). The upper bound is the natural one, β_i[t] ≤ 1, which is achieved when the control action is sufficiently exciting to make the outputs cross the thresholds continuously. Therefore, each probability β_i[t] is contained in the set S_i defined in (17). With these bounds on β_i[t], we can derive bounds on the probabilities of the sampling scenarios p_j[t] (j = 0, …, q).
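Assuming the per-sensor transmission events are independent, as the product form of (18) suggests, the scenario probabilities can be computed as in this sketch (helper names are ours):

```python
import itertools

def scenario_probabilities(betas):
    """Probability of each sampling scenario, in the spirit of (18):
    sensor i transmits with probability beta_i, independently of the
    other sensors. Keys are tuples of availability bits (eta diagonal)."""
    probs = {}
    for bits in itertools.product([0, 1], repeat=len(betas)):
        p = 1.0
        for b, beta in zip(bits, betas):
            p *= beta if b else (1.0 - beta)
        probs[bits] = p
    return probs

betas = [0.2, 0.5]
p = scenario_probabilities(betas)
p0 = p[(0, 0)]                           # no measurement received, cf. (19)
print(round(p0, 3), round(1 - p0, 3))    # prints 0.4 0.6
```

The value 1 − p_0 is the probability of sending some measurement, which is later reused as the stochastic network-usage cost index.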
With the probabilities of the sampling scenarios (18), we can obtain the set of gains that assure an attenuation level for any probability within the bounds. In the following theorem, we omit the dependency on time of the probabilities for brevity.
Theorem 2: Consider that observer (3) with gain (7) estimates the state of system (1), whose outputs are sent with the SOD policy. Assume that there exist matrices P = P^T ≻ 0, Q_j, X_j (j = 0, …, q) and positive values γ_w, γ_{v_i} and γ_{δ_i} (i = 1, …, n_y) such that (20) holds for any {β_1, …, β_{n_y}} ∈ S_1 × S_2 × ⋯ × S_{n_y}, where p_j is a short notation for expression (18). Then, if the observer gains are defined as L_j = Q_j^{−1} X_j (j = 0, …, q), the system is mean square stable and, under null initial conditions, the estimation error is bounded by (22).

Proof 2: Following similar steps to those of Proof 1, inequalities (20) imply (23), where the expectation at the next period is taken over the possible modes of the system (α[t] ∈ {η_0, …, η_q} in (9)). Assuming null disturbances, we obtain E{V[t + 1]} < E{V[t]}, assuring the mean square stability of the observer. Assuming a null initial state estimation error (x̃[0] = 0, V[0] = 0) and adding expression (23) from t = 0 to T, we obtain (24). As E{V[T + 1]} > 0, dividing by T and taking the limit when T tends to infinity, one finally obtains (22).

Remark 5:
The only condition for finding a solution to the previous LMI problem is that the system is detectable, as one can always choose a constant L and then use the fact that Σ_{i=0}^{q} p_i = 1, which leads to the detectability condition on the pair (A, CA).
The previous problem is an infinite-dimensional one, as it must be assured for any possible combination of the values β_i within the sets S_i (i = 1, …, n_y). In order to make the problem numerically tractable, we use the SOS decomposition (Chesi, 2010; Dolz, Peñarrocha, & Sanchis, 2015; Peñarrocha, Dolz, Aparicio, & Sanchis, 2014; Peñarrocha, Dolz, & Sanchis, 2013) to define sufficient conditions that meet the previous guaranteed performance. The idea is to consider the probabilities β_i[t] in the previous LMI constraint as new variables of the problem and thus transform it into a polynomial matrix inequality (PMI). Then, we express β_i ∈ S_i with a polynomial expression of the form π_i(β_i) ≥ 0. Finally, we check the positivity of the PMI for all values of β_i fulfilling π_i(β_i) ≥ 0 using the tools shown in the Appendix, which allow us to handle a PMI problem as an LMI one that can therefore be solved with standard LMI solvers.

Theorem 3: Assume that there exist matrices P = P^T ≻ 0, Q_j, X_j (j = 0, …, q), positive values γ_w, γ_{v_i} and γ_{δ_i} (i = 1, …, n_y) and SOS polynomials s_i(z, β) of fixed degree (with z a vector of proper dimensions) such that the corresponding SOS conditions hold. Then, the conditions of Theorem 2 are fulfilled.
Proof 3: First note that each of the sets S_i (i = 1, …, n_y) can be rewritten with its corresponding polynomial π_i as S_i = {β_i : π_i(β_i) ≥ 0}. Then, applying Lemmas 4 and 5 in the Appendix, it follows that the conditions of Theorem 2 are fulfilled for any β_i = β_i[t].
Remark 6: In the previous theorem, the variables in vectors β and z are used to construct the polynomials from which the LMI problem is derived, but they are not decision variables. The decision variables are P, Q_j, X_j (j = 0, …, q), γ_w, γ_{v_i}, γ_{δ_i} (i = 1, …, n_y) and the scalar coefficients used to construct the n_y polynomials s_i(z, β).
In any of the previous design approaches, we can reduce the computational cost of the observer implementation by imposing some restrictions on the gain matrices. We achieve the lowest computational cost when the matrices are forced to be equal, i.e., L_i = L_j for all i, j = 1, …, q. This can be achieved by imposing equality constraints on the matrices Q_j and X_j in problems (11) and (20).

Optimisation design procedure
Minimising the bound on the RMS norm of the estimation error subject to the design constraints leads to the jump observer that minimises the RMS value of the state estimation error under the corresponding assumption, where the constraints are the LMIs (11) for the deterministic approach and (20), for all β_i ∈ S_i, for the stochastic one. If the RMS values of the disturbances are unavailable, they can be used as tuning parameters to achieve a desired behaviour.
The previous optimisation procedure also applies when we can only bound the disturbances and sensor noises by the norms ‖w[t]‖_∞ and ‖v_i[t]‖_∞, as the RMS norm is bounded by the l∞ norm. In this case, we substitute the RMS norms in the previous optimisation procedure by the corresponding l∞ norms.
The optimisation in both approaches needs a bound for ‖δ_i[t]‖²_RMS, while the stochastic approach also needs a lower probability bound β_i. Furthermore, in order to proceed with the co-design problem in the next section, we need to express both bounds as explicit functions of Δ_i. We now discuss how to obtain those bounding functions for each of the approaches.

Deterministic approach design procedure
In the deterministic approach, we have the bound ‖δ_i[t]‖_RMS ≤ ‖δ_i[t]‖_∞ < Δ_i from the definition of the virtual noise signal. However, if a uniform distribution of δ_i[t] is assumed, this leads to ‖δ_i[t]‖_RMS < Δ_i/√3, which relaxes the optimisation problem. This assumption on the virtual noise distribution is commonly used in the literature, e.g., Suh et al. (2007).
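The Δ_i/√3 figure is the exact RMS value of a uniform variable on [−Δ_i, Δ_i]; a quick Monte Carlo check (illustrative only, not part of the design procedure):

```python
import math
import random

# If the virtual noise delta_i is uniform on [-Delta, Delta], its RMS
# value is Delta / sqrt(3); estimate it empirically to confirm.
random.seed(0)
Delta = 0.5
n = 200_000
samples = [random.uniform(-Delta, Delta) for _ in range(n)]
rms = math.sqrt(sum(s * s for s in samples) / n)
print(abs(rms - Delta / math.sqrt(3)) < 1e-2)   # True
```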

Stochastic approach design procedure
In the stochastic approach, we must obtain relationships showing the increase of β_i (in (17)) for lower values of Δ_i, as well as the increase of ‖δ_i[t]‖_RMS for higher values of Δ_i. In order to obtain those bounding relationships, we first note (from (16)) that during the less excited periods the difference y_i[t] − y_i[t_{k_i}] depends only on the disturbances and noises. The smallest change in the output corresponds to t = t_{k_i} + 1, and hence, to obtain a lower bound on the probability, N = 1 is taken. Therefore, we must first obtain the probability density function of that difference and use it to obtain both the lower bound on the probability of having a new sample and the corresponding expected RMS value of the virtual noise. This probability density function is tedious to obtain, as it requires recovering the density function of a sum of several signals with different distribution laws. For this reason, we present a simplification of its computation that allows us to obtain tractable expressions relating β_i and ‖δ_i[t]‖²_RMS with Δ_i by using two different assumptions on the outputs. In order to improve the readability of this section, we have included in Appendix A.1 the necessary but straightforward auxiliary results used to obtain these expressions.

Uniform assumption.
If we assume symmetrically bounded disturbances and noises, we can bound the difference y_i[t] − y_i[t_{k_i}] in (16) (for N = 1 and in the less excited scenario) within [−r_i, r_i], being r_i determined by the disturbance and noise bounds, where ‖C_i B w[t]‖_∞ (i = 1, …, n_y) can be computed from the bound on w. In this case, the probability of having a new measurement is bounded by (29) and the RMS norm is bounded by (30). See Lemma 1 in the Appendix for the details. If a sensor uses a threshold Δ_i > r_i, it will never send a measurement during the less excited scenario and, therefore, in that case β_i = 0 and σ²_{δ_i} = r_i²/6.
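The variance figure r_i²/6 matches the difference of two independent uniform variables on [−r_i/2, r_i/2] (a triangular distribution on [−r_i, r_i]); this is one reading consistent with the "difference of uniform output signals" assumption mentioned later, and the check below is only an illustration of that reading:

```python
import random

# Difference of two independent uniforms on [-r/2, r/2] is triangular
# on [-r, r] with variance r**2/6; estimate it by Monte Carlo.
random.seed(1)
r = 1.0
n = 200_000
d = [random.uniform(-r / 2, r / 2) - random.uniform(-r / 2, r / 2)
     for _ in range(n)]
var = sum(x * x for x in d) / n
print(abs(var - r * r / 6) < 5e-3)   # True
```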

Gaussian assumption.
If the disturbances and noises are zero-mean and distributed with covariances W and V_i, in the less excited scenario the difference y_i[t] − y_i[t_{k_i}] has zero mean and some variance σ_i². If our knowledge is limited to the RMS norms of the vector w and the noises v_i, we can bound σ_i² accordingly. Assuming that the difference between two consecutive samples follows a normal distribution with zero mean and variance σ_i², the probability of having a new measurement is bounded by (33), being erf(x) = (2/√π) ∫₀ˣ e^{−t²} dt the error function. In this case, the RMS norm is bounded by (34). See Lemma 2 in the Appendix for the details.
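For a zero-mean normal difference with variance σ_i², the probability of exceeding a threshold Δ_i is 1 − erf(Δ_i/(√2 σ_i)); the exact indexing and constants of (33) are not reproduced here, so the expression in this sketch is our reconstruction from the standard Gaussian tail formula:

```python
import math

def beta_lower_bound(delta, sigma):
    """Gaussian-assumption lower bound on the transmission probability:
    P{|X| >= delta} = 1 - erf(delta / (sqrt(2) * sigma)) for
    X ~ N(0, sigma**2). Our reconstruction of the bound in (33)."""
    return 1.0 - math.erf(delta / (math.sqrt(2.0) * sigma))

# A larger threshold (relative to the output variability) lowers the
# worst-case probability of transmitting, i.e. saves network usage.
print(round(beta_lower_bound(0.0, 1.0), 3))   # 1.0: zero threshold, always send
print(beta_lower_bound(3.0, 1.0) < 0.01)      # a 3-sigma threshold is rarely crossed
```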
Remark 7: If the system outputs do not exactly follow the previous distributions, we can use the values r_i and σ_i in (29)-(34) as tuning parameters. In that case, we must choose sufficiently small values of r_i and σ_i to assure that the computed probability of having new measurements is below the real one, and such that the computed variance of the virtual noise is higher than the real one. With that choice, we can at least compute a less conservative upper bound on the state estimation error than the one obtained with the deterministic approach. One of the advantages of having those bounding relationships is that they allow us to face the co-design problem (explained next), which consists in looking for the values of Δ_i that fulfil some estimation error and network usage requirements. In that sense, one could know in advance some maximum values Δ_{i,max} below which the search is carried out (e.g., some fraction of the output sensor range). In that case, the lowest values of r_i and σ_i that assure that the co-design problem is sensitive to Δ_i within all its range are r_i = Δ_{i,max} and σ_i = Δ_{i,max}/3 (following the 3σ criterion).

Observer co-design
Once we have developed the design procedure to minimise the estimation error for a given SOD policy, we now address the minimisation of the network usage guaranteeing a desired estimation error. We first propose the cost indexes to measure the network usage.
For the deterministic approach, without statistical information of the outputs, we propose the index J(Δ_{1:n_y}) in (35), where g_i are free weighting factors that can be used to account for the different ranges of variation of the different sensors, and Δ_{1:n_y} = [Δ_1 … Δ_{n_y}].
For the stochastic approach, we propose to use as the cost index the probability of network usage in the lowest excitation case, given in (36), where β_i(Δ_i) (i = 1, …, n_y) depends on Δ_i by means of (29) or (33). The actual probability of network usage will be close to this cost index only in the case of the lowest excitation, i.e., when the change of the output is minimal. When the excitation is larger (for example, when the input u changes), the probability of network usage will be higher. However, this usage will be proportional to the cost index and, hence, minimising the cost index reduces the network usage for the desired estimation error in any case.
We then obtain the observer that assures a prescribed bound z̄_rms on the estimation error, i.e., ‖z̃[t]‖²_RMS ≤ z̄_rms, and minimises the network usage J(Δ_{1:n_y}) by solving the optimisation problem (37). The new decision variable Δ_i appears both in the cost index and in the definition of σ²_{δ_i} used to bound ‖δ_i[t]‖²_RMS. In the deterministic approach, we express J(Δ_{1:n_y}) as (35), take the constraints as in (11), and use the bound σ²_{δ_i}(Δ_{1:n_y}) = Δ_i². Under the assumption of a uniform distribution of δ_i[t], we can relax the problem using the bound σ²_{δ_i}(Δ_{1:n_y}) = Δ_i²/3. In the stochastic approach, we express J(Δ_{1:n_y}) as (36), take the constraints as in (20), and express σ²_{δ_i}(Δ_{1:n_y}) as (30) or (34), depending on the output assumption. In this case, Δ_i appears in the bound of the probabilities β_i for which (20) must be positive definite.

Remark 8:
In the previous co-design problem, one must choose the desired bound z̄_rms for the estimation error. This desired bound should be higher than the one achievable with standard sampling (i.e., with Δ_{1:n_y} = 0) in order to have a solvable problem. A reasonable option is to express the desired bound in relative terms with respect to the one achievable for Δ_{1:n_y} = 0, which can be obtained with (26) and (11) with ‖δ_i[t]‖²_RMS = 0. In that case, the set of LMIs (11) could be simplified by eliminating the last row and column matrix blocks and using just the case j = k = q, where η_j = I (standard sampling). If we call that performance index z̄_0, then the desired bound in (37) can be expressed as z̄_rms = μ z̄_0 with μ > 1.
The optimisation problem (37) is non-linear in the variables Δ_i, but reduces to an LMI problem if we fix the values of Δ_i. Some approaches to solve this non-linear optimisation are brute force with a grid over Δ_i, greedy algorithms, and heuristic optimisation with genetic algorithms. If we use the latter together with the stochastic approach, the optimisation problem can be written as (38). In this work, we propose a greedy algorithm as an alternative to the previous optimisation problem. A greedy algorithm is a tree search where at each step we only explore the branch that locally optimises the problem, in the hope that this choice leads to a globally optimal solution (see Cormen, Leiserson, Rivest, & Stein, 2001). This kind of algorithm never goes back to previous solutions to change the search path and, hence, global optimality is not guaranteed. The advantage is the lower computational cost. We propose the following greedy algorithm to solve the previous co-design problems.
Step 4: Set i* = arg min_i z̃_i and take the corresponding threshold set.
The algorithm starts by considering small values of Δ_i and β_j ≈ 1, which corresponds to the standard periodic sampling case. Then, it iteratively reduces the communication cost index while possible. At each step, it calculates the n_y new sets Δ_{1:n_y} that lead to the new cost, changing one of the Δ_i in each set. Then, it selects the set that leads to the lowest z̃_i, i.e., the solution allowing the largest future search before the algorithm ends. It changes only one value Δ_i at each step.
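The greedy loop can be sketched as follows. This is a toy sketch only: `evaluate_error` stands in for solving the fixed-threshold LMI/SOS design of Theorems 1 or 3, and both callables and the grid are hypothetical, not the paper's API:

```python
def greedy_codesign(delta_grid, evaluate_error, usage_cost, z_max):
    """Greedy threshold search: starting near periodic sampling, grow
    one threshold per step along its grid, keeping the candidate with
    the lowest guaranteed error among those still below z_max."""
    current = [grid[0] for grid in delta_grid]   # smallest thresholds
    idx = [0] * len(delta_grid)
    improved = True
    while improved:
        improved = False
        best = None                              # (sensor, error, trial set)
        for i in range(len(delta_grid)):
            if idx[i] + 1 >= len(delta_grid[i]):
                continue                         # grid exhausted for sensor i
            trial = list(current)
            trial[i] = delta_grid[i][idx[i] + 1]
            err = evaluate_error(trial)          # stand-in for the LMI design
            if err <= z_max and (best is None or err < best[1]):
                best = (i, err, trial)
        if best is not None:
            i, _, current = best
            idx[i] += 1
            improved = True
    return current, usage_cost(current)

# Toy stand-ins: error grows with the thresholds, usage shrinks.
err = lambda d: sum(x * x for x in d)
cost = lambda d: sum(1.0 / (1.0 + x) for x in d)
grids = [[0.0, 0.1, 0.2, 0.3], [0.0, 0.1, 0.2, 0.3]]
d_opt, J = greedy_codesign(grids, err, cost, z_max=0.051)
print(d_opt)
```

As in the text, only one threshold changes per step and the algorithm never backtracks, so it trades global optimality for a number of LMI evaluations linear in the grid size.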
The previous iterative algorithm could be run for different values of the maximum allowed estimation error, leading to a set of smooth functions expressing the thresholds and the gains as a function of the associated network usage. Those functions could be implemented in the estimator node, allowing the parameters (thresholds and gains) to be changed when the state of the network requires it (for example, increasing the thresholds to reduce the network usage and avoid congestion). If the estimator node had high computing capabilities, an alternative would be to directly compute the thresholds and gains by running the full iterative optimisation algorithm on that node when required.

Examples
In this section, we show two different examples. In the first one, we explore the achievable trade-offs between estimation error and network usage for the approaches presented in this work and compare them with other strategies existing in the literature. In the second example, we apply the observer design based on SOD measurements to control the velocity of a real DC motor using an inferential control approach. In both examples, we aim to show the performance of the proposed approaches when neither of the considered output distribution assumptions holds, i.e., when the output distribution is neither uniform nor normal. For brevity, we will only explore the deterministic approach and the one based on the uniform output distribution assumption.

Simulation example
We consider the following discrete-time system, with ‖w[t]‖_RMS = 0.468. The control input is generated by a relay-based controller with a dead zone. The aim of this example is to show the performance of the co-design approach from Section 4, i.e., to minimise the network usage while guaranteeing that the estimation error is lower than a prescribed value. For this purpose, the following four approaches are analysed.
C1: Deterministic approach for a jump observer.
C2: Deterministic approach for a constant gain.
C3: Stochastic approach based on the uniform distribution assumption for a jump observer (see Section 3.3.2).
C4: Stochastic approach based on the uniform distribution assumption for a constant gain.
We choose the parameters r_i that define the uniform distribution assumption for each output from the corresponding expression. We quantify the network usage with the two cost functions presented in Section 4. For the deterministic cases C1 and C2, we use J = r_1/Δ_1 + r_2/Δ_2 (see (35)). However, when we characterise the measurement transmission by its probability (cases C3 and C4), we use J = 1 − p_0, that is, the probability of having any successful data transmission in the lowest excitation case (see (36)).
We denote by z̃_0 the error ‖z[t]‖_RMS resulting from the standard measurement transmission (i.e., Δ = 0), which turns out to be z̃_0 = 0.225. In this example, we analyse the results of the co-design procedures when fixing different values of z̃_rms in (38). We denote by μ the ratio between the desired performance and z̃_0, i.e., μ = z̃_rms/z̃_0. Figure 2 compares the thresholds Δ_i resulting from the co-design procedure (see Section 4) when imposing a ratio in the range 1 ≤ μ ≤ 3. The deterministic approaches C1 and C2 are both conservative and lead to the lowest thresholds, while the stochastic approaches C3 and C4 lead to the highest thresholds and, therefore, to the lowest network usage. The thresholds in C1 and C2 remain equal, which implies that using a jump observer in the deterministic approach does not improve the co-design with respect to a constant gain. However, when we have some knowledge about the probability of the different sampling scenarios (stochastic approach), the use of a jump observer (case C3) enlarges Δ_i at the expense of a higher computational complexity with respect to C4. Figure 3 shows the time-averaged probability β_i of having a new measurement from a given sensor and its virtual noise RMS norm σ²_{δ_i} as functions of the thresholds Δ_i of Figure 2, obtained from a Monte Carlo simulation. It also displays the results of assuming uniformly distributed outputs (see (29) and (30)) and of using the criterion in Suh et al. (2007). The choice of r = [0.9 0.75] results in lower probabilities and higher variances than in simulation. Therefore, the stochastic design will be conservative, but will guarantee the prescribed bound on the estimation error (see Section 3.3.2). The bound on the virtual noise RMS norm proposed in Suh et al. (2007), which assumes the virtual noise to be a uniform variable (i.e., ‖δ_i[t]‖²_RMS < Δ_i²/3), is more conservative than the one resulting from the difference-of-uniform-output-signals assumption that we propose in this work.
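A Monte Carlo check of this kind can be sketched as follows. This is an illustrative simulation on a synthetic i.i.d. uniform output, not the system of this example: it estimates the empirical transmission probability and virtual-noise power of a SOD sensor and compares the latter with the uniform-variable bound Δ²/3.

```python
import random

def sod_statistics(delta, n=100_000, seed=1):
    """Estimate the transmission probability and the virtual-noise power of
    a SOD sensor driven by a synthetic uniform output on [-1, 1]."""
    rng = random.Random(seed)
    y_last, sent, power = 0.0, 0, 0.0
    for _ in range(n):
        y = rng.uniform(-1.0, 1.0)      # illustrative output sample
        if abs(y - y_last) > delta:     # send-on-delta rule
            y_last, sent = y, sent + 1
        power += (y - y_last) ** 2      # virtual noise delta[t] = y - y_last
    return sent / n, power / n          # (beta, RMS power of virtual noise)

beta, rms2 = sod_statistics(delta=0.5)
# empirical virtual-noise power stays below the uniform bound delta**2 / 3
assert rms2 < 0.5 ** 2 / 3
```

In this synthetic setting the bound holds with margin, consistent with the conservativeness discussed above.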
Simulating the estimation algorithm with the SOD procedure for the thresholds in Figure 2, we obtain the number of sent measurements and the performances indicated in Figures 4 and 5, respectively, with the constant-gain approaches requiring fewer computational resources than case C3. Figure 5 shows whether the bound on the estimation error imposed in the co-design procedure is fulfilled in simulation. The deterministic approaches C1 and C2 are far below the maximum allowed estimation error. This is due to the conservativeness introduced by the virtual noise variance estimation proposed by Suh et al. (2007). The stochastic approaches C3 and C4 are also below the maximum allowed estimation error, but closer to it. The conservativeness in the stochastic design is introduced by the choice of the parameter r (see Figure 3). Note that the use of a jump observer (C1 and C3) leads to less conservative results (estimation errors closer to the allowed one) than the use of a constant-gain observer (C2 and C4). This closeness to the allowed error is what allows the jump observer to reach higher thresholds and to reduce the network usage.
To give an idea of the order of magnitude of the computational complexity of the design: in this example, one LMI is solved in about 1.7 seconds on an Intel Core i7-3770 computer, and a full co-design procedure takes about 70 seconds on average.
Let us now compare the achieved performances with the co-design method using the modified Kalman filter from Suh et al. (2007) for the case μ = 1.2. As our disturbance is not Gaussian, there is no systematic way of choosing the covariance to be used in the Kalman filter. However, in order to compare both approaches, we test this proposal for a covariance matrix of the form W = wI, for different values of w. For w = 0.1, we obtain a pair of thresholds Δ = [0.9 0.5] that leads to a much lower network usage than our approach. However, the performance obtained after simulation with the indicated disturbance is ‖z[t]‖_RMS = 0.4174, thus violating the design constraint ‖z[t]‖_RMS < 1.2 · z̃_0 = 0.267. We now focus on the design with w = 0.02, obtained by computing the covariance of the disturbance generated for simulation. Applying the Kalman filter for Δ = [0.236 0.098] (the same values as those obtained with C3 and μ = 1.2), we obtain after simulation ‖z[t]‖_RMS = 1.5 · z̃_0, which indicates that the performance is degraded with respect to the H∞ approach.
This shows that the proposed framework guarantees robustness against a wider class of disturbances and measurement noises, which need not be uncorrelated Gaussian signals.
In conclusion, this example shows that if no information about the output is available, the deterministic approach is the only option. However, by making some assumption on the output distribution, we can use a stochastic approach during the co-design procedure, which reduces the resulting network usage. We have also shown that using a jump observer reduces the measurement transmissions at the expense of more computational complexity with respect to the use of a constant gain.

Experimental application example
In this example, we show the behaviour of the proposed observer in a real application, see Figure 6. The process under study consists of a DC motor with an incremental encoder and an H-bridge driver module based on an L298. The plant has three nodes connected through a controller area network (CAN) (as shown in Figure 6). Two of the nodes are Texas Instruments TMS320LF2407 microcontrollers. One of them reads the encoder signal to compute the shaft speed and periodically runs the SOD mechanism, deciding whether to send the measurement after comparing it with the last sent one. The other node receives messages with the voltage to be applied to the motor and periodically generates the corresponding pulse-width modulation signal and the digital signals applied to the H bridge. The third node connected to the network is implemented in an industrial computer with a CAN card running xPC Target from MathWorks. The computer reads the message containing the shaft speed, uses that measurement to observe the system state, and runs a speed controller that sends the resulting control action to the actuator node at each instant. The sensor, actuator and observer/controller nodes update the required values for the velocity control every 5 ms.
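The periodic SOD rule run by the sensor node can be sketched as follows (illustrative Python; the real node is firmware on the TMS320LF2407):

```python
class SendOnDeltaSensor:
    """Periodic send-on-delta rule executed once per sampling period (5 ms)."""

    def __init__(self, delta):
        self.delta = delta              # threshold in rad/s
        self.y_sent = None              # last value put on the CAN bus

    def step(self, y):
        """Return the speed measurement to transmit, or None to stay silent."""
        if self.y_sent is None or abs(y - self.y_sent) > self.delta:
            self.y_sent = y             # update last-sent value and transmit
            return y
        return None                     # deviation within delta: no message

sensor = SendOnDeltaSensor(delta=2.0)   # threshold obtained in the co-design
msgs = [sensor.step(y) for y in (40.0, 41.0, 43.5, 43.0)]
# msgs == [40.0, None, 43.5, None]: 41.0 and 43.0 lie within 2 rad/s of the
# last sent value, so no message is generated for them
```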
After performing an identification experiment, we obtain the transfer function from voltage to angular velocity of the motor. Note that we have an additional delay of 10 ms due to the network behaviour (in addition to the inherent one-sample delay in digital control). A PI controller with K_p = 0.25, K_i = 7.78 and reference weighting b = 0.7 is applied with a 5 ms period. The reference is generated as a square wave between 40 and 50 rad/s with a period of 1 s. Our aim is to compare the result of applying the PI controller directly with the available measurements (using the same measurement repeatedly while no new one is available) with that of applying a PI controller that uses the output estimated with the proposed observer (inferential PI controller). A zero-order-hold discrete equivalent model is obtained from the continuous model in order to apply the methodology of this work to design the observer, giving us matrices A, B and C; as we are interested in estimating the output, we fix C_z = C. Regarding the disturbances in the system, we assume a disturbance entering the input channel bounded by ‖w[t]‖_RMS = 0.1 V and a measurement noise bounded by ‖v[t]‖_RMS = 0.5 rad/s due to the encoder accuracy.
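The zero-order-hold discretisation step can be sketched for a first-order approximation of the motor. The gain and time constant below are placeholders, since the identified transfer function is not reproduced here.

```python
import math

def zoh_first_order(k, tau, T):
    """ZOH discrete equivalent of G(s) = k / (tau*s + 1):
    x[t+1] = A x[t] + B u[t], y[t] = x[t]."""
    A = math.exp(-T / tau)              # state transition over one period
    B = k * (1.0 - A)                   # preserves the DC gain: B/(1-A) = k
    return A, B

# hypothetical motor parameters, 5 ms sampling period
A, B = zoh_first_order(k=10.0, tau=0.1, T=0.005)
```

For the real plant, the identified transfer function (including the delays mentioned above) would be discretised instead, yielding the matrices A, B and C used in the observer design.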
We fix the desired output estimation error as z̃_rms = 0.5 rad/s and perform the co-design procedure with the stochastic approach under the uniform assumption, taking r = 5 rad/s, obtaining both the gains of the observer and the threshold Δ = 2 rad/s for the sensor node. With these values, we implement the SOD mechanism on the sensor node, and the PI controller and observer on the industrial computer. The upper-left part of Figure 7 shows the behaviour of the controlled motor when applying the PI controller directly with the received measurements (which are held constant between new arrivals), while the upper-right part shows the outcome when the PI controller uses the output of the observer as its feedback signal. Thin lines show the measurement at the sensor node, circles the measurements received at the controller node, and thick lines the estimated output. The lower part of the figure shows the resulting control actions for both approaches. We observe that using the observer and the estimated output in the PI controller leads to a tracking error of 5%, but avoids the limit cycle produced by the PI controller with SOD measurements observed in the left figure (see Beschi, Dormido, Sanchez, and Visioli (2012) for further details on this issue). Furthermore, as an indirect effect, the number of transmissions from the sensor node is reduced.

Conclusions
In this work, we addressed an observer co-design procedure for state estimation over networks using event-based measurements. We used a low computational cost estimation strategy consisting of a simple SOD strategy on the sensor nodes and an H∞ jump linear estimator that selects a gain within a predefined set depending on the combination of measurements available at each instant. We included a virtual noise to update the state estimation when new measurements are not available. We developed an LMI-based strategy to obtain the observer gains when the thresholds of the sensor nodes are fixed. To reduce conservativeness, we derived a lower bound on the probability of receiving a measurement and an upper bound on the RMS norm of the resulting virtual noise. In this case, we addressed the design of the jump observer using optimisation over polynomials to include the uncertainty in the measurement reception probability.
We then defined two characterisations of the network usage and used them to derive the co-design problem, which consists of finding the thresholds of the sensor nodes and the corresponding observer gains that lead to the lowest network usage while guaranteeing a prescribed performance on the state estimation error.
As future research, we will address the co-design in inferential control and fault diagnosis problems based on observers that use SOD measurements.

Daniel Dolz was born in Castelló, Spain, in 1988. He received his MSc degree in industrial engineering and his PhD in industrial technologies from the Universitat Jaume I of Castelló, Spain, in 2011 and 2014, respectively. He also holds an MSc degree in automatic and electronic engineering from INSA Toulouse, France (2011). He is currently a postdoctoral fellow at the Department of Industrial Systems Engineering and Design at the Universitat Jaume I of Castelló, Spain. His research interests include estimation, fault diagnosis and control over networks, and wireless sensor networks.

Julio Ariel Romero Pérez was born in Santa Clara, Cuba, in 1972. He received his automatic control engineering degree and MSc degree in automatic control from the Central University of Las Villas (Cuba) in 1995 and 1998, respectively. He earned his PhD degree in control systems and industrial computing from the Technical University of Valencia (Spain) in 2004. At present he teaches courses on process automation in the Department of Industrial Systems Engineering and Design at Jaume I University (Spain), where he has been an associate professor since 2004. His research interests are algorithms for estimation and control in distributed systems and the development of methods for auto-tuning of industrial controllers. He is also interested in the field of engineering education, particularly in the application of active learning methodologies and the development of self-learning support systems.
Roberto Sanchis was born in Genovés, València, Spain, in 1968. He received his MSc degree in electrical engineering and his PhD in control engineering from the Polytechnic University of València (UPV), Spain, in 1993 and 1999, respectively. He was awarded the first national prize for university graduation in 1993. During 1994 and 1995 he was a teaching assistant at the Systems and Control Engineering Department of the UPV. Since 1996 he has been working at the University Jaume I of Castelló, Spain, where he is an associate professor (tenured lecturer) at the Department of Industrial Systems Engineering and Design and leads the control systems and automation group. His research interests include missing-data estimation, identification and control, networked control systems, tuning and auto-tuning of PID controllers, and applications (ceramic industry and wastewater treatment).