Recovering a Continuous-Time Linear System from a Discrete-Time System
CONTROLLABILITY, OBSERVABILITY, AND DISTANCE TO UNCONTROLLABILITY
BISWA NATH DATTA, in Numerical Methods for Linear Control Systems, 2004
6.3 OBSERVABILITY: DEFINITIONS AND BASIC RESULTS
In this section we state definitions and basic results of observability. The results will not be proved here because they can be easily proved by duality of the results on controllability proved in the previous section.
6.3.1 Observability of a Continuous-Time System
The concept of observability is dual to the concept of controllability.
Definition 6.3.1.
The continuous-time system (6.2.1) is said to be observable if there exists t1 > 0 such that the initial state x(0) can be uniquely determined from the knowledge of u(t) and y(t) for all t, 0 ≤ t ≤ t1.
Remark
- The observability of the system (6.2.1) is often referred to as the observability of the pair (A, C).
Analogous to the case of controllability, we state the following criteria of observability.
Theorem 6.3.1. Criteria for Continuous-Time Observability.
The following are equivalent:
- (i) The system (6.2.1) is observable.
- (ii) The observability matrix

  $$O_M = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix}$$

  has full rank n.
- (iii) The matrix

  $$W_O = \int_0^{t_1} e^{A^T t}\, C^T C\, e^{At}\, dt$$

  is nonsingular for any t1 > 0.
- (iv) The matrix

  $$\begin{pmatrix} A - \lambda I \\ C \end{pmatrix}$$

  has rank n for every eigenvalue λ of A.
- (v) None of the eigenvectors of A is orthogonal to all the rows of C; that is, if (λ, y) is an eigenpair of A, then Cy ≠ 0.
- (vi) There exists a matrix L such that the eigenvalues of A + LC can be assigned arbitrarily, provided that the complex eigenvalues occur in conjugate pairs.
We only prove (iii) ⇔ (i) and leave the others as an exercise ( Exercise 6.6 ).
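As a small numerical illustration of criterion (ii), one can stack C, CA, …, CA^{n−1} and test its rank; a minimal Python/NumPy sketch (the helper names are ours, not the book's):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) row-wise into the observability matrix."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_observable(A, C):
    """Criterion (ii): the pair (A, C) is observable iff the rank equals n."""
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

# Double integrator observed through the first state: observable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
```

Observing only the second state instead (C = [0 1]) loses the first one, and the rank test detects this.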
Theorem 6.3.2.
The pair (A, C) is observable if and only if the matrix W_O is nonsingular for any t1 > 0.
Proof. First suppose that the matrix W_O is nonsingular. Since y(t) and u(t) are known, we can assume, without any loss of generality, that u(t) = 0 for every t. Thus,

$$y(t) = C e^{At} x(0), \qquad 0 \le t \le t_1.$$

Premultiplying by e^{A^T t} C^T and integrating from 0 to t_1, this gives

$$\int_0^{t_1} e^{A^T t} C^T y(t)\, dt = \left( \int_0^{t_1} e^{A^T t} C^T C e^{At}\, dt \right) x(0) = W_O\, x(0).$$

Thus, x(0) is uniquely determined and is given by

$$x(0) = W_O^{-1} \int_0^{t_1} e^{A^T t} C^T y(t)\, dt.$$
Conversely, if W_O is singular, then there exists a nonzero vector z such that W_O z = 0. Then z^T W_O z = ∫₀^{t₁} ‖Ce^{At}z‖² dt = 0, which in turn implies that Ce^{At}z = 0 for all t in [0, t_1]. So, for every τ in [0, t_1],

$$y(\tau) = C e^{A\tau}\left(x(0) + z\right) = C e^{A\tau} x(0).$$

Thus, x(0) cannot be determined uniquely (both x(0) and x(0) + z produce the same output), implying that (A, C) is not observable.
Component observability. As in the case of controllability, we can also speak of component observability when not all the states are observable from a given output. The rank of the observability matrix

$$\begin{pmatrix} c_j \\ c_j A \\ \vdots \\ c_j A^{n-1} \end{pmatrix},$$

where c_j is the jth row of the output matrix C, determines the number of states that are observable from the output y_j(t).
6.3.2 Observability of a Discrete-Time System
Definition 6.3.2.
The discrete-time system (6.2.6) is said to be observable if there exists an index N such that the initial state x_0 can be completely determined from the knowledge of the inputs u_0, u_1, …, u_{N−1} and the outputs y_0, y_1, …, y_N.
The criteria of observability in the discrete-time case are the same as in the continuous-time case, and therefore, will not be repeated here.
REVIEWS
Wing-Kuen Ling, in Nonlinear Digital Filters, 2007
Controllability and observability
For a continuous-time system, assume that x(0) = 0. If for every x_1 there exist t_1 > 0 and an input u(t) such that x(t_1) = x_1, then the continuous-time system is said to be reachable. Similarly, for a discrete-time system, assume that x(0) = 0. If for every x_1 there exist n_1 > 0 and an input u(n) such that x(n_1) = x_1, then the discrete-time system is said to be reachable. For a continuous-time system, if for every pair x_0, x_1 there exist t_1 > 0 and u(t) such that x(0) = x_0 and x(t_1) = x_1, then the continuous-time system is said to be controllable. Similarly, for a discrete-time system, if for every pair x_0, x_1 there exist n_1 > 0 and u(n) such that x(0) = x_0 and x(n_1) = x_1, then the discrete-time system is said to be controllable. For LTI systems, the set of reachable states is R([B AB … A^{n−1}B]), where R(A) is defined as the range of A, that is, R(A) = {y : y = Ax}. Also, an LTI system is controllable if and only if R([B AB … A^{n−1}B]) = R^n, or in other words, rank([B AB … A^{n−1}B]) = n.
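The rank condition quoted at the end of the paragraph is easy to check numerically; a minimal Python/NumPy sketch (helper names ours) of rank([B AB … A^{n−1}B]) = n:

```python
import numpy as np

def controllability_matrix(A, B):
    """Form [B, AB, ..., A^(n-1)B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Controllable iff the controllability matrix has rank n."""
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# Double integrator driven through the second state: controllable.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
```

Driving the first state instead (B = [1, 0]^T) leaves the second state unreachable, and the rank drops to 1.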
For a continuous-time system, if there exists t_1 > 0 such that any initial state x_1 can be determined from the knowledge of u(t) and y(t) for 0 ≤ t ≤ t_1, then the continuous-time system is said to be observable. Similarly, for a discrete-time system, if there exists n_1 > 0 such that any initial state x_1 can be determined from the knowledge of u(n) and y(n) for 0 ≤ n ≤ n_1, then the discrete-time system is said to be observable. For LTI systems, the set of unobservable states is

$$N\!\begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix},$$

where N(A) is defined as the null space (the kernel) of A, that is, N(A) ≡ {x : Ax = 0}. Also, an LTI system is observable if and only if this null space is {0}, that is, rank([C; CA; …; CA^{n−1}]) = n.
STABILITY, INERTIA, AND ROBUST STABILITY
BISWA NATH DATTA, in Numerical Methods for Linear Control Systems, 2004
7.10 SUMMARY AND REVIEW
The stability of the continuous-time system

$$\dot{x}(t) = A\, x(t)$$

or that of the discrete-time system

$$x(k+1) = A\, x(k)$$

is essentially governed by the eigenvalues of the matrix A.
Mathematical Criteria of Stability
The continuous-time system is asymptotically stable if and only if the eigenvalues of A are all in the open left half plane (Theorem 7.2.1). Similarly, the discrete-time system x(k + 1) = Ax(k) is asymptotically stable if and only if all the eigenvalues of A are inside the unit circle (Theorem 7.3.1). Various Lyapunov stability theorems (Theorems 7.2.3–7.2.9 and Theorem 7.3.2) have been stated and proved.
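Both eigenvalue criteria are one-liners to check numerically; a Python/NumPy sketch (function names ours):

```python
import numpy as np

def ct_asymptotically_stable(A):
    """Continuous time: every eigenvalue of A in the open left half plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def dt_asymptotically_stable(A):
    """Discrete time: every eigenvalue of A strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))
```

As discussed below, computing all eigenvalues gives more than stability needs, but it is the numerically reliable route.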
The Inertia of a Matrix
Two important inertia theorems (Theorems 7.4.1 and 7.4.2) and the classical Sylvester Law of Inertia have been stated. These inertia theorems generalize the Lyapunov stability results.
Methods for Determining Stability and Inertia
The Characteristic Polynomial Approach and the Matrix Equation Approach are two classical approaches for determining the stability of a system and the inertia of a matrix. Both these approaches have some computational drawbacks.
The zeros of a polynomial may be extremely sensitive to small perturbations. Furthermore, the numerical methods to compute the characteristic polynomial of a matrix are usually unstable.
The most numerically effective method for solving a Lyapunov matrix equation (the Schur method, described in Chapter 8) is based on reduction of the matrix A to real Schur form (RSF); the RSF displays the eigenvalues of A, or they can be computed trivially from this form.
Thus, the characteristic equation approach is not numerically viable and the matrix equation approach for stability and inertia is counterproductive.
Hence, the most numerically effective approach for stability and inertia is the eigenvalue approach: compute all the eigenvalues of A.
By explicitly computing the eigenvalues, one, however, gets much more than what is needed for stability and inertia. Furthermore, since the eigenvalues of a matrix can be very sensitive to small perturbations, determining the inertia and stability by computing explicitly the eigenvalues can be misleading.
An implicit matrix equation approach (Algorithm 7.5.1), which requires neither the computation of eigenvalues nor the explicit solution of any matrix equation, has been described. According to the flop count, Algorithm 7.5.1 is about three times faster than the eigenvalue method.
Distance to an Unstable System
Given a stable matrix A, the quantity β(A) defined by

$$\beta(A) = \min \left\{ \|E\|_2 : A + E \in U \right\},$$

where U is the set of n × n matrices with at least one eigenvalue on the imaginary axis, is the distance to the set of unstable matrices.
A bisection algorithm (Algorithm 7.6.1), based on detecting whether a certain Hamiltonian matrix (the matrix (7.6.3)) has a purely imaginary eigenvalue, is described. The algorithm is based on Theorem 7.6.1, which displays a relationship between a spectral property of the Hamiltonian matrix and the quantity β(A).
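The bisection idea can be sketched as follows (this is not the book's Algorithm 7.6.1 verbatim; the Hamiltonian form used, the σ_min(A) upper bound, and the tolerances are our choices): for σ ≥ β(A), the Hamiltonian matrix [[A, −σI], [σI, −A^T]] has an eigenvalue on the imaginary axis.

```python
import numpy as np

def hamiltonian_touches_axis(A, sigma, tol=1e-8):
    """Test whether [[A, -sigma I], [sigma I, -A^T]] has a (numerically)
    purely imaginary eigenvalue."""
    n = A.shape[0]
    I = np.eye(n)
    H = np.block([[A, -sigma * I],
                  [sigma * I, -A.T]])
    return bool(np.any(np.abs(np.linalg.eigvals(H).real) < tol))

def distance_to_instability(A, tol=1e-8):
    """Bisection estimate of beta(A) for a stable real matrix A."""
    lo = 0.0
    hi = np.linalg.svd(A, compute_uv=False)[-1]  # beta(A) <= sigma_min(A)
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if hamiltonian_touches_axis(A, mid):
            hi = mid  # mid >= beta(A)
        else:
            lo = mid  # mid < beta(A)
    return 0.5 * (lo + hi)
```

For a normal matrix, β(A) equals the distance from the spectrum to the imaginary axis, which gives an easy sanity check.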
The discrete analog of β(A) is defined to be

$$\beta(A) = \min \left\{ \|E\|_2 : A + E \text{ has an eigenvalue on the unit circle} \right\}.$$
An analog of Theorem 7.6.1 (Theorem 7.6.3) is stated, and a bisection algorithm (Algorithm 7.6.2) based on this theorem is described.
Robust Stability
Given a stable matrix A, one naturally wonders whether the matrix A + E remains stable, where E is a certain perturbation matrix. Two bounds on E guaranteeing the stability of the perturbed matrix A + E are given in terms of solutions of certain Lyapunov equations (Theorems 7.7.1 and 7.7.2).
Stability Radius
Section 7.8 deals with the structured stability radius. If the perturbations are of the form BΔC, where Δ is an unknown perturbation matrix, then it is of interest to know the size of the smallest Δ (measured by the 2-norm) that will destabilize the perturbed matrix A + BΔC. In this context, the concept of stability radius is introduced, and formulas for both the complex stability radius (Theorem 7.8.1) and the real stability radius are stated.
H2-Norm
The H2-norm of a stable transfer function measures the steady-state covariance of the output response y = Gv to the white noise input v. An algorithm (Algorithm 7.2.1) for computing the H2-norm, based on computing the controllability or observability Grammian via Lyapunov equations, is given.
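A sketch of the Grammian route in Python/NumPy (here the Lyapunov equation AP + PA^T + BB^T = 0 is solved with a Kronecker-product trick rather than the Schur-based method a production code would use):

```python
import numpy as np

def h2_norm(A, B, C):
    """||G||_2 = sqrt(trace(C P C^T)), where P is the controllability
    Grammian solving A P + P A^T + B B^T = 0 (A must be stable)."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A P + P A^T) = (I kron A + A kron I) vec(P), column-major vec.
    K = np.kron(I, A) + np.kron(A, I)
    vec_p = np.linalg.solve(K, -(B @ B.T).reshape(-1, order="F"))
    P = vec_p.reshape(n, n, order="F")
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

For G(s) = 1/(s + 1), i.e., A = −1, B = C = 1, the Grammian is P = 1/2 and the H2-norm is 1/√2.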
Continuous-Time Systems
Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019
2.3.2 Time-Invariance
A continuous-time system is time-invariant if, whenever an input x(t) produces a corresponding output y(t), the output corresponding to a shifted input x(t − τ) (delayed or advanced) is the original output equally shifted in time, y(t − τ). Thus

(2.9) if x(t) → y(t), then x(t − τ) → y(t − τ).
That is, the system does not age—its parameters are constant.
A system that satisfies both the linearity and the time-invariance is called Linear Time-Invariant or LTI.
Remarks
- 1. It should be clear that linearity and time-invariance are independent of each other. Thus, one can have linear time-varying or non-linear time-invariant systems.
- 2. Although most actual systems are, according to the above definitions, non-linear and time-varying, linear models are used to approximate (around an operating point) the non-linear behavior, and time-invariant models are used to approximate (in short segments) the system's time-varying behavior. For instance, in speech synthesis the vocal system is typically modeled as a linear time-invariant system for intervals of about 20 ms, in an attempt to approximate the continuous variation in shape of the different parts of the vocal system (mouth, cheeks, nose, etc.). A better model for such a system is clearly a linear time-varying model.
- 3. In many cases time invariance can be determined by identifying, if possible, the input and the output, and letting the rest represent the parameters of the system. If these parameters change with time, the system is time-varying. For instance, if the input x(t) and the output y(t) of a system are related by the equation y(t) = f(t)x(t), the parameter of the system is the function f(t), and if it is not constant, the system is time-varying. Thus, the system y(t) = Ax(t), where A is a constant, is time-invariant, as can be easily verified. But the AM modulation system given by y(t) = x(t)cos(Ω₀t) is time-varying, as the function cos(Ω₀t) is not constant.
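The remarks above can be checked numerically by comparing "shift the input, then apply the system" against "apply the system, then shift the output"; a small Python sketch (the test signal, gain, carrier, and shift are arbitrary choices of ours):

```python
import math

def shift(x, tau):
    """Return the delayed signal t -> x(t - tau)."""
    return lambda t: x(t - tau)

# Time-invariant system: a constant gain y(t) = 2 x(t).
gain = lambda x: (lambda t: 2.0 * x(t))
# Time-varying system: AM modulator y(t) = x(t) cos(t).
am = lambda x: (lambda t: x(t) * math.cos(t))

x = lambda t: math.sin(0.5 * t)
tau, t0 = 0.7, 1.3
gain_mismatch = abs(gain(shift(x, tau))(t0) - shift(gain(x), tau)(t0))
am_mismatch = abs(am(shift(x, tau))(t0) - shift(am(x), tau)(t0))
```

The gain commutes with the shift (zero mismatch), while the modulator does not, exactly as remark 3 predicts.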
The Beginnings of Radio
The names of Nikola Tesla (1856–1943) and Reginald Fessenden (1866–1932) are linked to the invention of amplitude modulation and radio [65,78,7]. Radio was initially called "wireless telegraphy" and then "wireless" [26].
Tesla was a mechanical as well as an electrical engineer, but mostly an inventor. He has been credited with significant contributions to electricity and magnetism in the late 19th and early 20th century. His work is the basis of the alternating current (AC) power system and the induction motor. His wireless communication system using "Tesla coils" was capable of transmitting and receiving radio signals. Although Tesla submitted a patent application for his basic radio before Guglielmo Marconi, it was Marconi who was initially given the patent for the invention of radio (1904). The Supreme Court in 1943 reversed the decision in favor of Tesla [6].
Fessenden has been called the "father of radio broadcasting." His early work on radio led to demonstrations in December 1906 of the capability of point-to-point wireless telephony, and what appears to be the first audio radio broadcast of entertainment and music ever made to an audience (in this case shipboard radio operators in the Atlantic). Fessenden was a professor of electrical engineering at Purdue University and the first chairman of the electrical engineering department of the University of Pittsburgh in 1893.
AM Communication System
Amplitude modulation (AM) communication systems arose from the need to send an acoustic signal, a "message", over the airwaves using a reasonably sized antenna to radiate it. The size of the antenna depends inversely on the highest frequency present in the message, and voice and music have relatively low frequencies. A voice signal typically has frequencies in the range of 100 Hz to about 5 kHz (the frequencies needed to make a telephone conversation intelligible) while music typically displays frequencies up to about 22 kHz. The transmission of such signals with a practical antenna is impossible. To make the transmission possible, modulation was introduced, i.e., multiplying the message x(t) by a periodic signal such as a cosine cos(Ω₀t), the carrier, with frequency Ω₀ much larger than the frequencies present in the acoustic signal. Amplitude modulation provided the larger frequencies needed to reduce the size of the antenna. Thus y(t) = x(t)cos(Ω₀t) is the signal to be transmitted, and the effect of this multiplication is to change the frequency content of the input.
The AM system is clearly linear, but time-varying. Indeed, if the input is x(t − τ), the message delayed τ seconds, the output would be x(t − τ)cos(Ω₀t), which is not y(t − τ) = x(t − τ)cos(Ω₀(t − τ)) as a time-invariant system would give. Fig. 2.4 illustrates the AM transmitter and receiver. The carrier continuously changes independently of the input, and as such the system is time-varying.
Figure 2.4. AM modulation: transmitter and receiver.
FM Communication System
In comparison with an AM system, a frequency modulation (FM) system is non-linear and time-varying. An FM modulated signal is given by

$$y(t) = \cos\left(\Omega_c t + \int_{-\infty}^{t} m(\tau)\, d\tau\right),$$

where m(t) is the input message.
To show the FM system is non-linear, assume we scale the message to γm(t), for some constant γ; the corresponding output is given by

$$\cos\left(\Omega_c t + \gamma \int_{-\infty}^{t} m(\tau)\, d\tau\right),$$

which is not the previous output scaled, i.e., γy(t); thus FM is a non-linear system. Likewise, if the message is delayed or advanced, the output will not be equally delayed or advanced; thus the FM system is not time-invariant.
Vocal System
A remarkable system that we all have is the vocal system (see Fig. 2.5). The air pushed out from the lungs in this system is directed by the trachea through the vocal cords, making them vibrate and create resonances similar to those from a musical wind instrument. The generated sounds are then muffled by the mouth and the nasal cavities, resulting in an acoustic signal carrying a message. Given the length of the typical vocal system (on average about 17 cm for adult males and 14 cm for adult females), it is modeled as a distributed system and represented by partial differential equations. Due to the complexity of this model, it is the speech signal, along with the understanding of speech production, that is used to obtain models of the vocal system. Speech processing is one of the most fascinating areas of electrical engineering.
Figure 2.5. Vocal system: principal organs of articulation. Model for speech production.
A typical linear time-invariant model for speech production considers segments of speech of about 20 ms, and for each a low-order LTI system is developed. The input is either a periodic pulse for the generation of voiced sounds (e.g., vowels) or a noise-like signal for unvoiced sounds (e.g., the /sh/ sound). Processing these inputs gives speech-like signals. A linear time-varying model would take into consideration the variations of the vocal system with time and it would be thus more appropriate.
Example 2.4
Consider constant linear capacitors and inductors, represented by the ordinary differential equations

$$i(t) = C \frac{dv_c(t)}{dt}, \qquad v(t) = L \frac{di_L(t)}{dt},$$

with initial conditions v_c(0) and i_L(0). Under what conditions are these time-invariant systems?
Solution: Given the duality of the capacitor and the inductor, we only need to consider one of these. Solving the ordinary differential equation for the capacitor we obtain (with initial voltage v_c(0)):

$$v_c(t) = v_c(0) + \frac{1}{C}\int_0^t i(\tau)\, d\tau, \qquad t \ge 0.$$

Suppose then we delay the input current by λ s. The corresponding output is given by

(2.10) $$v_c(0) + \frac{1}{C}\int_0^t i(\tau - \lambda)\, d\tau = v_c(0) + \frac{1}{C}\int_{-\lambda}^{0} i(\rho)\, d\rho + \frac{1}{C}\int_{0}^{t-\lambda} i(\rho)\, d\rho,$$

by changing the integration variable to ρ = τ − λ. For the above equation to equal the voltage across the capacitor delayed λ s, given by

$$v_c(t - \lambda) = v_c(0) + \frac{1}{C}\int_0^{t-\lambda} i(\rho)\, d\rho,$$

we need that i(t) = 0 for t < 0, so that the first integral in the right expression in Equation (2.10) is zero. Thus, the system is time-invariant if the input current i(t) = 0 for t < 0. Putting this together with the condition on the initial voltage, we can say that for the capacitor to be a linear and time-invariant system it should not be initially energized; i.e., the input i(t) = 0 for t < 0 and no initial voltage across the capacitor, v_c(0) = 0. Similarly, using duality, for the inductor to be linear and time-invariant, the input voltage across the inductor must satisfy v(t) = 0 for t < 0 (to guarantee time-invariance) and the initial current in the inductor must be i_L(0) = 0 (to guarantee linearity). □
Example 2.5
Consider the RLC circuit in Fig. 2.6, consisting of a series connection of a resistor R, an inductor L, and a capacitor C. The switch has been open for a very long time and is closed at t = 0, so that there is no initial energy stored in either the inductor or the capacitor, and the voltage applied to the elements is zero for t < 0. Obtain an equation connecting the input v_s(t) and the output current i(t).
Figure 2.6. RLC circuit with a switch for setting the initial conditions to zero.
Solution: Because of the presence of the capacitor and the inductor, both capable of storing energy, this circuit is represented by a second-order ordinary differential equation with constant coefficients. According to Kirchhoff's voltage law:

$$v_s(t) = R\, i(t) + L \frac{di(t)}{dt} + \frac{1}{C}\int_0^t i(\tau)\, d\tau, \qquad t \ge 0.$$

To get rid of the integral we find the derivative of v_s(t) with respect to t:

$$\frac{dv_s(t)}{dt} = R \frac{di(t)}{dt} + L \frac{d^2 i(t)}{dt^2} + \frac{1}{C}\, i(t),$$

a second-order ordinary differential equation, with input the voltage source v_s(t) and output the current i(t). Because the circuit is not energized for t < 0, the circuit represented by the above ordinary differential equation is linear and time-invariant. □
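The same circuit can equally be written in state–space form; a sketch (the choice of states x = [i, v_C] is ours, not the book's) whose eigenvalues confirm stability for positive R, L, C:

```python
import numpy as np

def rlc_state_space(R, L, C):
    """Series RLC with state x = [i, v_C] and input v_s:
       L di/dt = v_s - R i - v_C,   C dv_C/dt = i."""
    A = np.array([[-R / L, -1.0 / L],
                  [1.0 / C, 0.0]])
    B = np.array([[1.0 / L],
                  [0.0]])
    return A, B

A, B = rlc_state_space(R=1.0, L=0.5, C=0.25)
```

For any positive element values, both eigenvalues of A lie in the open left half plane, consistent with the passive circuit dissipating its stored energy.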
Remark
An RLC circuit is represented by an ordinary differential equation of order equal to the number of independent inductors and capacitors in the circuit. If two or more capacitors (two or more inductors) are connected in parallel (in series), sharing the same initial voltage across them (same initial current) we can convert them into an equivalent capacitor with the same initial voltage (equivalent inductor with the same current).
A system represented by a linear ordinary differential equation of any order N, having constant coefficients, and with input x(t) and output y(t):

(2.11) $$\sum_{k=0}^{N} a_k \frac{d^k y(t)}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x(t)}{dt^k},$$

is linear time-invariant if the system is not initially energized (i.e., the initial conditions are zero and the input x(t) is zero for t < 0). If one or more of the coefficients a_k, b_k are functions of time, the system is time-varying.
Techniques in Discrete and Continuous Robust Systems
Ahmad A. Mohammad, J.A. De Abreu-Garcia, in Control and Dynamic Systems, 1996
3 Theoretical Aspects of the LBCT
This is best summarized in the following theorem:
Theorem: For any given stable, minimal, balanced DTS (Adb, Bdb, Cdb, Dd, Σ, T), with distinct second-order modes, there exists a unique, minimal, stable, and balanced CTS model (Acb, Bcb, Ccb, Dc = Dd, Σ), where Acb is the unique solution of the CT Lyapunov equations

$$A_{cb} \Sigma + \Sigma A_{cb}^T = -B_{cb} B_{cb}^T, \qquad A_{cb}^T \Sigma + \Sigma A_{cb} = -C_{cb}^T C_{cb}.$$
Moreover, the CTS approximation of the DTS model has the following properties:
- 1) The CTS model is minimal and stable if the DTS model is minimal and stable.
- 2) There exists a solution for Eqs. (84)–(85), for a fixed choice of T, B, and C.
- 3) The solution of 2) is unique.
- 4) The H2 norm of the DTS is equal to the H2 norm of the CTS (notice that we only deal with the strictly proper part).
- 5) The initial error in the step response is identically zero.
Proof:
- (1) Guaranteed by the Lyapunov equations.
- (2, 3) Proved in Section 2.
- (4) A direct consequence of matching the Hankel singular values of the DT and CT systems.
- (5) First, it is noted that both systems share the same feedthrough term D. Moreover, from the initial value theorem, the initial response of the strictly proper part is identically zero.
State–space representation
M. Sami Fadali, Antonio Visioli, in Digital Control Engineering (Third Edition), 2020
7.9 Similarity transformation
Any given linear system has an infinite number of valid state–space representations. Each representation corresponds to a different set of basis vectors in state–space. Some representations are preferred because they reveal certain system properties, whereas others may be convenient for specific design tasks. This section considers transformation from one representation to another.
Given a state vector x(k) with state–space representation of the form of Eq. (7.59), we define a new state vector z(k) via

(7.99) $$x(k) = T_r\, z(k),$$

where the transformation matrix T_r is assumed invertible. Substituting for x(k) from Eq. (7.99) in the state Eq. (7.59) and the output Eq. (7.67) gives

(7.100) $$T_r\, z(k+1) = A\, T_r\, z(k) + B\, u(k), \qquad y(k) = C\, T_r\, z(k) + D\, u(k).$$

Premultiplying the state equation by T_r^{-1} gives

(7.101) $$z(k+1) = T_r^{-1} A\, T_r\, z(k) + T_r^{-1} B\, u(k).$$

Hence, we have the state–space quadruple for the state vector z(k)

(7.102) $$\left( T_r^{-1} A\, T_r,\; T_r^{-1} B,\; C\, T_r,\; D \right).$$

Clearly, the quadruple for the state vector x(k) can be obtained from the quadruple of z(k) using the inverse of the transformation T_r (i.e., the matrix T_r^{-1}).
For the continuous-time system of Eq. (7.3), a state vector z(t) can be defined as

(7.103) $$x(t) = T_r\, z(t),$$

and substitution in Eq. (7.3) yields Eq. (7.102). Thus, the discussion of similarity transformation is identical for continuous-time and discrete-time systems. We therefore drop the subscript d in the sequel.
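A numerical sketch of the quadruple in (7.102) and of what it preserves (the matrices below are arbitrary choices of ours; the convention is z = Tr⁻¹x):

```python
import numpy as np

def similarity_transform(A, B, C, D, Tr):
    """Quadruple for the new state z = Tr^{-1} x:
       (Tr^{-1} A Tr, Tr^{-1} B, C Tr, D)."""
    Ti = np.linalg.inv(Tr)
    return Ti @ A @ Tr, Ti @ B, C @ Tr, D

def tf_at(A, B, C, D, z):
    """Evaluate the transfer function C (zI - A)^{-1} B + D at a scalar z."""
    n = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(n) - A, B) + D

A = np.array([[0.0, 1.0], [-0.2, 0.9]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
Tr = np.array([[1.0, 0.0], [1.0, 1.0]])
A1, B1, C1, D1 = similarity_transform(A, B, C, D, Tr)
```

Both quadruples have the same characteristic polynomial and the same transfer function, which is exactly the invariance discussed in this section.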
Example 7.20
Consider the point mass m driven by a force f of Example 7.1, and determine the two state–space equations when m = 1 and
- 1. the displacement and the velocity are the state variables, and
- 2. the displacement and the sum of the displacement and velocity are the state variables.
Solution
1. From first principles, as shown in Example 7.1, we have that the equation of motion is

$$m\, \ddot{y}(t) = f(t), \qquad m = 1,$$

and in case 1, by considering x_1 as the displacement and x_2 as the velocity, we can write

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = f.$$

Thus, with the displacement as the output, we obtain the realization

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 0 \end{pmatrix}.$$
- 2. We express the new state variables, z_1 and z_2, in terms of the state variables of part 1 as

$$z_1 = x_1, \qquad z_2 = x_1 + x_2.$$

This yields the inverse of the transformation matrix

$$T_r^{-1} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$

Using Eq. (7.102) gives the realization

$$T_r^{-1} A\, T_r = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad T_r^{-1} B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad C\, T_r = \begin{pmatrix} 1 & 0 \end{pmatrix}.$$
To obtain the transformation to diagonal form, we recall the expression

(7.104) $$A = V \Lambda V^{-1},$$

where V is the modal matrix of eigenvectors of A and Λ = diag{λ_1, λ_2, …, λ_n} is the matrix of eigenvalues of A. Thus, to obtain T_r^{-1} A T_r = Λ in Eq. (7.102), we use the modal matrix of A as the transformation matrix. The form thus obtained is not necessarily the same as the diagonal form obtained in Section 8.5.3 from the transfer function using partial fraction expansion. Even though all diagonal forms share the same state matrix, their input and output matrices may be different.
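The modal transformation can be sketched in a few lines (the example matrix below is an arbitrary choice of ours with distinct eigenvalues):

```python
import numpy as np

# With V the modal matrix of eigenvectors of A, V^{-1} A V = Lambda.
A = np.array([[0.0, 1.0],
              [-0.02, 0.3]])  # characteristic polynomial (s - 0.1)(s - 0.2)
lam, V = np.linalg.eig(A)
Lambda = np.linalg.inv(V) @ A @ V
```

Since A here is in companion form, the columns of V are (scaled) Vandermonde columns [1, λ_i]^T, normalized to unit length by `eig`.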
Example 7.21
Obtain the diagonal form for the state–space equations
Solution
The eig command of MATLAB yields the eigenvalues and the modal matrix whose columns have unity norm
The state matrix is in companion form, so the modal matrix is also known analytically: it is the Vandermonde matrix (whose column norms need not be unity):
The MATLAB command for similarity transformation is ss2ss. It requires the inverse of the similarity transformation matrix Tr . Two MATLAB commands transform to diagonal form. The first requires the modal matrix of eigenvectors V and the system to be transformed
The second uses similarity transformation but does not require the transformation matrix
The two commands yield
For complex conjugate eigenvalues, the command canon yields a real realization but its state matrix is not in diagonal form, whereas ss2ss will yield a diagonal but complex matrix.
7.9.1 Invariance of transfer functions and characteristic equations
Similar systems can be viewed as different representations of the same system. This is justified by the following theorem.
Theorem 7.1
Similar systems have identical transfer functions and characteristic polynomials.
Proof
Consider the characteristic polynomials of similar realizations (A, B, C, D) and (A_1, B_1, C_1, D), with A_1 = T_r^{-1} A T_r, B_1 = T_r^{-1} B, and C_1 = C T_r:

$$\det(zI - A_1) = \det\!\left(T_r^{-1}(zI - A)\,T_r\right) = \det\!\left(T_r^{-1}\right) \det(zI - A) \det(T_r) = \det(zI - A),$$

where we used the identity det(T_r^{-1}) det(T_r) = 1.

The transfer function matrix is

$$C_1 (zI - A_1)^{-1} B_1 + D = C\, T_r \left(T_r^{-1}(zI - A)\,T_r\right)^{-1} T_r^{-1} B + D = C (zI - A)^{-1} B + D,$$

where we used the identity (ABC)^{-1} = C^{-1} B^{-1} A^{-1}.
Clearly, not all systems with the same transfer function are similar, due to the possibility of pole-zero cancellation. Systems that give rise to the same transfer function are said to be equivalent.
Example 7.22
Show that the following system is equivalent to the system shown in Example 7.19 (2).
Solution
The transfer function of the system is
which is identical to the reduced transfer function of Example 7.19 (2).
State estimation strategy for continuous-time systems with time-varying delay via a novel L-K functional
Feisheng Yang, ... Peipei Kang, in Control Strategy for Time-Delay Systems, 2021
5.5.2 3rd order example
Consider the third-order continuous-time system in (5.1) with time-varying delay satisfying (5.2) and
Given different performance indices γ, the maximum allowable delay upper bounds are given in Table 5.3, and the performance indices for given different time delay upper bounds τ are calculated in Table 5.4.
Table 5.3. Maximum allowable delay upper bounds τ for different γ.
| Method | γ = 10 | γ = 20 | γ = 30 | γ = 40 | γ = 50 |
| --- | --- | --- | --- | --- | --- |
| Theorem 5.2 | 0.31 | 0.33 | 0.34 | 0.37 | 0.39 |
Table 5.4. Minimum H ∞ performance index γ min for different τ.
| Method | τ = 0.3 | τ = 0.5 | τ = 0.7 | τ = 0.8 | τ = 1.0 |
| --- | --- | --- | --- | --- | --- |
| Theorem 5.2 | 7 | 48 | 58 | 62 | 85 |
For the simulation verification, we design a suitable state estimator for the system under study. Choosing the time-varying delay as shown in Fig. 5.6 and applying Theorem 5.2, we obtain the following estimator gain matrix:
Figure 5.6. The curve for the time-varying delay.
With a suitably chosen initial condition, Fig. 5.7 depicts the responses of the estimation errors associated with the above estimator. Clearly, the designed state estimator produces good estimates of the system states. The simulation results further confirm the effectiveness of Theorem 5.2 for guaranteed-performance state estimation of the continuous-time system with time-varying delay.
Figure 5.7. The state estimation errors in the 3rd order example.
12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering
Muxin Sun, ... Efstratios N. Pistikopoulos, in Computer Aided Chemical Engineering, 2015
5.2 Comparison with multiparametric programming
By discretizing the continuous-time system, the optimization problem can be formulated as a conventional multiparametric programming problem (Pistikopoulos, 2012). The critical regions, obtained by solving an mpQP in which the input is assumed constant over each time interval, are shown in Figure 4. The critical regions for the mpQP are approximations of the regions for the mpDO: the smaller the sampling time, the more closely the critical regions approach the exact solutions in Figure 2. A special region in both cases is the middle one, namely CR08 (a) or CR15 (b), which represents the case in which no constraint is active. The others all have the first couple of time intervals containing active constraints, with the same solution structure as CR02 or CR03 in Figure 2. This unifies the definition of critical regions for different branches of multiparametric programming and also provides a way to explore the solution structures of mpDO by solving approximate mpQPs.
Figure 4. Critical Regions with sampling time (a) Ts = 0.1 and (b) Ts = 0.05.
The Z-transform
Luis F. Chaparro, Aydin Akan, in Signals and Systems Using MATLAB (Third Edition), 2019
10.6 State Variable Representation
Modern control theory uses the state-variable representation of systems, whether continuous- or discrete-time. In this section we introduce the discrete-time state-variable representation of systems. In many respects, it is very similar to the state-variable representation of continuous-time systems.
State variables are the memory of a system. In discrete time, just as in continuous time, knowing the state of a system at the present index n provides the necessary information from the past that, together with present and future inputs, allows us to calculate the present and future outputs of the system. The advantages of a state-variable representation over a transfer function are the inclusion of initial conditions in the analysis and the ability to use it for multiple-input multiple-output systems.
Assume a discrete-time system is represented by a difference equation (which could have been obtained from an ordinary differential equation representing a continuous-time system) where x[n] is the input and y[n] the output:

(10.42) $$y[n] = -\sum_{k=1}^{N} a_k\, y[n-k] + b_0\, x[n], \qquad n \ge 0,$$

where the initial conditions of the system are y[−1], y[−2], …, y[−N]. As in the continuous-time case, the state-variable representation of a discrete-time system is not unique. To obtain a state-variable representation from the difference equation in (10.42), let the state variables be the delayed outputs

(10.43) $$v_k[n] = y[n - (N - k + 1)], \qquad k = 1, \ldots, N.$$
We then obtain from the difference equation the state-variable equations:

(10.44) $$v_k[n+1] = v_{k+1}[n], \quad k = 1, \ldots, N-1, \qquad v_N[n+1] = -\sum_{k=1}^{N} a_k\, v_{N-k+1}[n] + b_0\, x[n],$$

and the output equation

$$y[n] = -\sum_{k=1}^{N} a_k\, v_{N-k+1}[n] + b_0\, x[n].$$

These state and output equations can then be written in a matrix form

(10.45) $$\mathbf{v}[n+1] = A\, \mathbf{v}[n] + B\, x[n], \qquad y[n] = \mathbf{c}^T \mathbf{v}[n] + d\, x[n],$$

by appropriately defining the matrices A and B, the vectors c and d, as well as the state vector v[n] = [v_1[n] ⋯ v_N[n]]^T and the input x[n].
Fig. 10.13 displays the block diagrams for a delay, a constant multiplier, and an adder that are used in obtaining block diagrams for discrete-time systems. Different from the continuous-time representation, the representation of discrete-time systems does not require the equivalent of integrators; instead, it uses delays.
Figure 10.13. Block diagrams of different components used to represent discrete-time systems (top to bottom): delay, constant multiplier and adder.
Example 10.27
A continuous-time system is represented by the ordinary differential equation
To convert this ordinary differential equation into a difference equation we approximate the derivatives by first-order differences. Find the difference equation for the given parameter values, and find the corresponding state and output equations.
Solution: Replacing the derivatives by their approximations and simplifying, we obtain the difference equation

$$y[n] - y[n-1] + \tfrac{1}{3}\, y[n-2] = \tfrac{1}{3}\, x[n],$$

which can be realized as indicated by the block diagram in Fig. 10.14. Because the second-order difference equation is realized using two unit delays, this realization is called minimal.
Figure 10.14. Block diagram of y[n]−y[n − 1]+(1/3)y[n − 2]=(1/3)x[n] displaying the state variables v 1[n]=y[n − 2] and v 2[n]=y[n − 1].
Letting the outputs of the delays be the state variables

$$v_1[n] = y[n-2], \qquad v_2[n] = y[n-1],$$

we then have the following matrix equation for the state:

$$\begin{pmatrix} v_1[n+1] \\ v_2[n+1] \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\tfrac{1}{3} & 1 \end{pmatrix} \begin{pmatrix} v_1[n] \\ v_2[n] \end{pmatrix} + \begin{pmatrix} 0 \\ \tfrac{1}{3} \end{pmatrix} x[n].$$

The output equation is then

$$y[n] = \begin{pmatrix} -\tfrac{1}{3} & 1 \end{pmatrix} \begin{pmatrix} v_1[n] \\ v_2[n] \end{pmatrix} + \tfrac{1}{3}\, x[n].$$
The state variables are not unique. Indeed, we can use an invertible transformation matrix F to define a new set of state variables

$$\mathbf{w}[n] = F\, \mathbf{v}[n],$$

with a matrix representation for the new state variables and the output given by

$$\mathbf{w}[n+1] = F A F^{-1}\, \mathbf{w}[n] + F B\, x[n], \qquad y[n] = \mathbf{c}^T F^{-1}\, \mathbf{w}[n] + d\, x[n].$$
Example 10.28
A discrete-time system is represented by the difference equation

$$y[n] - y[n-1] - y[n-2] = x[n] + x[n-1],$$

where x[n] is the input and y[n] the output. Obtain a state-variable representation for it.

Solution: Notice that this difference equation has x[n] as well as x[n−1] in the input. The transfer function corresponding to the difference equation is

$$H(z) = \frac{1 + z^{-1}}{1 - z^{-1} - z^{-2}},$$

i.e., it is not a "constant-numerator" transfer function. A direct realization of the system in this case will not be minimal. Indeed, a block diagram obtained from the difference equation (see Fig. 10.15) shows that three delays are needed to represent this second-order system. It is important to realize that this representation, despite being non-minimal, is a valid representation. This is different from an analogous situation in the continuous-time representation where the input and its derivatives are present. Differentiators in the continuous-time representation are deemed not acceptable, while delays in the discrete-time representation are. □
Figure 10.15. Block diagram of y[n]−y[n − 1]−y[n − 2]=x[n]+x[n − 1] displaying the state variables v 1[n] and v 2[n], and corresponding to a non-minimal realization.
Solution of the State and Output Equations
The solution of the state-variable equations

v[n + 1] = A v[n] + b x[n],    y[n] = cᵀ v[n] + d x[n],

can be obtained recursively:

v[1] = A v[0] + b x[0],
v[2] = A v[1] + b x[1] = A² v[0] + A b x[0] + b x[1], …,

where A⁰ = I, the identity matrix. The complete solution is then obtained,

v[n] = Aⁿ v[0] + Σ_{m=0}^{n−1} A^{n−1−m} b x[m].

(10.46)
By definition of the state variables in Equation (10.43) the initial conditions of the state variables coincide with the initial conditions of the system,
(10.47)
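The closed-form solution (10.46) can be checked against the recursion numerically. A minimal sketch, with arbitrary illustrative A, b, initial state, and input:

```python
import numpy as np

# Sketch of Eq. (10.46): the recursive solution of v[n] = A v[n-1] + b x[n-1]
# matches the closed form v[n] = A^n v[0] + sum_{m=0}^{n-1} A^{n-1-m} b x[m].
# A, b, v0, and the input below are arbitrary illustrative values.
A = np.array([[0.0, 1.0], [-1/3, 1.0]])
b = np.array([0.0, 1/3])
v0 = np.array([1.0, -2.0])
x = np.arange(10.0)

# recursion up to n = 10
v = v0.copy()
for n in range(1, 11):
    v = A @ v + b * x[n-1]

# closed form evaluated at n = 10
N = 10
v_closed = np.linalg.matrix_power(A, N) @ v0 + sum(
    np.linalg.matrix_power(A, N - 1 - m) @ (b * x[m]) for m in range(N))

print(np.allclose(v, v_closed))
```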
Using the Z-transform we can obtain a closed-form solution to the state and output equations. Indeed, if the state and output equations are in their matrix form

v[n + 1] = A v[n] + b x[n],    y[n] = cᵀ v[n] + d x[n],

(10.48)

calling the Z-transforms of the state variables Vᵢ(z), i = 1, …, N, collected in the vector V(z); of the input X(z); and of the output Y(z), we have the following matrix expression for the Z-transforms of the state equations in (10.48):

z(V(z) − v(0)) = A V(z) + b X(z),

using the Z-transform of v[n + 1]; v(0) is the vector of initial conditions of the state variables and I the unit matrix. Assuming that the inverse of (zI − A) exists, i.e., det(zI − A) ≠ 0, we can solve for V(z):

V(z) = (zI − A)⁻¹[z v(0) + b X(z)] = adj(zI − A)[z v(0) + b X(z)]/det(zI − A),

(10.49)

by expressing the inverse of a matrix in terms of its adjoint and determinant. We can then obtain the Z-transform of the output as

Y(z) = cᵀ V(z) + d X(z).

(10.50)

If the initial conditions are zero, v(0) = 0, and the input is x[n] with Z-transform X(z), then we find that the transfer function is given by

H(z) = Y(z)/X(z) = cᵀ(zI − A)⁻¹ b + d.

(10.51)
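Formula (10.51) can be evaluated numerically with scipy.signal's ss2tf. A hedged sketch: the matrices below are the ones assumed here for the Example 10.27 system (derived from Fig. 10.14, not stated in the text), for which H(z) = (1/3)z²/(z² − z + 1/3) is expected.

```python
import numpy as np
from scipy.signal import ss2tf

# Sketch of Eq. (10.51), H(z) = c^T (zI - A)^{-1} b + d, using the matrices
# assumed for Example 10.27 (derived from Fig. 10.14, an assumption here).
A = np.array([[0.0, 1.0], [-1/3, 1.0]])
b = np.array([[0.0], [1/3]])
c = np.array([[-1/3, 1.0]])
d = np.array([[1/3]])

num, den = ss2tf(A, b, c, d)
print(np.round(num, 4))   # numerator coefficients: (1/3) z^2
print(np.round(den, 4))   # denominator: z^2 - z + 1/3
```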
Example 10.29
Consider the state-variable representation of the system in Example 10.27, with matrices

A = [0 1; −1/3 1],    b = [0; 1/3],    cᵀ = [−1/3 1],    d = 1/3.
Determine the transfer function of the system.
Solution: Instead of finding the inverse we can use Cramer's rule to find V(z). Indeed, writing the state equations with zero initial conditions,

z V₁(z) = V₂(z),
z V₂(z) = −(1/3) V₁(z) + V₂(z) + (1/3) X(z),

according to Cramer's rule we have

V₁(z) = (1/3) X(z)/(z² − z + 1/3),    V₂(z) = (1/3) z X(z)/(z² − z + 1/3),

and the Z-transform of the output is obtained after replacing the Z-transforms of the state variables:

Y(z) = −(1/3) V₁(z) + V₂(z) + (1/3) X(z) = (1/3) z² X(z)/(z² − z + 1/3).
Example 10.30
The transfer function of an LTI discrete-time system is

H(z) = z⁻²/(1 − 0.5 z⁻¹ + 0.5 z⁻²).
Obtain a minimal state-variable realization (i.e., a realization that only uses two delays, corresponding to the second-order system). Determine the initial conditions of the state variables in terms of initial values of the output.
Solution: The given transfer function is not a "constant-numerator" transfer function, and the numerator indicates that the system has delayed inputs. Letting x[n] be the input and y[n] the output, with Z-transforms X(z) and Y(z), respectively, we factor H(z) = Y(z)/X(z) as follows:
so that we have the following equations:
The corresponding block diagram in Fig. 10.16 is a minimal realization as it only uses two delays. As shown in the figure, the state variables are defined as
Figure 10.16. Minimal realization of a system with transfer function H(z)=z −2/(1 − 0.5z −1 + 0.5z −2).
The state and output equations are in matrix form
Expressing the state variables in terms of the output:
we then have the initial conditions of the state variables in terms of y[−1] and y[−2]. The reason for these initial conditions is that in the difference equation

y[n] − 0.5 y[n − 1] + 0.5 y[n − 2] = x[n − 2],

the input is delayed two samples, and so the initial conditions should be y[−1] and y[−2]. □
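The difference equation implied by H(z) = z⁻²/(1 − 0.5z⁻¹ + 0.5z⁻²) (the Fig. 10.16 transfer function) can be checked against a standard filter routine. A minimal sketch, assuming zero initial conditions and an impulse input:

```python
import numpy as np
from scipy.signal import lfilter

# Checking the difference equation implied by
# H(z) = z^{-2} / (1 - 0.5 z^{-1} + 0.5 z^{-2}) (Fig. 10.16):
# y[n] = 0.5 y[n-1] - 0.5 y[n-2] + x[n-2], zero initial conditions.
x = np.zeros(15)
x[0] = 1.0                                # impulse input
y = np.zeros(15)
for n in range(15):
    y[n] = (0.5 * (y[n-1] if n >= 1 else 0)
            - 0.5 * (y[n-2] if n >= 2 else 0)
            + (x[n-2] if n >= 2 else 0))

# same system realized directly from its transfer-function coefficients
print(np.allclose(y, lfilter([0, 0, 1], [1, -0.5, 0.5], x)))
```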
Example 10.31
Consider the state-variable representation of a system having the matrices
Determine the zero-input response of the system for arbitrary initial conditions .
Solution: The zero-input response in the time domain is

y[n] = cᵀ v[n] = cᵀ Aⁿ v(0),

requiring one to compute the matrix Aⁿ. To do so, consider

(I − A z⁻¹)⁻¹ = Σ_{n=0}^{∞} Aⁿ z⁻ⁿ,

which can be verified by pre-multiplying the summation by (I − A z⁻¹), giving

(I − A z⁻¹) Σ_{n=0}^{∞} Aⁿ z⁻ⁿ = Σ_{n=0}^{∞} Aⁿ z⁻ⁿ − Σ_{n=1}^{∞} Aⁿ z⁻ⁿ = A⁰ = I,

or that the infinite summation is the inverse of (I − A z⁻¹). In positive powers of z, the Z-transform of Aⁿ is z(zI − A)⁻¹. Now, using the fact that for a 2 × 2 matrix

M = [a b; c d],    M⁻¹ = (1/(ad − bc)) [d −b; −c a],

provided that the determinant of the matrix, ad − bc ≠ 0, we then have
We then need to determine the inverse Z-transform of the four entries to find Aⁿ. If we let
then
which is used in finding the zero-input response
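The geometric-series identity used above can be confirmed numerically. A sketch with an arbitrary stable illustrative matrix (the example's own matrices are not reproduced in this excerpt):

```python
import numpy as np

# Numerical check of the identity sum_{n>=0} A^n z^{-n} = (I - A z^{-1})^{-1},
# valid when the eigenvalues of A/z lie inside the unit circle.
# A is an arbitrary stable illustrative matrix, not the example's.
A = np.array([[0.5, 0.25], [-0.25, 0.5]])
z = 2.0

partial = sum(np.linalg.matrix_power(A, n) / z**n for n in range(200))
inverse = np.linalg.inv(np.eye(2) - A / z)

print(np.allclose(partial, inverse))
```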
Canonical Realizations
Just like in the continuous case, discrete-time state-variable realizations have different canonical forms. In this section we will illustrate the process for obtaining a parallel realization to demonstrate the similarity.
The state-variable parallel realization is obtained by realizing each of the partial fraction expansion components of a transfer function. Consider the case of simple poles pₖ of the system, so that the transfer function is

H(z) = Y(z)/X(z) = Σ_{k=1}^{N} rₖ/(1 − pₖ z⁻¹),

so that

Y(z) = Σ_{k=1}^{N} Yₖ(z),    Yₖ(z) = [rₖ/(1 − pₖ z⁻¹)] X(z).

The state variable for each component, k = 1, 2, …, N, would be vₖ[n], satisfying

vₖ[n + 1] = pₖ vₖ[n] + x[n],    yₖ[n] = rₖ pₖ vₖ[n] + rₖ x[n].

For the whole system we then have

v[n + 1] = diag(p₁, …, p_N) v[n] + [1 1 … 1]ᵀ x[n],

and the output is

y[n] = Σ_{k=1}^{N} yₖ[n],

or in matrix form

y[n] = [r₁p₁ r₂p₂ … r_N p_N] v[n] + (r₁ + ⋯ + r_N) x[n].
In general, whenever the transfer function is factored in terms of first- and second-order systems, with real coefficients, a parallel realization is obtained by doing a partial fraction expansion in terms of first- and second-order components and realizing each of these components. The following example illustrates the procedure.
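The parallel decomposition can be sketched with scipy.signal's residuez, which performs the partial fraction expansion in powers of z⁻¹. The numeric H(z) below is an arbitrary illustrative example with simple real poles, not one of the chapter's systems:

```python
import numpy as np
from scipy.signal import residuez, lfilter

# Parallel realization sketch: expand H(z) into first-order sections
# r_k / (1 - p_k z^{-1}) with residuez, realize each section separately,
# and check that the summed outputs match the direct filter.
b, a = [1.0, 0.5], [1.0, -0.9, 0.2]           # illustrative H(z), simple poles
r, p, k = residuez(b, a)                      # residues, poles, direct terms

x = np.zeros(30)
x[0] = 1.0                                    # impulse input
y_direct = lfilter(b, a, x)
y_parallel = sum(lfilter([rk], [1.0, -pk], x) for rk, pk in zip(r, p))
if len(k):                                    # add any direct (FIR) term
    y_parallel = y_parallel + lfilter(k, [1.0], x)

print(np.allclose(y_direct, y_parallel))
```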
Example 10.32
Obtain a parallel state-variable realization from the transfer function

H(z) = z³/((z + 0.5)((z − 0.5)² + 0.25)).
Show a block realization of the corresponding state and output equations.
Solution: Dividing numerator and denominator by z³, we obtain a partial fraction expansion of the form

H(z) = A/(1 + 0.5 z⁻¹) + (B + C z⁻¹)/(1 − z⁻¹ + 0.5 z⁻²),

where matching numerator coefficients gives A = 0.2, B = 0.8, and C = −0.2. This allows us to write the output as

Y(z) = Y₁(z) + Y₂(z) = [0.2/(1 + 0.5 z⁻¹)] X(z) + [(0.8 − 0.2 z⁻¹)/(1 − z⁻¹ + 0.5 z⁻²)] X(z),

or two difference equations
for which we need to obtain state variables. Minimal realizations can be obtained as follows. The first system is a "constant-numerator" system, in terms of negative powers of z, and a minimal realization is obtained directly. The transfer function of the second difference equation can be written
from which we obtain the equations
Notice that the realization of these equations requires only two delays. If we let
we get
so that
The following script shows how to obtain the state-variable representation for the two components of H(z) using the function tf2ss. The second part of the script shows that by using the function ss2tf we can get back the two components of H(z). Notice that these are the same functions used in the continuous case. Fig. 10.17 shows the block diagram of the parallel realization. □
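The script itself is not reproduced in this excerpt; the same round trip can be sketched in Python, where scipy.signal provides functions with the same names, tf2ss and ss2tf. This sketch applies them to the full H(z) = z³/((z + 0.5)((z − 0.5)² + 0.25)) rather than to its two components:

```python
import numpy as np
from scipy.signal import tf2ss, ss2tf

# Sketch of the tf2ss/ss2tf round trip for
# H(z) = z^3 / ((z + 0.5)((z - 0.5)^2 + 0.25)).
num = [1.0, 0.0, 0.0, 0.0]
den = np.polymul([1.0, 0.5], [1.0, -1.0, 0.5])   # (z + 0.5)(z^2 - z + 0.5)

A, B, C, D = tf2ss(num, den)                     # to state-variable form
num2, den2 = ss2tf(A, B, C, D)                   # ... and back

print(np.allclose(num2[0], num), np.allclose(den2, den))
```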
Figure 10.17. Block diagram of a parallel state-variable realization of the system with H(z)=z 3/(z + 0.5)[(z−0.5)2 + 0.25]. Notice this is a minimum realization with three delays corresponding to the third-order system.
URL:
https://www.sciencedirect.com/science/article/pii/B9780128142042000211
Impulsive MPC schemes for biomedical processes: Application to type 1 diabetes
Pablo S. Rivadeneira , ... A.H. González , in Control Applications for Biomedical Engineering Systems, 2020
2 Dynamic systems with short-duration inputs
In many biomedical applications, the medication is administered to the patient in a discrete way, in the form of small pulses or impulses. For instance, in the AP problem, the insulin can be injected into the patient in both ways.
If a fixed time period is assumed, then it can be argued that an insulin dose is administered by a zero-order hold, which maintains its rate throughout the period and changes value only at the beginning of a new period. This is in fact what is done in most approaches, although this modeling of the input is not realistic, since insulin pumps are not able to do that. Actually, insulin doses are administered by pulses that cover only a short interval of each period. If these pulses are extremely short in comparison with the period, they approach impulses (i.e., pulses of infinitesimal duration). The latter scheme works, for instance, to represent diseases that are controlled by taking pills once or twice per day. However, in the AP, it is not so clear whether the (probably fast) insulin injection is short enough to be well represented by the impulsive approach. In any case, a general pulse representation that accounts for both the zero-order-hold input and the impulsive input as particular cases is desirable.
Consider the following affine continuous-time system:
(1)
where x represents the state, u the control action, y the output of the system, and r the disturbances, with A, B_u, C, and B_r being the corresponding matrices, respectively, and E being a constant term. If the drug infusion u is injected into the system only at certain time instants given by kT (and quickly enough), where T is the fixed period, then it is possible to work under the impulsive system framework recently developed by Rivadeneira et al. (2018) (see Fig. 2 for a schematic plot). Formally, assume that the input is given by

u(t) = Σₖ u(kT) δ(t − kT),
Fig. 2. Impulsive scheme.
where δ(t) is a generalized function (or distribution), the Dirac delta, which fulfills δ(0) = ∞, δ(t) = 0 for any t ≠ 0, and

∫_{−∞}^{∞} δ(t) g(t) dt = g(0)

for all continuous, compactly supported functions g.
The solution of Eq. (1) over each period T can be divided into two parts: the first describing the system over [kT, kT + ΔT], and the second describing the system over (kT + ΔT, (k + 1)T), for an arbitrarily small positive Δ, 0 < Δ < 1:

for t ∈ [kT, kT + ΔT], and

for t ∈ (kT + ΔT, (k + 1)T).
Now, if we consider the limit of this solution as Δ → 0, it follows that
(2)
where , and
for t ∈ (kT, (k + 1)T). This latter solution is the one corresponding to the continuous-time system
(3)
for the aforementioned period of time.
This way, merging Eqs. (2), (3) into a single model, the following impulsive system can be obtained:
(4)
where x(kT⁺) denotes the limit of x(t) as t approaches kT from the right. In Eq. (4), the latter equation describes the discontinuities that the impulsive input causes in the system, while the former describes the free response (affected only by the disturbance r) between the discontinuities.
Remark 1
Note that the affine impulsive system (4) has no formal equilibrium points. In fact, there is no triplet fulfilling the conditions:
(5)
(6)
because the first equation has no solution for a vanishing disturbance. An interpretation of this situation is that it is not possible to maintain the system at a given point by applying only impulsive inputs, even when the input is null. However, as will be shown in the following section, it is possible to maintain the system (switching) inside a given region. The conditions for that are: (i) to find states before and after the discontinuity that, although different, remain constant, and (ii) to ensure that the transient state trajectories between these states remain inside a given set.
2.1 Underlying discrete-time subsystem
A natural way to obtain a discretization (discrete-time system) of the continuous-time impulsive system (4) is by sampling it with a sampling time given by the period of the impulses, T. According to the results presented by Rivadeneira et al. (2015, 2018), the idea is to characterize two discrete-time subsystems describing the evolution of the states x(kT) and x(kT⁺); that is, the evolution of the states just before and after the discontinuities, respectively. If the disturbance r is assumed to be piecewise constant, that is, r(t) = r(kT) for t ∈ [kT, (k + 1)T), the subsystems are as follows:
(7)
(8)
where , , A • = A° = e AT , , and . Furthermore, the input and disturbance of each subsystem are such that u°(k + 1) = u •(k) = u(kT), r°(k) = r •(k) = r(kT), and , .
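The propagation of the sampled states can be sketched numerically. This is a hedged illustration, not the chapter's model: A, B_u, T, and the impulse sizes below are arbitrary, and the only structural facts used are the jump x(kT⁺) = x(kT) + B_u u(kT) and the free response x((k + 1)T) = e^{AT} x(kT⁺), taking r = 0.

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch of the underlying discrete-time subsystem: between impulses
# the state evolves as x(t) = e^{A(t - kT)} x(kT+), and each impulse of
# "size" u(kT) produces the jump x(kT+) = x(kT) + B_u u(kT).  All numeric
# values are arbitrary illustrative choices, not the chapter's model.
A  = np.array([[-0.1, 0.05], [0.0, -0.2]])
Bu = np.array([1.0, 0.0])
T  = 5.0
Ad = expm(A * T)                      # A_bullet = A_circ = e^{AT}

x = np.array([2.0, 1.0])              # x(0)
u = [0.5, 0.0, 0.3]                   # impulse "sizes" at t = 0, T, 2T
for uk in u:
    x_plus = x + Bu * uk              # jump caused by the impulse
    x = Ad @ x_plus                   # free response over one period

print(x)                              # x(3T)
```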
These two subsystems are not only useful to describe the evolution of the impulsive system (4) at the sampling times kT and kT +—which is necessary to implement an MPC—but also to characterize the equilibrium regions of Eq. (4) accounting for both the discontinuities and the free responses between them.
2.2 Extended equilibrium of the impulsive system
The idea now is to find an equilibrium set for Eq. (4), based on the underlying discrete-time subsystems (7), (8). To do that, we first define the equilibrium pairs of Eqs. (7), (8) as the ones fulfilling the conditions:
(9)
Given that u°(k + 1) = u•(k) by definition (and E° = E•), the two subsystems share the same input at steady state. This means that a single equilibrium input has to fulfill both equilibrium equations, that is, the last equation can be written as
(10)
This way, given that Eqs. (7), (8) are subsystems describing one single impulsive system, we need to find triplets fulfilling both Eqs. (9), (10). Furthermore, a final condition that the equilibrium triplet must fulfill to be a feasible extended equilibrium is that the free responses corresponding to it (orbits) must be feasible:
where
and (i.e., φ(⋅) is the solution of the system (3), for an impulsive input of the corresponding equilibrium value and considering no disturbances).
Summarizing, we can characterize the extended equilibrium set as:
(11)
Furthermore, the set of equilibrium inputs—that is, the set of inputs for which an equilibrium state exists—is also defined. Note that this latter impulsive equilibrium is different from the input equilibrium defined in Section 4.2. For details on the procedure to compute such equilibrium sets in a general context, see Rivadeneira et al. (2018).
2.3 Pulse input scheme
Here, a pulse input scheme (i.e., when the inputs are pulses of a given duration, instead of impulses) is introduced to be consistent with the real application of the control action that an actuator can supply in some biomedical cases in which impulsive inputs are not allowed. See Fig. 3 for a schematic plot of the input pulses and state responses. To properly represent model (1) under the pulse scheme, some modifications need to be made on the underlying subsystems of Section 2.1. Specifically, the input matrix must be redefined as
Fig. 3. Pulse scheme.
where ΔT now represents the input pulse duration. Furthermore, to characterize the extended equilibrium, the matrix takes the form
This way, the extended equilibrium sets are different from the ones obtained for the impulsive case. Clearly, the latter representation is a generalization of the former: the impulsive representation is the particular case of the pulse representation obtained when the pulse duration tends to zero.
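This limit can be checked numerically. A hedged sketch, under assumptions (illustrative A and B_u, and a pulse carrying a fixed total dose): the pulse input matrix (1/ΔT)∫₀^{ΔT} e^{A(ΔT − s)} B_u ds tends to the impulsive input matrix B_u as ΔT → 0.

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch: A and B_u are arbitrary illustrative values, not the
# chapter's matrices.  For a pulse of duration dT delivering a fixed total
# dose, the effective input matrix is (1/dT) * int_0^{dT} e^{A(dT-s)} B_u ds,
# which for invertible A equals A^{-1} (e^{A dT} - I) B_u / dT.
A  = np.array([[-0.1, 0.05], [0.0, -0.2]])
Bu = np.array([1.0, 0.5])

def b_pulse(dT):
    return np.linalg.inv(A) @ (expm(A * dT) - np.eye(2)) @ Bu / dT

for dT in (1.0, 0.1, 0.001):
    # distance to the impulsive input matrix shrinks with the pulse duration
    print(dT, np.linalg.norm(b_pulse(dT) - Bu))
```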
URL:
https://www.sciencedirect.com/science/article/pii/B9780128174616000032
Source: https://www.sciencedirect.com/topics/engineering/continuous-time-system