4  Time response of state space models

Updated: June 3, 2025

4.1 Solution to vector differential equations

Recall that for a scalar differential equation (i.e., $x(t) \in \mathbb{R}$):
$$\dot{x}(t) = a x(t), \quad \text{initial condition } x(0) = x_0$$
has the solution $x(t) = e^{at} x_0$.

Can we say the same for vector systems? That is, can we define a matrix exponential $e^{At}$ in such a way that the vector differential equation (i.e., $x(t) \in \mathbb{R}^n$):
$$\dot{x}(t) = A x(t), \quad \text{initial condition } x(0) = x_0$$
where $A \in \mathbb{R}^{n \times n}$, has the solution $x(t) = e^{At} x_0$?

Mathematically, it turns out that this is straightforward. Recall that for a scalar $a$, we have
$$e^{at} = 1 + at + \frac{(at)^2}{2!} + \frac{(at)^3}{3!} + \cdots$$
So, a natural choice is to define the matrix exponential as
$$e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots$$
where the second equality uses the fact that $(At)^2 = At\,At = A^2 t^2$ because $t$ is a scalar.

With this definition, we have
$$\frac{d}{dt} e^{At} = A + A^2 t + \frac{A^3 t^2}{2!} + \cdots = A\left[I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots\right] = A e^{At}.$$

Thus, if we take the candidate solution $x(t) = e^{At} x_0$, we have that
$$\frac{d}{dt} x(t) = \left[\frac{d}{dt} e^{At}\right] x_0 = A e^{At} x_0 = A x(t).$$
Thus, our candidate solution satisfies the vector differential equation!

If we define $e^{At}$ as
$$e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots \tag{1}$$

Then, we can write the solution of the vector differential equation (i.e., $x(t) \in \mathbb{R}^n$)
$$\dot{x}(t) = A x(t), \quad \text{initial condition } x(0) = x_0$$
where $A \in \mathbb{R}^{n \times n}$, as $x(t) = e^{At} x_0$.
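As a sanity check on definition (1), here is a minimal Python sketch (not part of the original notes; it assumes numpy is available) that approximates $e^{At}$ by truncating the series after a fixed number of terms. In practice one would call a library routine such as scipy.linalg.expm instead.

```python
import numpy as np

def expm_series(A, t, n_terms=30):
    """Approximate e^{At} by truncating the power series (1) after n_terms terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ (A * t) / k   # term now holds (At)^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(expm_series(A, t=2.0))  # matches [[1, 2], [0, 1]], as Example 4.1 below predicts
```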

We now present some examples where the matrix exponential can be computed easily.

Example 4.1 Compute $e^{At}$ for
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$

Based on the solution, solve the vector differential equation
$$\dot{x}(t) = A x(t), \quad x_0 = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}.$$

We compute the terms of (1):

  • $A^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

which means that $A^k = 0$ for $k \geq 2$. Thus, the right-hand side of (1) contains only a finite number of nonzero terms:
$$e^{At} = I + At = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & t \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}.$$

Thus, the solution of the vector differential equation $\dot{x}(t) = A x(t)$ is given by
$$x(t) = e^{At} x_0 = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} = \begin{bmatrix} \alpha_1 + \alpha_2 t \\ \alpha_2 \end{bmatrix}.$$
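The closed form above is easy to verify numerically. A quick sketch (assuming scipy is installed) compares it against scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
t = 1.5
closed_form = np.array([[1.0, t], [0.0, 1.0]])  # e^{At} from Example 4.1
print(np.allclose(expm(A * t), closed_form))    # True
```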

Example 4.2 Compute $e^{At}$ for
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$

Based on the solution, solve the vector differential equation
$$\dot{x}(t) = A x(t), \quad x_0 = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}.$$

We compute the terms of (1):

  • $A^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.
  • $A^3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.

which means that $A^k = 0$ for $k \geq 3$. Thus, the right-hand side of (1) contains only a finite number of nonzero terms:
$$e^{At} = I + At + \tfrac{1}{2} A^2 t^2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & t & 0 \\ 0 & 0 & t \\ 0 & 0 & 0 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} 0 & 0 & t^2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & t & \tfrac{1}{2} t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}.$$

Thus, the solution of the vector differential equation $\dot{x}(t) = A x(t)$ is given by
$$x(t) = e^{At} x_0 = \begin{bmatrix} 1 & t & \tfrac{1}{2} t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \begin{bmatrix} \alpha_1 + \alpha_2 t + \tfrac{1}{2} \alpha_3 t^2 \\ \alpha_2 + \alpha_3 t \\ \alpha_3 \end{bmatrix}.$$

Example 4.3 Compute $e^{At}$ for
$$A = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}.$$

Based on the solution, solve the vector differential equation
$$\dot{x}(t) = A x(t), \quad x_0 = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}.$$

We compute the terms of (1):

  • $A^2 = \begin{bmatrix} \lambda_1^2 & 0 & 0 \\ 0 & \lambda_2^2 & 0 \\ 0 & 0 & \lambda_3^2 \end{bmatrix}$.
  • $A^3 = \begin{bmatrix} \lambda_1^3 & 0 & 0 \\ 0 & \lambda_2^3 & 0 \\ 0 & 0 & \lambda_3^3 \end{bmatrix}$.
  • and so on.

Thus, we have
$$e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} \lambda_1 t & 0 & 0 \\ 0 & \lambda_2 t & 0 \\ 0 & 0 & \lambda_3 t \end{bmatrix} + \frac{1}{2}\begin{bmatrix} \lambda_1^2 t^2 & 0 & 0 \\ 0 & \lambda_2^2 t^2 & 0 \\ 0 & 0 & \lambda_3^2 t^2 \end{bmatrix} + \cdots$$
$$= \begin{bmatrix} 1 + \lambda_1 t + \tfrac{1}{2}\lambda_1^2 t^2 + \cdots & 0 & 0 \\ 0 & 1 + \lambda_2 t + \tfrac{1}{2}\lambda_2^2 t^2 + \cdots & 0 \\ 0 & 0 & 1 + \lambda_3 t + \tfrac{1}{2}\lambda_3^2 t^2 + \cdots \end{bmatrix} = \begin{bmatrix} e^{\lambda_1 t} & 0 & 0 \\ 0 & e^{\lambda_2 t} & 0 \\ 0 & 0 & e^{\lambda_3 t} \end{bmatrix}$$

Thus, the solution of the vector differential equation $\dot{x}(t) = A x(t)$ is given by
$$x(t) = e^{At} x_0 = \begin{bmatrix} e^{\lambda_1 t} & 0 & 0 \\ 0 & e^{\lambda_2 t} & 0 \\ 0 & 0 & e^{\lambda_3 t} \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \begin{bmatrix} \alpha_1 e^{\lambda_1 t} \\ \alpha_2 e^{\lambda_2 t} \\ \alpha_3 e^{\lambda_3 t} \end{bmatrix}.$$

But outside of a few such special cases, computing $e^{At}$ directly from definition (1) is not computationally practical. We now present computationally efficient methods to compute the matrix exponential.

4.2 Computing matrix exponential

4.2.1 Method 1: Eigenvalue diagonalization method

As illustrated by Example 4.3, computing the matrix exponential is easy for a diagonal matrix. So, if the matrix $A$ is diagonalizable (which is guaranteed when it has distinct eigenvalues), we can do a change of coordinates and compute the matrix exponential in the eigen-coordinates.

In particular, suppose $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ and $v_1, \ldots, v_n$ are the corresponding eigenvectors. Thus, $A v_k = \lambda_k v_k$ for all $k \in \{1, \ldots, n\}$. Writing this in matrix form, we have
$$A \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} A v_1 & A v_2 & \cdots & A v_n \end{bmatrix} = \begin{bmatrix} \lambda_1 v_1 & \lambda_2 v_2 & \cdots & \lambda_n v_n \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
Now define
$$T = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \quad \text{and} \quad \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}.$$
So, the above equation can be written as $AT = T\Lambda$, or $T^{-1} A T = \Lambda$. Observe that

  • $A = T \Lambda T^{-1}$
  • $A^2 = T \Lambda^2 T^{-1}$
  • $A^3 = T \Lambda^3 T^{-1}$
  • and so on.

Therefore,
$$e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots = I + T \Lambda T^{-1} t + \frac{T \Lambda^2 T^{-1} t^2}{2!} + \frac{T \Lambda^3 T^{-1} t^3}{3!} + \cdots = T \left( I + \Lambda t + \frac{\Lambda^2 t^2}{2!} + \frac{\Lambda^3 t^3}{3!} + \cdots \right) T^{-1} = T e^{\Lambda t} T^{-1}$$

Thus, once we know the eigenvalues and eigenvectors of $A$ (and if all eigenvalues are distinct), then
$$e^{At} = T e^{\Lambda t} T^{-1}.$$
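This method translates directly into a few lines of numerical code. Below is a minimal Python sketch (an illustration, assuming numpy and scipy are available) that computes $e^{At} = T e^{\Lambda t} T^{-1}$; numpy.linalg.eig returns the eigenvalues together with a matrix whose columns are the eigenvectors.

```python
import numpy as np
from scipy.linalg import expm

def expm_via_eig(A, t):
    """Compute e^{At} = T e^{Lambda t} T^{-1}, assuming A is diagonalizable."""
    lam, T = np.linalg.eig(A)        # eigenvalues, and eigenvectors as columns of T
    eLt = np.diag(np.exp(lam * t))   # e^{Lambda t}: exponentiate each eigenvalue
    return T @ eLt @ np.linalg.inv(T)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # distinct eigenvalues -1 and -2
print(np.allclose(expm_via_eig(A, 1.0), expm(A * 1.0)))  # True
```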

Exercise 4.1 Use the eigenvalue diagonalization method to compute $e^{At}$ for
$$A = \begin{bmatrix} -6 & -4 \\ 5 & 3 \end{bmatrix}.$$

We start by computing the eigenvalues and eigenvectors of A.

To compute the eigenvalues:
$$sI - A = \begin{bmatrix} s+6 & 4 \\ -5 & s-3 \end{bmatrix}$$
Therefore, the characteristic equation is
$$\det(sI - A) = s^2 + 3s + 2 = (s+1)(s+2)$$
Hence, the eigenvalues of $A$ are $\lambda_1 = -1$ and $\lambda_2 = -2$.

We now compute the eigenvectors. Recall that for any eigenvalue $\lambda$, the eigenvector satisfies $(\lambda I - A) v = 0$. We start with $\lambda_1 = -1$. Then,
$$\begin{bmatrix} 5 & 4 \\ -5 & -4 \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = 0$$
We set $v_{11} = 4$. Then, $v_{12} = -5$. Thus, the eigenvector is $v_1 = \begin{bmatrix} 4 \\ -5 \end{bmatrix}$.

Similarly, for $\lambda_2 = -2$, we have
$$\begin{bmatrix} 4 & 4 \\ -5 & -5 \end{bmatrix} \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = 0$$
We set $v_{21} = 1$. Then, $v_{22} = -1$. Thus, the eigenvector is $v_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

Thus,
$$T = \begin{bmatrix} 4 & 1 \\ -5 & -1 \end{bmatrix}$$
and therefore
$$T^{-1} = \begin{bmatrix} -1 & -1 \\ 5 & 4 \end{bmatrix}.$$
Hence,
$$e^{At} = \begin{bmatrix} 4 & 1 \\ -5 & -1 \end{bmatrix} \begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{bmatrix} \begin{bmatrix} -1 & -1 \\ 5 & 4 \end{bmatrix} = \begin{bmatrix} -4e^{-t} + 5e^{-2t} & -4e^{-t} + 4e^{-2t} \\ 5e^{-t} - 5e^{-2t} & 5e^{-t} - 4e^{-2t} \end{bmatrix}$$
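As a numerical cross-check (a sketch, assuming scipy), the hand-computed factorization $T e^{\Lambda t} T^{-1}$ should agree with scipy.linalg.expm at any $t$:

```python
import numpy as np
from scipy.linalg import expm

# Hand-computed factors from Exercise 4.1
T = np.array([[4.0, 1.0], [-5.0, -1.0]])
lam = np.array([-1.0, -2.0])
A = np.array([[-6.0, -4.0], [5.0, 3.0]])

t = 0.7
eAt = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)
print(np.allclose(eAt, expm(A * t)))  # True: the factored form matches
```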

Exercise 4.2 Use the eigenvalue diagonalization method to compute $e^{At}$ for
$$A = \begin{bmatrix} 1 & 2 \\ -1 & 4 \end{bmatrix}.$$

We start by computing the eigenvalues and eigenvectors of A.

To compute the eigenvalues:
$$sI - A = \begin{bmatrix} s-1 & -2 \\ 1 & s-4 \end{bmatrix}$$
Therefore, the characteristic equation is
$$\det(sI - A) = s^2 - 5s + 6 = (s-2)(s-3).$$
Hence, the eigenvalues of $A$ are $\lambda_1 = 2$ and $\lambda_2 = 3$.

We now compute the eigenvectors. Recall that for any eigenvalue $\lambda$, the eigenvector satisfies $(\lambda I - A) v = 0$. We start with $\lambda_1 = 2$. Then,
$$\begin{bmatrix} 1 & -2 \\ 1 & -2 \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = 0$$
We set $v_{11} = 2$. Then, $v_{12} = 1$. Thus, the eigenvector is $v_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

Similarly, for $\lambda_2 = 3$, we have
$$\begin{bmatrix} 2 & -2 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = 0$$
We set $v_{21} = 1$. Then, $v_{22} = 1$. Thus, the eigenvector is $v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

Thus,
$$T = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$
and therefore
$$T^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}.$$
Hence,
$$e^{At} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 2e^{2t} - e^{3t} & -2e^{2t} + 2e^{3t} \\ e^{2t} - e^{3t} & -e^{2t} + 2e^{3t} \end{bmatrix}$$

4.2.2 Method 2: Laplace transform method

Recall that we have $\dot{x}(t) = A x(t)$. Taking (unilateral) Laplace transforms of both sides gives
$$s X(s) - x_0 = A X(s).$$
Rearranging terms, we get
$$(sI - A) X(s) = x_0 \implies X(s) = (sI - A)^{-1} x_0.$$
Taking inverse Laplace transforms, we get
$$x(t) = \mathcal{L}^{-1}\left((sI - A)^{-1}\right) x_0.$$
Comparing it with the solution obtained earlier, we get
$$e^{At} = \mathcal{L}^{-1}\left((sI - A)^{-1}\right).$$
In the above, we interpret the inverse Laplace transform of a matrix to mean the inverse Laplace transform of each entry.
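This entrywise recipe can be reproduced symbolically. The sketch below (an illustration, assuming sympy; behavior of inverse_laplace_transform can vary slightly across sympy versions) forms the resolvent $(sI - A)^{-1}$ and inverts it entry by entry, using the matrix from Example 4.1:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [0, 0]])   # the matrix from Example 4.1

resolvent = (s * sp.eye(2) - A).inv()   # (sI - A)^{-1} = [[1/s, 1/s**2], [0, 1/s]]
eAt = resolvent.applyfunc(
    lambda entry: sp.inverse_laplace_transform(entry, s, t) if entry != 0 else sp.S.Zero
)
sp.pprint(eAt)   # recovers [[1, t], [0, 1]], matching Example 4.1
```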

Exercise 4.3 Use the Laplace transform method to compute $e^{At}$ for
$$A = \begin{bmatrix} -6 & -4 \\ 5 & 3 \end{bmatrix}.$$

We first compute
$$sI - A = \begin{bmatrix} s+6 & 4 \\ -5 & s-3 \end{bmatrix}$$
Therefore, $\det(sI - A) = s^2 + 3s + 2 = (s+1)(s+2)$ and
$$(sI - A)^{-1} = \begin{bmatrix} \dfrac{s-3}{(s+1)(s+2)} & \dfrac{-4}{(s+1)(s+2)} \\ \dfrac{5}{(s+1)(s+2)} & \dfrac{s+6}{(s+1)(s+2)} \end{bmatrix}$$
We now do a partial fraction expansion of each term:
$$(sI - A)^{-1} = \begin{bmatrix} \dfrac{-4}{s+1} + \dfrac{5}{s+2} & \dfrac{-4}{s+1} + \dfrac{4}{s+2} \\ \dfrac{5}{s+1} - \dfrac{5}{s+2} & \dfrac{5}{s+1} - \dfrac{4}{s+2} \end{bmatrix}$$
Taking the inverse Laplace transform of each term, we get
$$e^{At} = \mathcal{L}^{-1}\left((sI - A)^{-1}\right) = \begin{bmatrix} -4e^{-t} + 5e^{-2t} & -4e^{-t} + 4e^{-2t} \\ 5e^{-t} - 5e^{-2t} & 5e^{-t} - 4e^{-2t} \end{bmatrix}$$

4.3 Internal stability

As stated at the beginning of the lecture, the matrix exponential allows us to solve the vector differential equation
$$\dot{x}(t) = A x(t), \quad \text{initial condition } x(0) = x_0$$
in the same manner as a scalar linear differential equation. The solution is given by $x(t) = e^{At} x_0$.

Suppose $A$ has distinct eigenvalues $(\lambda_1, \ldots, \lambda_n)$ with corresponding (linearly independent) eigenvectors $(v_1, \ldots, v_n)$.

Recall that for any eigenvector $v_j$, $A^k v_j = \lambda_j^k v_j$. Thus, if the system starts from the initial condition $x_0 = v_j$, then
$$x(t) = e^{At} x_0 = e^{At} v_j = \left[I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots\right] v_j = \left[v_j + \lambda_j t\, v_j + \frac{\lambda_j^2 t^2}{2!} v_j + \frac{\lambda_j^3 t^3}{3!} v_j + \cdots\right] = e^{\lambda_j t} v_j$$
Therefore, if we start along an eigenvector of $A$, the system trajectory $x(t)$ remains along that direction, with the length scaling as $e^{\lambda_j t}$.

In general, since the eigenvectors $(v_1, \ldots, v_n)$ are linearly independent, an arbitrary initial condition $x_0$ can be written as $x_0 = \alpha_1 v_1 + \cdots + \alpha_n v_n$. Therefore,
$$x(t) = e^{At} x_0 = e^{At} \left[\alpha_1 v_1 + \cdots + \alpha_n v_n\right] = \alpha_1 e^{At} v_1 + \cdots + \alpha_n e^{At} v_n = \alpha_1 e^{\lambda_1 t} v_1 + \cdots + \alpha_n e^{\lambda_n t} v_n$$

Thus, the response of the dynamical system $\dot{x}(t) = A x(t)$ is a combination of motions along the eigenvectors of the matrix $A$. Each eigendirection is called a mode of the system. A particular mode is excited by choosing an initial condition to have a component $\alpha_i$ along that eigendirection.
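To make the mode decomposition concrete, here is a small Python sketch (an illustration, not from the notes, assuming numpy) that expands an initial condition in the eigenvector basis by solving $T\alpha = x_0$ and then propagates each mode separately:

```python
import numpy as np

A = np.array([[-6.0, -4.0], [5.0, 3.0]])   # matrix from Exercise 4.1
x0 = np.array([1.0, 2.0])

lam, T = np.linalg.eig(A)          # modes: eigenvalues and eigenvectors
alpha = np.linalg.solve(T, x0)     # coordinates of x0 in the eigenvector basis

t = 0.5
# x(t) = sum_j alpha_j * e^{lambda_j t} * v_j
x_t = sum(alpha[j] * np.exp(lam[j] * t) * T[:, j] for j in range(len(lam)))
print(x_t)
```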

An implication of the above is the following: the state $x(t) \to 0$ as $t \to \infty$ for all initial states $x_0$ if and only if all eigenvalues of $A$ lie in the open left half-plane.

An SSM where all eigenvalues of the $A$ matrix lie in the open left half-plane is called internally stable.
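Checking internal stability numerically is then a one-liner. The sketch below (assuming numpy) tests whether every eigenvalue has a strictly negative real part:

```python
import numpy as np

def is_internally_stable(A):
    """True if all eigenvalues of A lie in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_internally_stable(np.array([[-6.0, -4.0], [5.0, 3.0]])))  # True  (eigenvalues -1, -2)
print(is_internally_stable(np.array([[1.0, 2.0], [-1.0, 4.0]])))   # False (eigenvalues  2,  3)
```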

Internal stability vs BIBO stability

The TF of an SSM is given by
$$G(s) = C(sI - A)^{-1} B = \frac{C \operatorname{adj}(sI - A) B}{\det(sI - A)}.$$
The elements of the adjugate of $(sI - A)$ are all polynomials in $s$. Thus, the numerator is a polynomial in $s$, and so is the denominator. Moreover, the denominator equals the characteristic polynomial of $A$; thus, the roots of the denominator are the eigenvalues of $A$.

The numerator and denominator may have common roots that cancel each other. So, in general, $\{\text{eigenvalues of } A\} \supseteq \{\text{poles of } G(s)\}$. Hence, if $A$ is internally stable (i.e., all its eigenvalues are in the open left half-plane), then $G(s)$ is BIBO stable (i.e., all its poles are in the open left half-plane).

However, the converse is not true: there may be pole-zero cancellations. For example, consider
$$A = \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}.$$
Then,
$$G(s) = \frac{s-2}{(s+1)(s-2)} = \frac{1}{s+1}.$$
Notice that the TF is BIBO stable but the SSM is not internally stable! Any initial condition that excites the mode corresponding to the eigenvalue $2$ will cause $x(t)$ to diverge to infinity.
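The cancellation is easy to reproduce symbolically. This sketch (assuming sympy) computes $G(s) = C(sI - A)^{-1}B$ and simplifies it:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-1, 0], [0, 2]])
B = sp.Matrix([[1], [0]])
C = sp.Matrix([[1, 0]])

G = (C * (s * sp.eye(2) - A).inv() * B)[0, 0]
print(sp.simplify(G))                      # 1/(s + 1): the unstable eigenvalue 2 cancels
print((s * sp.eye(2) - A).det().factor())  # (s - 2)*(s + 1): 2 is still an eigenvalue of A
```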

4.4 Time response of state space models

Now consider an SSM given by
$$\dot{x}(t) = A x(t) + B u(t)$$
$$y(t) = C x(t)$$

Suppose the system starts at $t = 0$ with an initial state $x(0) = x_0$ and we apply the input $u(t)$. How do we find the output?

Taking Laplace transforms of the SSM, we get:
$$s X(s) - x_0 = A X(s) + B U(s)$$
$$Y(s) = C X(s)$$
Solving for $X(s)$, we get
$$X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s).$$
Substituting in $Y(s) = C X(s)$, we get
$$Y(s) = \underbrace{C(sI - A)^{-1} x_0}_{\text{zero-input response}} + \underbrace{C(sI - A)^{-1} B U(s)}_{\text{zero-state response}}$$
We can then use the above expression to compute $y(t)$ by taking the inverse Laplace transform.

It is sometimes useful to write the expression in the time domain (but we will not use this expression for computations). To do so, recall that $C(sI - A)^{-1} B$ is the transfer function $G(s)$ of the system. Therefore, its inverse Laplace transform is the impulse response: $g(t) = C e^{At} B$. Then, we can compute the inverse Laplace transform of $G(s) U(s)$ using the convolution formula and write
$$y(t) = \underbrace{C e^{At} x_0}_{\text{zero-input response}} + \underbrace{\int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau}_{\text{zero-state response}}$$
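For numerical work, both terms can be obtained at once with an LTI simulator. The following sketch (an illustration, using scipy.signal.lsim with an arbitrarily chosen step input, B, C, and initial state) computes $y(t)$ for the Exercise 4.1 matrix:

```python
import numpy as np
from scipy.signal import lsim

A = np.array([[-6.0, -4.0], [5.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

T = np.linspace(0.0, 5.0, 200)
u = np.ones_like(T)            # unit step input (illustrative choice)
x0 = np.array([1.0, -1.0])     # nonzero initial state

tout, y, x = lsim((A, B, C, D), U=u, T=T, X0=x0)
print(y[:5])  # output = zero-input response + zero-state response
```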
